
Is GPI the Programming Tool for the Future of HPC?

As the programming model du jour for HPC compute clusters, MPI has well-known limitations in terms of scalability and fault tolerance. With these requirements in mind, researchers at the Fraunhofer Institute have developed a new programming interface called GPI that exploits the parallel architecture of high-performance computers with maximum efficiency. “I was trying to solve a calculation and simulation problem related to seismic data,” says Dr. Carsten Lojewski from the Fraunhofer Institute for Industrial Mathematics ITWM.
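GPI's central idea is asynchronous, one-sided communication: a process deposits data directly into a partner's memory segment and raises a notification, and the partner never posts a matching receive. The toy sketch below illustrates that put-with-notification pattern in a single Python process with threads; it is an illustration of the pattern only, not GPI's actual API, which operates across nodes over RDMA-capable interconnects.

```python
# Toy illustration of one-sided "write with notification", the core
# primitive behind PGAS-style models like GPI. The writer deposits a
# payload into the target's segment and raises a flag; the target only
# polls for the notification -- it never calls a receive.
# (Illustrative pattern only; function names are invented, not GPI's.)
import struct
import threading
import time

SEGMENT = bytearray(16)  # stands in for a registered memory segment
# layout: 8-byte notification flag at offset 0, 8-byte payload at offset 8

def write_notify(segment, value):
    """One-sided put: write the payload, then raise the notification."""
    struct.pack_into("d", segment, 8, value)
    struct.pack_into("q", segment, 0, 1)

def wait_notify(segment, timeout=5.0):
    """The target polls its own segment instead of posting a receive."""
    deadline = time.monotonic() + timeout
    while struct.unpack_from("q", segment, 0)[0] == 0:
        if time.monotonic() > deadline:
            raise TimeoutError("no notification arrived")
        time.sleep(0.001)
    return struct.unpack_from("d", segment, 8)[0]

if __name__ == "__main__":
    t = threading.Thread(target=write_notify, args=(SEGMENT, 42.0))
    t.start()
    print(wait_notify(SEGMENT))  # -> 42.0
    t.join()
```

Because the put completes without any action by the receiver, communication can overlap fully with computation, which is where the claimed scalability advantage over two-sided MPI send/receive comes from.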


Video: WSJ Looks at Why China Wants to be Number 1 in Supercomputing

In this video, Bob Davis from the Wall Street Journal tells Mariko Sanchanta why China believes it’s so important to be number one in supercomputing. The story stems from recent reports that the Tianhe-2 supercomputer will soon be crowned the fastest machine on Earth in the next TOP500...


Tutorial on Scaling to Petaflops with Intel Xeon Phi

Over at Dr. Dobbs, Rob Farber has posted a tutorial on using MPI to tie together thousands of Intel Xeon Phi coprocessors. Running his MPI code example on the Stampede supercomputer at TACC, Farber observed scaling to 3000 nodes and a remarkable 2.2 Petaflops of performance. This article demonstrates how to utilize Intel Xeon Phi coprocessors to evaluate a single objective function across a computational cluster using...
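The pattern Farber's tutorial describes, scattering data across ranks, evaluating one objective function in parallel, and reducing the partial results, can be sketched in miniature with Python's standard library in place of MPI. This is a toy stand-in for illustration; the article itself partitions the work across MPI ranks running on Xeon Phi nodes, and the function and names below are invented.

```python
# Scatter/evaluate/reduce sketch of evaluating a single objective
# function across many workers, using multiprocessing instead of MPI.
# (Illustrative only -- the real tutorial distributes over MPI ranks.)
from multiprocessing import Pool

def partial_objective(args):
    """Each worker evaluates the objective over its slice of the data."""
    params, data_slice = args
    a, b = params
    # e.g. sum of squared residuals for a linear model y = a*x + b
    return sum((y - (a * x + b)) ** 2 for x, y in data_slice)

def objective(params, data, nworkers=4):
    """Scatter slices, evaluate in parallel, reduce the partial sums."""
    chunk = max(1, len(data) // nworkers)
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with Pool(nworkers) as pool:
        partials = pool.map(partial_objective,
                            [(params, s) for s in slices])
    return sum(partials)

if __name__ == "__main__":
    data = [(x, 2.0 * x + 1.0) for x in range(1000)]  # exact fit: a=2, b=1
    print(objective((2.0, 1.0), data))  # -> 0.0
```

Because each worker touches only its own slice and the reduction is a single sum, the same structure scales to thousands of ranks; at Stampede's scale, the coprocessor does the inner evaluation while MPI handles the scatter and reduce.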


Jeff Layton on Getting Started with HPC Clusters

Over at Admin Magazine, Dell’s Jeff Layton has written a wide-ranging primer on Getting Started with HPC Clusters. And while he covers the bases with traditional Beowulf topics, Layton says that using virtualization is a great path to understanding. “One of the quickest and easiest ways to really get started is to use virtualization, which is sort of the...


Smidge Enables Parallel Processing Just by Pointing to a Web Site

Imagine a distributed supercomputer code that doesn’t need to be installed, but rather lets you pool the processing power of potentially thousands of machines just by pointing them to a single website. Called Smidge, the code is a kind of ad hoc supercomputer built with JavaScript, the standard programming language of the web. “We were able to scale it across every device...
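The general coordination pattern behind such a system, a server that hands out work units over HTTP to whatever clients show up and collects their answers, can be sketched with Python's standard library. Smidge itself is JavaScript running in the browser, and its actual protocol isn't shown in the article; everything below (endpoint layout, message fields) is invented for illustration.

```python
# Toy sketch of the "point any device at a URL" work-pool idea: a
# coordinator serves work units over HTTP and collects results from
# arbitrary clients. (Illustrative only -- not Smidge's protocol.)
import json
import queue
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

tasks = queue.Queue()
results = {}

class Coordinator(BaseHTTPRequestHandler):
    def do_GET(self):                      # a client asks for a work unit
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            task = None                    # nothing left to hand out
        body = json.dumps(task).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):                     # a client reports a result
        n = int(self.headers["Content-Length"])
        msg = json.loads(self.rfile.read(n))
        results[msg["id"]] = msg["value"]
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):          # keep the demo quiet
        pass

def serve():
    """Start the coordinator on a free local port, in the background."""
    srv = ThreadingHTTPServer(("127.0.0.1", 0), Coordinator)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

if __name__ == "__main__":
    from urllib.request import urlopen, Request
    for i in range(4):
        tasks.put({"id": i, "payload": i})
    srv = serve()
    url = f"http://127.0.0.1:{srv.server_port}/"
    # A "browser" here is just any HTTP client that fetches a task,
    # computes, and posts the answer back.
    while True:
        task = json.loads(urlopen(url).read())
        if task is None:
            break
        answer = task["payload"] ** 2      # the distributed computation
        urlopen(Request(url, data=json.dumps(
            {"id": task["id"], "value": answer}).encode()))
    print(results)  # -> {0: 0, 1: 1, 2: 4, 3: 9}
    srv.shutdown()
```

The appeal of the browser-based version is that the client side needs no install step at all: loading the page delivers the worker code, and every visiting device immediately joins the pool.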


