MPI, the programming model du jour for HPC compute clusters, has well-known limitations in scalability and fault tolerance. With these requirements in mind, researchers at the Fraunhofer Institute have developed GPI, a new programming interface that exploits the parallel architecture of high-performance computers with maximum efficiency.
“I was trying to solve a calculation and simulation problem related to seismic data,” says Dr. Carsten Lojewski from the Fraunhofer Institute for Industrial Mathematics ITWM. “But existing methods weren’t working. The problems were a lack of scalability, the restriction to bulk-synchronous, two-sided communication, and the lack of fault tolerance. So out of my own curiosity I began to develop a new programming model.”
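The distinction Lojewski draws can be sketched in a few lines. The following is a conceptual illustration only, not GPI's actual API: in the two-sided model, sender and receiver must both participate in every transfer, while in a one-sided model the writer deposits data directly into a remotely accessible segment and posts a notification, which the target checks whenever convenient. The `Segment` class and `one_sided_put` function below are hypothetical stand-ins for that idea.

```python
import threading

class Segment:
    """Stand-in for a remotely accessible memory segment (illustrative only)."""
    def __init__(self, size):
        self.mem = [0] * size
        self.notify = threading.Event()

# --- Two-sided: both parties must take part, and the receiver blocks. ---
def two_sided_recv(channel, arrived):
    arrived.wait()               # receiver is stuck until a matching send
    return channel[0]

channel, arrived = [], threading.Event()
t = threading.Thread(target=two_sided_recv, args=(channel, arrived))
t.start()
channel.append(42)               # "send" half of the rendezvous
arrived.set()
t.join()

# --- One-sided: the writer acts alone; the target polls a notification. ---
def one_sided_put(segment, offset, value):
    segment.mem[offset] = value  # direct write, no receive call required
    segment.notify.set()         # lightweight completion notification

seg = Segment(8)
one_sided_put(seg, 3, 42)
seg.notify.wait()
print(seg.mem[3])
```

The one-sided style removes the implicit synchronization between communication partners, which is the property that lets a model like GPI overlap communication with computation instead of stalling at bulk-synchronous phases.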
Read the Full Story.
- Webcast: Case Studies in Asynchronous, Message-Driven Shared-Memory Programming, Feb. 24
- What is data-parallel programming?
- Parallel programming isn’t ever going to be easy
The post Is GPI the Programming Tool for the Future of HPC? appeared first on insideHPC.