Linux.com

Pedraforca Cluster to be First to Combine ARM CPUs, GPUs, and InfiniBand

Today the Barcelona Supercomputing Center (BSC) announced plans to deploy its next-generation ARM prototype cluster for HPC in July. Powered by ARM Cortex-A9 CPUs, Nvidia Tesla K20 GPUs, and Mellanox QDR InfiniBand, the hybrid supercomputer will be named Pedraforca. By using InfiniBand, Pedraforca enables direct GPU-to-GPU communication through RDMA on ARM. It features a low-power Nvidia Tegra 3 (quad-core Cortex-A9) to run the operating system and to drive both the Tesla K20 accelerator and the QDR InfiniBand adapter at minimal power consumption.


IBM Packs 128TB of Flash Into Brain-Simulating Supercomputer

To accommodate all the data needed to model the 70 million neurons that make up a mouse brain, Big Blue is using scads of the same type of flash memory used in PC solid-state drives.


Is GPI the Programming Tool for the Future of HPC?

As the programming model du jour for HPC compute clusters, MPI has well-known limitations in scalability and fault tolerance. With those requirements in mind, researchers at the Fraunhofer Institute have developed a new programming interface called GPI that exploits the parallel architecture of high-performance computers with maximum efficiency. “I was trying to solve a calculation and simulation problem related to seismic data,” says Dr. Carsten Lojewski from the Fraunhofer Institute for Industrial Mathematics ITWM.
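The key difference from MPI's two-sided send/receive model is that GPI uses one-sided communication over a partitioned global address space: a rank writes directly into a partner's partition without the partner posting a matching receive. The sketch below illustrates only that communication style, using Python threads as a local stand-in for ranks and a shared list as the "global" segment; the names `put` and `segment` are ours, not the real GPI API.

```python
import threading

NRANKS, SLOTS = 4, 1
# The "global" segment: each rank owns SLOTS entries of this shared array.
segment = [0.0] * (NRANKS * SLOTS)

def put(target_rank, offset, value):
    # One-sided write into target_rank's partition of the global segment;
    # the target does not participate in this transfer at all.
    segment[target_rank * SLOTS + offset] = value

def worker(rank):
    # Each rank deposits its id into its right neighbor's partition.
    put((rank + 1) % NRANKS, 0, float(rank))

threads = [threading.Thread(target=worker, args=(r,)) for r in range(NRANKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(segment)  # [3.0, 0.0, 1.0, 2.0]
```

In real GPI the write would be an RDMA transfer between nodes, overlapped with computation, which is what makes the model attractive for scalability.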


Video: WSJ Looks at Why China Wants to be Number 1 in Supercomputing

In this video, Bob Davis from the Wall Street Journal tells Mariko Sanchanta why China believes it’s so important to be number one in supercomputing. The story stems from recent reports that the Tianhe-2 supercomputer will soon be crowned as the fastest machine on Earth in the next TOP500...


Tutorial on Scaling to Petaflops with Intel Xeon Phi

Over at Dr. Dobb's, Rob Farber has posted a tutorial on using MPI to tie together thousands of Intel Xeon Phi coprocessors. Farber runs his MPI code example on the Stampede supercomputer at TACC, achieving a remarkable 2.2 Petaflops of performance when scaling to 3,000 nodes. The tutorial demonstrates how to utilize Intel Xeon Phi coprocessors to evaluate a single objective function across a computational cluster using...
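The pattern the tutorial describes (evaluating one objective function across a cluster) is essentially scatter, evaluate, reduce. A minimal sketch of that shape, with serial Python standing in for MPI ranks and a made-up sum-of-squared-errors objective; with mpi4py the same structure would use `comm.scatter` and `comm.reduce`:

```python
def objective_partial(points):
    # Stand-in objective: sum of squared error terms for this rank's chunk.
    return sum((x - 1.0) ** 2 for x in points)

def evaluate(points, nranks):
    # "Scatter": split the data into one chunk per rank (round-robin here).
    chunks = [points[r::nranks] for r in range(nranks)]
    # Each rank evaluates its own chunk; under MPI these run concurrently,
    # with each rank offloading its chunk to its Xeon Phi coprocessor.
    partials = [objective_partial(chunk) for chunk in chunks]
    # "Reduce": sum the partial results on the root rank.
    return sum(partials)

data = [0.0, 1.0, 2.0, 3.0]
print(evaluate(data, nranks=2))  # 6.0 -- same result for any rank count
```

Because the reduction is associative, the result is independent of how many ranks the work is split across, which is what lets the same code scale from one node to thousands.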
