
New 'Real-World' Benchmark Could Shake Up Top500 Supercomputer List

The 42nd edition of the TOP500 list of supercomputers has been released, featuring the most powerful Linux machines in the world. Leading the pack again is Tianhe-2, a supercomputer developed by China’s National University of Defense Technology, with a performance of 33.86 petaflop/s on the Linpack benchmark. Tianhe-2 uses the Kylin Linux operating system (OS).

The newest supercomputer to make the Top 10, Piz Daint, is also the most energy-efficient system in the Top 10, consuming a total of 2.33 MW and delivering 2.7 Gflops/W. The number two computer, Titan, is also one of the most energy-efficient systems, consuming a total of 8.21 MW and delivering 2.143 Gflops/W. (See the top 10 list, below.)
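The efficiency figures follow directly from the Linpack performance and power numbers quoted here; a quick sanity check, using only values from this article:

```python
# Energy efficiency (Gflops/W) = sustained Linpack performance / power draw.
# Performance and power figures are the November 2013 values quoted above.

def gflops_per_watt(pflops: float, megawatts: float) -> float:
    """Convert Pflop/s and MW into Gflops per watt."""
    gflops = pflops * 1e6    # 1 Pflop/s = 1e6 Gflop/s
    watts = megawatts * 1e6  # 1 MW = 1e6 W
    return gflops / watts

print(round(gflops_per_watt(6.27, 2.33), 2))   # Piz Daint: ~2.69
print(round(gflops_per_watt(17.59, 8.21), 3))  # Titan: ~2.143
```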


Supercomputing speed test change-up announced

The next round of supercomputer rankings may shake things up, however, as the Top500 editors have announced a new testing regime. On Nov. 18 the organizers released the High Performance Conjugate Gradient (HPCG) benchmark, which is designed to better predict a supercomputer’s real-world usefulness.

In a June paper on the HPCG Benchmark, Top500 list editors Michael Heroux and Jack Dongarra say that the High Performance Linpack (HPL) test is “increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications.” The problem, according to Heroux and Dongarra, is that designing for good HPL performance can “lead to design choices that are wrong for the real application mix, or add unnecessary components or complexity to the system.” High-performance computing applications governed by differential equations, which tend to need more bandwidth and lower latency and to access data in irregular patterns, are particularly poorly served by HPL-driven design choices, according to the authors.
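HPCG is built around precisely this kind of computation: a conjugate-gradient solve on a sparse matrix arising from a discretized differential equation, where performance is limited by memory bandwidth and irregular access rather than dense floating-point throughput. A minimal illustrative sketch of such a solve (not the actual HPCG reference code) looks like this:

```python
# Illustrative sketch only: the kind of sparse conjugate-gradient iteration
# HPCG stresses, not the HPCG reference implementation. Solves A x = b for a
# symmetric positive-definite A stored row by row as (column, value) pairs.

def spmv(rows, x):
    """Sparse matrix-vector product: bandwidth-bound, irregular access."""
    return [sum(v * x[j] for j, v in row) for row in rows]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def conjugate_gradient(rows, b, iters=50, tol=1e-12):
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual: r = b - A*0 = b
    p = r[:]
    rs = dot(r, r)
    for _ in range(iters):
        Ap = spmv(rows, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small 1-D Laplacian (tridiagonal), the classic discretized differential equation.
n = 5
rows = [[(i, 2.0)]
        + ([(i - 1, -1.0)] if i > 0 else [])
        + ([(i + 1, -1.0)] if i < n - 1 else [])
        for i in range(n)]
b = [1.0] * n
x = conjugate_gradient(rows, b)
```

Note that almost all of the time in such a solve goes into `spmv`, whose scattered loads through the index arrays are exactly the memory pattern that dense Linpack kernels never exercise.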

The new HPCG test won’t change the list rankings right away, though, as the test must first be run widely and accepted by the supercomputing community.

“Once the definition and code for the HPCG is in a stable condition we envision collecting results for it in parallel to the ongoing effort for the HPL benchmark,” said Erich Strohmaier, head of the Future Technologies Group at Lawrence Berkeley National Laboratory and a Top500 editor. “For the foreseeable future the TOP500 will be based on the HPL benchmark test but we would hope to provide additional value and information by collecting and publishing numbers for new benchmark such as HPCG as well.”

Experts: GPU accelerator speed feats don't reflect real-world applications

One aspect of supercomputer design that leads to higher throughput on the HPL test is the use of GPU accelerators, found in several of the top 10 supercomputers. These accelerators, such as the just-announced NVIDIA Tesla K40, boost the performance of the top systems, moving workloads around and helping crunch data at incredible speeds in the Linpack tests.

"GPU accelerators have gone mainstream in the HPC and supercomputing industries," said Sumit Gupta, general manager of Tesla Accelerated Computing products at Nvidia.

In their June paper on HPCG, Dongarra and Heroux point out that the way these accelerators are used in benchmark runs doesn’t always reflect real-world applications, which offload data to the accelerators more selectively and rely on CPU processing, with slower computation as the result.

"For example, the Titan system at Oak Ridge National Laboratory has 18,688 nodes, each with a 16-core, 32GB AMD Opteron processor and a 6GB Nvidia K20 GPU. Titan was the top-ranked system in November 2012 using HPL [Linpack]. However, in obtaining the HPL result on Titan, the Opteron processors played only a supporting role in the result. All floating-point computation and all data were resident on the GPUs. In contrast, real applications, when initially ported to Titan, will typically run solely on the CPUs and selectively offload computations to the GPU for acceleration."
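The offload pattern Dongarra and Heroux describe can be sketched abstractly: a real application keeps its data on the CPU and ships a kernel to the accelerator only when the work is large enough to amortize the host-device transfer cost, whereas an HPL run keeps everything resident on the GPU. A hypothetical sketch of that decision (the threshold value is invented for illustration):

```python
# Hypothetical sketch of selective offload. A real application keeps data on
# the CPU and offloads a kernel only when the problem is large enough that
# the computation amortizes the host<->device copy. The break-even threshold
# below is invented for illustration, not measured on any real system.

OFFLOAD_THRESHOLD = 1_000_000  # elements; hypothetical break-even point

def choose_device(problem_size: int, gpu_available: bool) -> str:
    if gpu_available and problem_size >= OFFLOAD_THRESHOLD:
        return "gpu"  # copy to device, compute there, copy result back
    return "cpu"      # small problem: transfer overhead would dominate

# HPL-style run: one huge dense solve, resident on the GPU throughout.
print(choose_device(10_000_000, gpu_available=True))  # -> gpu
# Typical first port of a real application: many small kernels stay on the CPU.
print(choose_device(1_000, gpu_available=True))       # -> cpu
```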

The complete Top500 supercomputer list for November 2013 is available on the Top500 site.

The November 2013 Top 10 supercomputers are:

1. Tianhe-2, developed by China’s National University of Defense Technology - 33.86 Pflop/s - Kylin Linux operating system (OS)

2. Titan, a Cray XK7 system installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory - 17.59 Pflop/s – Cray Linux Environment

3. Sequoia, an IBM BlueGene/Q system installed at DOE’s Lawrence Livermore National Laboratory - 17.17 Pflop/s - Linux

4. K computer, a Fujitsu system installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan - 10.51 Pflop/s - Linux

5. Mira, a BlueGene/Q system installed at DOE’s Argonne National Laboratory - 8.59 Pflop/s - Linux

6. Piz Daint, a Cray XC30 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland - 6.27 Pflop/s – Cray Linux Environment

7. Stampede, a Dell system at the Texas Advanced Computing Center of the University of Texas, Austin – 5.17 Pflop/s - Linux

8. JUQEEN, a BlueGene/Q system installed at the Forschungszentrum Juelich in Germany – 5.01 Pflop/s - Linux

9. Vulcan, an IBM BlueGene/Q system at Lawrence Livermore National Laboratory – 4.29 Pflop/s - Linux

10. SuperMUC, at Leibniz Rechenzentrum in Germany – 2.90 Pflop/s - Linux


