Reality Check: Supercomputers Still Rule… and Linux Still Rules Them


Editor’s Note: This is part of a series by SUSE community marketing manager Brian Proffitt for Linux.com called “Reality Check” that looks at Linux in the real world.

Earlier this summer, The Linux Foundation released the report “20 Years of Top500.org,” which marked the 20th anniversary of the Top 500 supercomputer ranking and (quite naturally) highlighted Linux’s dominance on systems within the Top 500 over time.

From the very beginning of my time with Linux, the notion of Linux’s scalability up to high-performance architectures has stuck with me. I used to tell people, when I explained Linux, that it was the one operating system that could run a wristwatch or a supercomputer.

But with the advent of cloud computing and the virtual data center, are the days of the supercomputer coming to an end, leaving these machines as little more than trophies for universities and nations to show off whenever they have a top-ranked system running?

Supercomputers tend to do well in scenarios where a lot of data processing has to be done in a calculation-intensive, iterative way. Modeling weather data, chemical and biological interactions, geological data… these are all standard fare in the typical supercomputer’s diet.

But given the rise of clustered computers, is supercomputing even worth it anymore? After all, supercomputers are not easy to build and tend to need a lot of resources (like power) to operate. Couldn’t cloud computing or even a cluster of Hadoop systems do the same thing for a lot less hassle?

It depends on the problem. For the iterative calculations described above, the input of one step depends on the output of steps that have come before… and there are many, many steps to be taken.

In this case, it can make more sense for the data to be near the supercomputer’s processors, all in one place on one machine, rather than spread out among distributed systems. Moving the data back and forth to the processing machine(s) would take a long time and make the whole job very inefficient.
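To make that step-by-step dependency concrete, here is a minimal Python sketch of the kind of iterative calculation described above: a toy 1-D heat-diffusion model in which every time step reads the temperatures produced by the step before it. The grid size, step count, and diffusion constant are purely illustrative values, not figures from any real workload.

    # Toy 1-D heat-diffusion model: each time step depends entirely on
    # the previous one, so the full state must exist before the next
    # step can begin.
    def diffuse(temps, alpha=0.1):
        """Produce the next time step from the current one."""
        nxt = temps[:]
        for i in range(1, len(temps) - 1):
            nxt[i] = temps[i] + alpha * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
        return nxt

    # Start with a hot spot in the middle of a cold bar.
    state = [0.0] * 101
    state[50] = 100.0

    # Thousands of iterations, each one waiting on the one before it --
    # which is why keeping the data right next to the processors pays off.
    for step in range(10000):
        state = diffuse(state)

    print("peak temperature after 10,000 steps:", round(max(state), 2))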

With Hadoop, though, the issue of moving data around is basically solved… because the nature of a Hadoop cluster means that the data is distributed to reside on the very machines where the processing is going on.

But Hadoop has limitations, not the least of which is that it uses a batch-processing framework called MapReduce to search for and manipulate data. That tends to line jobs up in serial fashion, which is not good for iterative data processing. Plus, MapReduce is, for now, not exactly easy to code, so rigging up the right algorithms for processing data can be very challenging.
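To give a feel for what that looks like, here is a minimal word-count sketch in the Hadoop Streaming style, where the mapper and reducer are ordinary scripts that read stdin and write tab-separated key/value pairs to stdout (Python is used here purely for illustration; the classic MapReduce API is Java). Hadoop sorts the mapper output by key before it reaches the reducer, and even this trivial job shows how much plumbing a single pass over the data involves.

    # mapper.py -- emits "word<TAB>1" for every word in the input split.
    import sys

    for line in sys.stdin:
        for word in line.split():
            sys.stdout.write(word + "\t1\n")

    # reducer.py -- sums the counts for each word. Because Hadoop sorts
    # mapper output by key, identical words arrive on consecutive lines.
    import sys

    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print(current_word + "\t" + str(count))
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(current_word + "\t" + str(count))

An iterative job of the kind described above would need a chain of these map-and-reduce passes, each writing its output back to disk before the next can start… which is exactly where the serial, batch nature of MapReduce bites.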

This is why supercomputing is still useful. Because system designers use a modified Linux kernel as the core of the HPC system, building apps for the platform is a much easier proposition, as is data management.
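As a rough illustration of that point, here is the sort of small program developers write against the standard Linux-plus-MPI stack found on these systems. This sketch uses the mpi4py binding and assumes an MPI runtime is installed on the cluster; treat the package choice and the parallel sum itself as stand-ins for illustration, not as anything drawn from the report.

    # partial_sum.py -- each MPI rank sums its own slice of a range and
    # rank 0 combines the results. Run with, e.g.:
    #   mpirun -n 4 python partial_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Split the work: rank r handles the numbers r, r+size, r+2*size, ...
    n = 10000000
    partial = sum(range(rank, n, size))

    # Reduce the partial sums down to a single total on rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print("sum of 0..%d across %d ranks: %d" % (n - 1, size, total))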

It’s not a matter of waving a magic wand… building a supercomputer is not exactly like putting together a Lego set. But the scalability of Linux does make building one of these monsters a more straightforward, albeit expensive, process.

All the better to work on problems whose solutions can truly make the world a better place.

For more discussion on Linux and supercomputing, visit the supercomputing section of SUSE Conversations.