Linux is playing a larger role in high-performance computing (HPC), and consistently claiming a larger number of slots on the Top500 Supercomputer List, according to list editor Erich Strohmaier. "I would say it's the prevalent operating system on the list," he said in an interview. "There's no indication there will be a move away from Linux any time soon. I think Linux is here to stay."
Strohmaier confirmed that the list's compilers have begun tracking operating systems among the top supercomputers; four of the top five systems rely primarily on Linux clusters. The researcher said list editors were still evaluating the quality of that data and had not yet decided whether to include it in the Top500 site's public database, which indicates that nearly 300 of the listed systems -- measured with the Linpack performance benchmark -- are clusters.
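For context, the Linpack benchmark behind these rankings times the solution of a dense system of linear equations and reports sustained floating-point throughput. A back-of-the-envelope sketch of that calculation follows; the problem size and timing used here are illustrative assumptions, not actual Top500 figures:

```python
def linpack_flops(n):
    # Standard operation count for solving a dense n-by-n linear system
    # via LU factorization: roughly (2/3)*n^3 + 2*n^2 floating-point ops.
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

def linpack_gflops(n, seconds):
    # Reported performance is simply operations divided by wall-clock time,
    # expressed here in gigaflops (billions of operations per second).
    return linpack_flops(n) / seconds / 1e9

# Hypothetical run: a problem size of 100,000 solved in 1,000 seconds
# works out to roughly 667 gigaflops of sustained performance.
print(round(linpack_gflops(100000, 1000)))
```

The ranking metric, in other words, rewards sustained arithmetic throughput on one specific dense-linear-algebra workload, which is why problem size matters as much as raw hardware.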
Nevertheless, the continued increase in Linux representation on the list is an obvious trend, and one that Strohmaier expects to continue, though he noted that specialized and niche applications will mean there will always be room for other operating systems as well. "Either that, or people will take the Linux kernel and strip it down," he said.
Although that is similar to what IBM has done with Linux in its top-ranked Blue Gene/L, Strohmaier indicated the standardization and common programming efforts were under way before the open source operating system was applied in HPC. "The push towards a standardized programming paradigm happened before the Linux revolution," he said.
In terms of wider availability of supercomputing with Linux, Strohmaier said it was a simple matter of cost effectiveness. He said it is easier for corporate IT clients to use Linux as an HPC platform when the open source OS matches what company servers may already be running. "That gives you more guaranteed programming interoperability across platforms," he said. "Companies are hesitant to port software to a new architecture. It's good if it's identical [to existing technology]. It makes it easier to move to HPC for those customers."
Lightweight Linux for heavy-duty computing
Tilak Agerwala, vice president of systems at IBM Research, said that Linux plays an important role in systems such as Blue Gene and BGW, a supercomputing sister to Blue Gene located at the IBM Thomas J. Watson Research Center in New York, which won the number two spot on the latest list.
"I think Linux is starting to play a bigger role in high performance computing," he said. "There is an ecosystem building up around Linux and clusters and the programing environment of Linux and we expect more and more around that. The trend is toward a greater usage of Linux in high performance computing."
In systems such as Blue Gene, with its more than 90,000 PowerPC processors -- a work in progress on its way to 360 peak teraflops (trillion floating-point operations per second) when complete this year -- Linux is widely used and whittled down to help assemble what is currently the fastest computer known to man.
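That 360-teraflop target can be sanity-checked with simple arithmetic. The per-processor peak and the final processor count below are assumptions for illustration, not figures from the article:

```python
# Back-of-the-envelope check on Blue Gene/L's stated target.
processors = 131072        # assumed final processor count (article says 90,000-plus)
peak_per_cpu = 2.8e9       # assumed peak flops per PowerPC processor

# Aggregate peak = processors * per-processor peak, in teraflops.
system_peak_tflops = processors * peak_per_cpu / 1e12
print(round(system_peak_tflops))  # lands near the article's ~360 peak teraflops
```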
Agerwala said Linux is used in the Blue Gene compute nodes he described as the "workhorse" of the supersystem. "You could think of it as a version of Linux with only the features required for computing," he said. "That's a small, highly efficient, unique kernel developed by us to run on the compute node."
Agerwala said much of the rest of the functionality of the clustered, parallel processing systems is also Linux-based, with the open source operating system behind many other nodes around the main system engine. Agerwala explained that while systems such as Blue Gene and its new sister BGW represent unique architecture, IBM went to great lengths to foster the supercomputing programming benefits of Linux, which are not unique, but are more widely available and used.
"Blue Gene is a unique design and architecture," he said. "We took pains not to do something unique from a programming standpoint. It looks like a Linux cluster environment. Around this notion of clustering, there's now a programming model people expect and people experience."
Agerwala, whose company is now promoting Blue Gene computing as a solution for the broader enterprise market, said the programming environment of Linux superclusters also adds value to the computing power itself, which can now be scaled up over time, as with Blue Gene and BGW.
"Once you have an ecosystem, you don't have to build every single tool on your own," he said. "It's just a move toward Linux and that open environment, which is also happening to some extent in the commercial world, but is happening at a more rapid pace in high performance computing."
Open source in the industry gut
SGI senior director of product marketing and management Jeff Greenwald, whose company has a solid 24 systems representing nearly 5 percent of the Top500 list, said Linux is prominent in HPC because of open standards.
"Everyone is collaborating, testing code and interfaces, and being able to run systems across multiple platforms," Greenwald said, crediting Linux acceleration with "human capital available."
While IBM and HP have indicated plans to take their HPC products to the broader business market, Greenwald said SGI, which uses Red Hat and Novell/SUSE Linux in its supercomputing systems, is concentrating on "deeper, more focused" research. "We think we're positioned right in the gut of where the industry is going."
Making super the standard
Dana Gardner, principal analyst at Interarbor Solutions, said there has been a trend toward more adoption of HPC, which is differentiated more and more by different clustering and interconnect technologies, rather than operating system. "The value in HPC or supercomputing is elevated to how resources are managed, whether it's down to the blades or the processors," he said in an interview.
Gardner said that by using low-cost hardware and a low-cost platform such as Linux, companies are putting their intellectual property and value in the layers above that, between nodes and systems for HPC. He said that as more adoption drives down cost and more applications come, the efficiency and use of HPC systems will also increase.
"I still think there are a limited number of applications where the attributes of HPC are necessary," Gardner said. "But as cost and availability come down to a level on par with other enterprise or mission-critical applications, it could very well be attractive to more users. Clearly, the economics are shifting to where it should make it have wider applicability."