SGI announces low-cost Linux supercluster machine


Author: Chris Gulker

SGI today announced a new low-cost Itanium 2-based Linux server, the Altix 350. The $12,199 machine, the company says, will dramatically reduce the cost and complexity of shared-memory superclusters: supercomputers that run a single Linux system image spanning many processors.

An advantage of the Altix architecture is the single Linux system image — typically a stock 2.4.x kernel with open source high-performance computing patches applied. Applications can be installed and run as if on an ordinary Linux workstation.

SGI is targeting what it says is a $2.6 billion market for midrange servers for scientists, design engineers, researchers, and other technical computing users, a segment currently dominated by machines from Sun, IBM, and HP running proprietary versions of Unix.

The Altix 350 borrows its design from SGI's high-end Altix 3000 series, a high-performance computing architecture used for tasks like planetary weather modeling. SGI says it has shipped 10,000 processors to 150 Altix 3000 customers since the line's January 2003 introduction, including a 512-processor machine used by NASA Ames Research Center.

For many applications to run efficiently on commodity clusters, special work must be done to parallelize the applications and partition the data sets to fit the resources of individual computing nodes, each of which runs its own system image. Scientists with domain knowledge in a specialized field, such as physics or aeronautics, typically need help from computer scientists to break up their applications and data sets to run on commodity clusters.
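To make the distinction concrete, here is a minimal sketch, in plain C with MPI, of how even a trivial summation has to be decomposed by hand for a distributed-memory cluster. It is an illustration only, not SGI or customer code:

    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* The data set must be partitioned by hand: each node works
           only on its own slice of the global index range 0..N-1. */
        long chunk = N / size;
        long start = (long)rank * chunk;
        long end   = (rank == size - 1) ? N : start + chunk;

        double local_sum = 0.0;
        for (long i = start; i < end; i++)
            local_sum += (double)i;

        /* Partial results live in separate address spaces, so they
           must be combined explicitly over the cluster interconnect. */
        double total = 0.0;
        MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %.0f\n", total);

        MPI_Finalize();
        return 0;
    }

Every data structure in such a program has to be split across nodes explicitly, and every partial result has to be communicated back, which is exactly the re-engineering step that tends to require a computer scientist's help.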

The Altix avoids this problem by sharing all of a system's memory using a proprietary SGI interconnect technology called NUMAlink. Essentially, every processor sees the entire memory space and can access it directly over a very low-latency interconnect fabric that SGI says is three orders of magnitude faster than typical commodity cluster interconnects. For very fine-grained computing problems, like weather modeling, in which each processor must consider the results of its neighbors before proceeding, the communication between nodes can consume more resources than the computation itself.
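On a shared-memory, single-system-image machine, the same computation can be written as an ordinary threaded program, because the whole data set lives in one address space. The hedged sketch below uses standard OpenMP rather than anything SGI-specific:

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    int main(void)
    {
        /* The entire data set sits in one shared address space; every
           processor can reach all of it directly, so no partitioning or
           message passing is needed. */
        double *data = malloc(N * sizeof *data);
        if (data == NULL)
            return 1;
        for (long i = 0; i < N; i++)
            data[i] = (double)i;

        double total = 0.0;
        /* One directive is enough to spread the loop across processors. */
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < N; i++)
            total += data[i];

        printf("sum = %.0f\n", total);
        free(data);
        return 0;
    }

One directive replaces the partitioning and message passing, which is why SGI argues the architecture is friendlier to scientists who are not parallel-programming specialists.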

One Altix customer, Incyte Genomics, reduced the time required to search 20 very large databases from more than six weeks on a commodity cluster to one week on an Altix 3000 system, in part because it could load all 20 databases into shared memory. The Altix 350 will let users with more constrained budgets see similar speedups over the midrange computers they now use, according to SGI. The company says the Altix 350, which starts at $12,199 for a base two-processor system, offers more than double the price/performance of competing machines.

The Altix 350 uses a modular design similar to the Altix 3000's, letting users add processors, memory, and other resources as needs require and budgets allow, without completely reconfiguring the array. The Altix 350 scales to 16 processors and 192 GB of shared memory, while Altix 3000 machines scale to 128 processors and 8 TB of shared memory.

SGI Altix product line manager Andy Fenselau noted that Linux support was very good, thanks both to SGI's efforts and the "back end" of the Linux open source community.

“We repackaged and squeezed every penny out of an Altix 3000 to make the Altix 350, which comes ‘fully-loaded’ at a cost of $5,400 per processor,” Fenselau said. “We think these machines will be appealing to markets besides scientific and technical computing. These machines are capable of 7 gigabyte-per-second memory I/O, versus the 500 megabytes-per-second I/O that is typical of a commodity cluster machine. This is a killer database box.”

Fenselau also thinks the single Linux image is much friendlier for end users in typical environments, where resource requirements vary greatly from job to job and computer scientists may not be available to help re-engineer applications for commodity clusters. He noted that there are more than 100 open-source high-performance computing applications for Linux, and he expects scientists to use Altix 350 systems to screen data for interesting cases, which will then be run on larger Altix 3000 machines.

“There is clearly a need for systems in the middle range with better scaling characteristics than PC clusters yet costing much less per processor than supercomputing systems,” said Bob Ciotti, tera-scale applications lead at NASA Ames Research Center. “Such a system would be useful for exploring a large number of modeling scenarios at a lower level of fidelity, allowing one to determine specific points of interest. With the right price/performance combination, a system of that character would help us make more efficient use of all our computing resources.”

Chris Gulker, a Silicon Valley-based freelance technology writer, has authored more than 130 articles and columns since 1998. He shares an office with 7 computers that mostly work, an Australian Shepherd, and a small gray cat with an attitude.
