Xgrid: Grid computing for the rest of us?

Author: Chris Gulker

Apple’s Xgrid software, announced at Macworld earlier this month, turns a group of Macintosh computers into a supercomputer that’s “as easy as Macintosh” to manage, according to the computer maker. But even Apple’s famously easy-to-use software can’t get around one of the stickiest parts of parallel computing on commodity clusters.

Commodity clusters like Linux Beowulf clusters can achieve high performance by harnessing large numbers of inexpensive machines to compute large problems in parallel. Each machine runs its own system image and has its own dedicated RAM and (usually) hard disk. The machines are connected to a network; typically one machine is dedicated as a controller and gateway, while all the others operate as clients managed by the controller.

In practice, 100Base-T or Gigabit Ethernet is used to connect the machines, though faster interconnects, such as the InfiniBand fabric used by Virginia Tech's 1,100-Macintosh Terascale Cluster, are possible.

Clustered machines can communicate in a number of ways. Some tasks, such as digital image rendering and certain kinds of signal processing (like UC Berkeley's famous SETI@home project), require communication only between the controller computer and the clients; the clients never need to talk to each other. Because such problems chop neatly into independent pieces, they are referred to as "embarrassingly parallel."

Other problems require that clients talk to each other, perhaps to pass on the results of their computations for further processing by another node.
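
To make the distinction concrete, here is a minimal sketch of an embarrassingly parallel job in Python (my illustration; it has nothing to do with Xgrid itself). Each work unit is computed in isolation, so the only coordination is handing out inputs and collecting outputs:

    from multiprocessing import Pool

    def render_frame(frame_number):
        # Stand-in for one independent work unit, such as rendering
        # a single animation frame or analyzing one chunk of radio data.
        return sum(i * frame_number for i in range(100_000))

    if __name__ == "__main__":
        # Workers never exchange data with one another, only with
        # the coordinating process that farms out the frames.
        with Pool(processes=4) as pool:
            results = pool.map(render_frame, range(32))
        print(len(results), "work units completed")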

Other classes of problem are harder to chop up. In these "fine-grained" or "high-fidelity" simulations, each node's computation depends sensitively on the results generated by adjacent nodes. Each processor must wait on its neighbors for data, and there must be a system both to check that the data is valid and to move it to the appropriate node. Beowulf clusters use a standard called the Message Passing Interface (MPI) to accomplish this.
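
As an illustration of the idea (MPI itself is real; this particular sketch is mine, written with the Python mpi4py bindings rather than the C or Fortran typical of Beowulf codes), here is a minimal neighbor exchange. Each rank computes on its own slice of the domain and trades boundary values with the ranks on either side before taking the next step:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank owns one slice of the domain; before the next time
    # step it must trade boundary values with its neighbors.
    boundary = float(rank)

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # sendrecv pairs each send with a receive, so neighboring ranks
    # cannot deadlock waiting on one another.
    from_left = comm.sendrecv(boundary, dest=right, source=left)
    from_right = comm.sendrecv(boundary, dest=left, source=right)

    print(f"rank {rank} received {from_left} and {from_right}")

With mpi4py installed, the sketch runs under an MPI launcher, for example: mpiexec -n 4 python exchange.py.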

Fine-grained problems present a host of potential difficulties. One is that the memory on commodity nodes is usually relatively modest, so programmers have to figure out how to chop the data into pieces that fit in each node's RAM. Since supercomputers are, by definition, used to solve large problems, data sets are often very large, in the terabyte range or beyond; a one-terabyte data set, for instance, must be carved into more than a thousand pieces before it will fit on nodes with a gigabyte of memory apiece. If the problem allows cutting the data and the application into convenient pieces, it may run relatively efficiently. But if the natural pieces are much larger than an individual node's RAM, computational efficiency suffers. For this reason, commodity clusters run many problems at a small fraction of their theoretical peak performance.

Computational fluid dynamics, the simulation of fluid flows on high-performance computers, has applications in industries as diverse as oil exploration and automobile design. High-resolution CFD studies model very fine particles of the fluid in question, and in each time frame the position of every particle is updated based on the positions and interactions computed in the previous frame. In this situation, passing results from one node to another can quickly become a bigger job than the computation itself, and the cluster is gated more by network I/O than by CPU power.

Two ways around this problem are to use a faster network and to change architectures. Virginia Tech's Terascale Cluster uses InfiniBand to interconnect its 1,100 dual-processor Macintosh G5s. The much faster internode communication allows the cluster to top 10 teraflops, making it the third-fastest machine in the world.

NASA Ames Research Center's SGI Altix 3000 takes the other route, an architecture called shared memory, or supercluster (as opposed to the distributed memory of commodity clusters), built around a special high-speed memory interconnect and a 64-bit Linux kernel. The machine runs a single Linux system image: its 512 processors look to the end user like a single Linux box. Since 64-bit addressing allows every processor to address every location in memory directly, there is no message passing of the sort used by commodity clusters. Shared memory architectures tend to run fine-grained problems much faster than commodity clusters, thanks to the very low latency inherent in the design.
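
A loose single-machine analogy in Python (mine, not SGI's): threads share one address space, so each worker reads and writes its slice of a common array directly, and nothing resembling a message is ever sent:

    import threading

    # One array in a single address space, visible to every worker.
    data = [float(i) for i in range(16)]

    def smooth(start, end):
        # Each worker updates its slice of the shared array in place.
        # (A real code would double-buffer to avoid races; this sketch
        # only shows direct addressing with no message passing.)
        for i in range(max(start, 1), min(end, len(data) - 1)):
            data[i] = (data[i - 1] + data[i] + data[i + 1]) / 3.0

    threads = [threading.Thread(target=smooth, args=(i * 4, (i + 1) * 4))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(data)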

Xgrid 1.0, described as a “technical preview” by Apple, is available for download from Apple’s developer site, and consists of a System Preferences panel, a Screen Saver module, and two background processes. While Xgrid will run in a local demo mode, it normally requires at least two Macs running Mac OS X 10.2.8 or later.

There are three kinds of systems in an Xgrid cluster: a controller (called the GridServer), agent computers (called GridAgents), and clients. The controller works like the controller in a Beowulf cluster: it manages communications and resources for the grid. The GridServer accepts jobs from clients, breaks them up into tasks, and assigns the tasks to agents for processing. GridAgents advertise themselves as available to the GridServer if they have had no user input for 15 minutes or their screen saver module is running.

Installation of Xgrid is straightforward using a standard Mac OS X package installer. Xgrid can also be installed remotely over SSH and, presumably, with shell scripts, the labor-saving method most Linux cluster admins prefer. Here at gulker.com, it took only about 20 minutes to set up a three-Mac cluster. One nice feature is that Xgrid uses Rendezvous, Apple's implementation of Zeroconf, to discover other Xgrid-enabled Macs on the subnet.

The Xgrid GUI is, as one would expect, easy and straightforward to use, and includes a cool-looking tachometer in the screen saver module to measure the grid’s performance.

However, as experienced Mac cluster admins point out, Xgrid mainly helps with the "embarrassingly parallel" class of computing problem. Apple does offer documentation to help OS X developers make their apps more cluster-friendly, but the basic problems of parallelizing applications and data remain.

Harnessing Mac networks is not a particularly new idea. JPL computer scientists used that facility's large Macintosh network in the early '90s to process jobs overnight, and numerous academic facilities turned their Mac OS 9 (and later, Mac OS X) computer labs into AppleSeed clusters using a Mac version of MPI.

In the meantime, I’m looking for jobs for my home Xgrid cluster. My old home supercomputer, six mostly elderly Macs that I didn’t have the heart to recycle, briefly broke into the top 2% of SETI@home’s standings, largely based on the performance of its newest member, a G4. With Xgrid, we’re looking to make the world a better place, but only if it parallelizes well.

Chris Gulker, a Silicon Valley-based freelance technology writer, has authored more than 130 articles and columns since 1998. He shares an office with 7 computers that mostly work, an Australian Shepherd, and a