Is a Linux supercomputer in your future?

– by Chris Gulker
Linux admins, start checking out the HOWTOs and FAQs on Linux clusters. While they might not be as common as file and print servers, Linux-based supercomputers are increasingly showing up at medium- and large-sized businesses to simulate everything from product designs to the company’s own business processes.

Large corporations have been using supercomputers for more than a decade. Global oil companies, for instance, initially used the machines to model the geology of potential drilling sites. When advanced mathematical modeling applications became available, some companies began using the machines to run “what-if” scenarios on their own business processes. Only supercomputers could handle the dizzying number of variables that describe a large business.

Those programs cost millions of dollars a year to run, not to mention the tens of millions of bucks the hardware cost. A decade later, however, Linux supercomputers that can be had for tens or hundreds of thousands of dollars are already at work doing engineering models, visualizing crash tests, and performing lots of other jobs that save corporations big money.

One area where the convergence of cheap computer power and developments in mathematical modeling applications is starting to bear fruit is business process modeling, where a company builds a complex model of its own processes. Then, if it wants to see what happens if it, say, raises prices, it can get an idea of the possibilities without taking the risk of trying it out on real live customers.

Once the province of mega-billion-dollar multi-nationals, business sims are showing up at very grounded enterprises like breweries, delivery companies, and retailers, according to reports. Why are these nuts-and-bolts businesses plunking down serious cash for a Linux cluster in these tough economic times?

Linux clusters are “a kind of parallel batch processor,” says Jan Silverman, vice president of strategic initiatives at SGI. “Typically each instance of the problem runs on one node of the cluster, so you set the product price at $14 on one node, and $15 on another, and so on. So, depending on the number of nodes, you can run 100 or 1,000 sims simultaneously and look for the interesting scenarios.” Each simulation can take from minutes to days to run, so the clusters are much faster than single machines for trying out lots of different what-if scenarios.
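To make that pattern concrete, here is a minimal sketch in Python, with worker processes standing in for cluster nodes. The pricing model and its numbers are invented purely for illustration; a real business sim would be far more elaborate and would be dispatched by the cluster’s batch scheduler rather than a single script.

```python
# Sketch of the "one scenario per node" pattern Silverman describes.
# Each multiprocessing worker stands in for one cluster node, and
# run_simulation is a hypothetical stand-in for a real pricing model.
from multiprocessing import Pool

def run_simulation(price):
    """One what-if scenario: model revenue at a given product price."""
    demand = 10_000 - 400 * price        # toy linear demand curve
    revenue = price * max(demand, 0)
    return price, revenue

if __name__ == "__main__":
    prices = [p / 2 for p in range(20, 41)]  # $10.00 .. $20.00 in $0.50 steps
    with Pool() as pool:                     # each worker ~ one cluster node
        results = pool.map(run_simulation, prices)
    # Scan the finished runs for the interesting scenarios.
    best_price, best_revenue = max(results, key=lambda r: r[1])
    print(f"Best price ${best_price:.2f} -> revenue ${best_revenue:,.0f}")
```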

David Alexander, who heads the High Performance Computing Center at Wichita State University, is in a particularly good place to witness how processes that were developed by scientists are now being used by business. Alexander says that physicists and chemists have been using the cluster strategy to look at large classes of objects for interesting behavior. “A quantum chemist might run a sim on 1,000 molecules, looking for the few most interesting ones, on our Linux cluster. He would then move the most interesting candidates over to a conventional supercomputer to run sims at higher resolution,” thus maximizing the efficient use of computing resources.
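That screen-then-refine workflow is easy to picture in code. The Python sketch below is purely illustrative; score_molecule and refine are hypothetical stand-ins for real chemistry applications, and the scores are random placeholders.

```python
# Sketch of the workflow Alexander describes: run cheap, low-resolution
# sims across many candidates on the Linux cluster, then promote only the
# most promising few to expensive high-resolution runs elsewhere.
import random

def score_molecule(molecule_id):
    """Cheap low-resolution sim: returns a rough 'interestingness' score."""
    return molecule_id, random.random()

def refine(molecule_id):
    """Expensive high-resolution sim, reserved for the best candidates."""
    print(f"Running high-resolution sim for molecule {molecule_id}")

coarse = [score_molecule(m) for m in range(1000)]  # 1,000 cheap runs
top = sorted(coarse, key=lambda s: s[1], reverse=True)[:10]
for molecule_id, _ in top:                         # only ~1% move on
    refine(molecule_id)
```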

Early Linux clusters, while cheap, carried enormous administration burdens. They usually came with a cart holding a monitor and keyboard, which admins would wheel from node to node to perform upgrades and maintenance; on a 4,096-node machine, that could be quite time-consuming.

Modern clusters allow everything, including OS and application upgrades, to be scripted, so that the administration burden is within the budget reach of mid-size universities and corporations. Intelligent queuing software is also starting to have an impact, because it allows jobs to be scheduled and dispatched to the right hardware at the right time without intervention by administrators. It also means that researchers and business people don’t have to be computer scientists to figure out how to make their applications run efficiently.
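As a rough illustration of what “scripted” means here, a sketch along these lines could walk a list of nodes over SSH. The node names and the upgrade command are assumptions made for the example, not any particular site’s tooling, and the sketch assumes passwordless SSH is already set up.

```python
# Minimal sketch of scripted cluster administration: push the same
# upgrade command to every node instead of wheeling a cart around.
import subprocess

NODES = [f"node{n:03d}" for n in range(1, 129)]  # hypothetical node names

for node in NODES:
    result = subprocess.run(
        ["ssh", node, "sudo apt-get -y upgrade"],  # placeholder command
        capture_output=True, text=True,
    )
    status = "ok" if result.returncode == 0 else "FAILED"
    print(f"{node}: {status}")
```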

Wichita’s Alexander says that this trend “is one of the great advances in cluster computing in the last few years. Improved ease-of-use applies both to the system administrators who have to keep the machine and software up and running, and to the end users who just want to get their work done.”

Once the realm of only the wealthiest organizations, supercomputing is becoming much less expensive, thanks to Linux. And while it may not yet be quite within the reach of the casual office worker, the power of Linux clusters is becoming steadily more accessible to power users. So, Linux admins, it may be time to start brushing up on your ‘cluster literacy’…

Chris Gulker, a Silicon Valley-based freelance technology writer, has authored more than 130 articles and columns since 1998. He shares an office with 7 computers that mostly work, an Australian Shepherd, and a small gray cat with an attitude.

