
Community Blogs

Building a Beowulf Cluster in just 13 steps

What are Clusters

A computer cluster is a group of linked computers working together so closely that in many respects they form a single computer. Clusters are generally connected by a fast Local Area Network. A parallel program running on one of the nodes uses the processing power of all the nodes and produces the result. Generally, clusters are tightly coupled, i.e. all the motherboards are stacked into a single cabinet and connected through an interconnection network, sharing RAM, hard disks and other peripherals. The operating system runs on one of the nodes and controls the activities of the other nodes. For more on clusters, refer to the Wikipedia page.

What is a Beowulf Cluster

Beowulf clusters are cheap clusters built from off-the-shelf components. You can create a Beowulf cluster with just a few old computers and an Ethernet segment in your backyard. Although they don't give you top-notch performance, their performance is many-fold better than that of a single computer. A variant of the Beowulf cluster runs an OS on every node and still allows parallel processing, and that is exactly what we're going to build here.

Kick Start Your Cluster
  1. At least two computers with a Linux distribution installed (I'll use Ubuntu 8.04 here). Make sure GCC is installed on each system.
  2. A network connection between them. If you have just two computers, you can connect them with an Ethernet cable. Make sure IP addresses are assigned to them; if you don't have a router to assign IPs, you can assign static IP addresses.
  3. Eagerness to learn (I'm sure you have it!!!)

The rest of this document assumes two computers with host names node0 and node1. Let node0 be the master node.

  1. The following steps are to be done on every node.
  2. Add the nodes to the /etc/hosts file. Open this file in your favourite text editor and add each node's IP address followed by its host name, one node per line. For example (the addresses are illustrative; use your own):

     node0
     node1

  3. Create a new user on every node. Let us call this new user mpiuser. You can create it through the GUI by going to System->Administration->Users and Groups and clicking "Add User". Create the user mpiuser, give it a password, and give it administrative privileges. Make sure you create the same user on all nodes. Using the same password on every node is not strictly necessary, but it is recommended because it eliminates the need to remember a separate password for each node.
  4. Now install an SSH server on every node. Execute the command sudo apt-get install openssh-server on every machine.
  5. Now logout from your session and log in as mpiuser.
  6. Open a terminal and type ssh-keygen -t dsa. This command generates a new SSH key pair. It will ask for a passphrase; leave it blank, as we want passwordless SSH (assuming you have a trusted LAN with no security issues).
  7. A hidden folder called .ssh will be created in your home directory. It contains your public key in the file Copy this key into a file called authorized_keys in the same directory. Execute the command in the terminal: cd /home/mpiuser/.ssh; cat >> authorized_keys
  8. Now download MPICH from the MPICH website. Please download an MPICH 1.x version. Do not download MPICH 2; I was unable to get MPICH 2 to work on the cluster.
  9. Untar the archive, navigate into the directory in the terminal, and execute the following commands:
    mkdir /home/mpiuser/mpich1
    ./configure --prefix=/home/mpiuser/mpich1
    make
    make install
  10. Open the file .bashrc in your home directory (create it if it does not exist) and add the following lines (the lib path assumes the install prefix used above):
    PATH=/home/mpiuser/mpich1/bin:$PATH
    LD_LIBRARY_PATH=/home/mpiuser/mpich1/lib:$LD_LIBRARY_PATH
    export PATH
    export LD_LIBRARY_PATH
  11. Now we'll make the MPICH path visible to SSH sessions. Run the following command: echo /home/mpiuser/mpich1/bin | sudo tee -a /etc/environment (note that a plain sudo echo ... >> /etc/environment will not work, because the redirection happens outside of sudo).
  12. Now log out and log back in as mpiuser.
  13. In the folder mpich1, under the sub-directory share/ or util/machines/, you will find a file called machines.LINUX. Open it and add the host names of all nodes except the home node, i.e. if you're editing the machines.LINUX file of node0, that file will contain the host names of all nodes except node0. By default, MPICH executes a copy of the program on the home node. The machines.LINUX file for node0 is as follows:

    node1 : 2

    The number after the colon indicates the number of CPU cores available on that node.
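If the cluster were larger, say four hypothetical dual-core machines node0 through node3 with node0 as master, node0's machines.LINUX would list the other three in the same format:

```
node1 : 2
node2 : 2
node3 : 2
```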
Cluster is Ready!!

Your cluster is ready!!! You can test it by compiling and running the code in the examples directory inside the mpich1 directory. Since the examples come with a Makefile, you can build them by simply typing make in the terminal after navigating to that directory.

To execute your code, make sure that the executable is at the same path on all nodes, i.e. if "foo" is your executable at /home/mpiuser/mpich1/examples/foo on node0, then that executable must be present at the same path on every other node.

To execute the code foo, navigate to the location of the executable and type the following command in the terminal: mpirun -np 2 foo. It's enough to run the command on any one node, as long as the executable is present at the same path on all nodes. Here mpirun is the command that runs our program on the nodes listed in the machines.LINUX file, and the -np 2 flag indicates the number of processes to spawn; here we spawn two. By default, one process is spawned on the home node, so with -np 2 the second process is spawned on the first node listed in machines.LINUX. If machines.LINUX lists 10 nodes and -np 2 is used, only the node in the first entry of the file participates.
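If you'd like a tiny program of your own rather than the bundled examples, the classic MPI "hello world" below shows what such a parallel program looks like. This is a sketch: compile it with the mpicc wrapper from mpich1/bin, and remember to copy the binary to the same path on every node before running it.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    MPI_Get_processor_name(name, &len);   /* host this process runs on */

    printf("Hello from process %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

Build and run it with: mpicc hello.c -o hello; mpirun -np 2 hello. Each process prints its rank and the host it landed on, which makes it easy to verify that work really is spreading across the nodes.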

Fallacies and Pitfalls
  • Usually RSH is used in place of SSH, but since SSH is so easy to configure, we stick with SSH. Moreover, SSH is more secure than RSH
  • Do not use MPICH2. I was unable to get MPICH2 to work properly
  • Make sure that your executable is present at the same location on all the nodes

The whole process can be simplified further by setting up a Network File System and mounting a shared directory on all the nodes; changes made to the directory on one node are then reflected on all the others. Instructions on how to get this working are available in the references below.
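As a sketch of that setup: if node0 exported /home/mpiuser over NFS, each compute node could mount it via an /etc/fstab entry like the following (this assumes an NFS server is already configured and exporting on node0; the mount options shown are illustrative):

```
# device               mount point       type  options   dump pass
node0:/home/mpiuser    /home/mpiuser     nfs   defaults  0    0
```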



This tutorial can be expanded to add more features such as NFS etc. Please refer to the following links for comprehensive tutorials.

- MPICH Ubuntu Cluster
- Using MPICH to build a small beowulf cluster
The whole document is a verbatim reproduction of my own earlier article at -

Let's see what's going on....

This is obviously my first article.

Restoring Ubuntu Settings from Old Installation

If you are a hard-core Ubuntu fan and keep customizing Ubuntu, you're sure to be a victim of losing those customizations after a fresh install. You can always re-customize, but it's boring and irritating. To solve this problem, you can use YourGnome, a piece of software that backs up your GNOME settings and restores them in ONE SINGLE CLICK. YourGnome is actually a shell script that does all the magic. It's a fantastic project and you can find its home page here -

Firefox and Pidgin are two other important applications whose settings one would want restored. This again is very simple.

Firefox Restore:

To restore Firefox settings, just copy the .mozilla folder from the home folder of your old installation to the home folder of your new installation, replacing the existing .mozilla folder. This will do all good and no harm :D

Pidgin Restore:

Restoring Pidgin's configuration is just as simple as Firefox's: copy the .purple folder from the old installation's home folder to the new installation's home folder.
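The copy itself can be scripted. The sketch below fabricates a stand-in "old home" under a temporary directory so it is safe to run anywhere; in real life OLD_HOME would be the mount point of your old disk (somewhere under /media, say) and NEW_HOME would be your actual home directory:

```shell
# Stand-in directories so this sketch is safe to run anywhere;
# in practice, point these at the real old and new home folders.
OLD_HOME="$(mktemp -d)"
NEW_HOME="$(mktemp -d)"

# Fake the hidden folders an old installation would contain.
mkdir -p "$OLD_HOME/.mozilla/firefox" "$OLD_HOME/.purple"

# -a copies recursively and preserves permissions and timestamps.
cp -a "$OLD_HOME/.mozilla" "$NEW_HOME/"
cp -a "$OLD_HOME/.purple" "$NEW_HOME/"
```

The -a flag matters: it keeps file permissions and hidden sub-folders intact, which a plain drag-and-drop copy in the file manager does not always guarantee.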


SSH to server without password using RSA key

I came across a requirement for automatically logging into a server without entering a password. This can be done using RSA keys.

Simple Way (Better to try this)

1. Run the following command on the client (from where you want to access the server):
#ssh-keygen -t rsa

2. id_rsa and will be created inside $HOME/.ssh

3. Copy to the server's .ssh directory (on the server, create the directory first if it does not exist):

#mkdir $HOME/.ssh
#scp $HOME/.ssh/ user@server:/home/user/.ssh/

4. On the server, change to /home/user/.ssh and append the content of to authorized_keys:
#cd /home/user/.ssh
#cat >> authorized_keys

5.You can try ssh to the server from the client and no password will be needed
#ssh user@server

6. Enable RSA/public-key authentication in /etc/ssh/sshd_config on the server:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

7. Restart the sshd service (service sshd restart)
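One pitfall worth adding: with its default StrictModes setting, sshd silently ignores authorized_keys if the .ssh directory or the file itself is group- or world-writable, so passwordless login keeps prompting for a password even though the key is in place. The sketch below shows the expected permissions; it uses a scratch directory so it is safe to run as-is, but on a real server SSH_DIR would be the target user's ~/.ssh:

```shell
# Scratch directory stands in for the real ~/.ssh on the server.
SSH_DIR="$(mktemp -d)/.ssh"

mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"                     # directory: owner-only access
touch "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"     # key file: owner read/write only
```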

A much more complex way

On the machine from which you want to connect:
#ssh-keygen -t rsa

Give a passphrase when prompted.

This will create two files in $HOME/.ssh/: the private key id_rsa and the public key

#scp /root/.ssh/ test@server:/home/test/.ssh/
#scp /root/.ssh/id_rsa test@server:/home/test/.ssh/
#exec ssh-agent bash
#ssh-add /root/.ssh/id_rsa

Remote Side

Create a user
#useradd test
#passwd test
#su - test

$mkdir /home/test/.ssh
$chmod 700 /home/test/.ssh
$cat /home/test/.ssh/ >> /home/test/.ssh/authorized_keys (if ssh2, then use authorized_keys2)

$exec ssh-agent bash
$ssh-add /home/test/.ssh/id_rsa


Configuring iSCSI initiator in Red Hat Enterprise Server 4

Installation on Linux
# rpm -ivh iscsi-initiator-utils-
IQN of the Linux server (/etc/initiatorname.iscsi file)
Each iSCSI device on the network, be it initiator or target, has a unique iSCSI node name. Red Hat uses the iSCSI Qualified Name (IQN) format with the initiator that ships with Red Hat Enterprise Linux. In the IQN format, a node name consists of a predefined section, chosen based on the initiator manufacturer, and a unique device name section which is editable by the administrator.
Provide this IQN to your IPSAN administrator; he will create a LUN and assign it to this IQN.
Configuration (/etc/iscsi.conf)

To globally configure a CHAP username and password for initiator authentication by the target(s), uncomment the OutgoingUsername and OutgoingPassword lines. OutgoingUsername is the username we create at the target to authenticate access to the LUN assigned to this initiator.

To globally configure a CHAP username and password for target authentication by the initiator, uncomment the corresponding Incoming lines.

Settings in the config file (/etc/iscsi.conf):

DiscoveryAddress=IP address or hostname of your IPSAN
OutgoingUsername=username created on the target server for accessing this LUN
OutgoingPassword=password created on the target server for accessing this LUN
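A filled-in example, with a hypothetical SAN address and made-up CHAP credentials:

```
# Discovery address of the IPSAN (hypothetical IP)

# CHAP credentials created on the target for this initiator (placeholders)
```

After editing the file, restart the iSCSI service so the initiator re-discovers the targets and logs in with the new credentials.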


Embedded Linux resource:

The new site looks great!

If you're an embedded Linux developer, the place to find all the resources you need to get started on a project, improve a current project, or research embedded technologies and trends is the eLinux wiki. As a wiki with over 1500 users, you're sure to find what you need. We're also on IRC on the freenode network, channel #elinux. Please be patient when posting questions to IRC.




Ubuntu 9.04 Bug with ATI Radeon - Solution

The newly released Ubuntu 9.04 is fantastic in its functionality, but there is a bug in the new X.Org stack on ATI Radeon graphics cards. Because of this bug, applications redraw very slowly; if you scroll in Firefox, you'll experience very slow, laggy scrolling. To work around the bug, add the following line to the Device section of your xorg.conf file:

  Option "MigrationHeuristic" "greedy"

This will rectify the problem and Jaunty will start working at its best.
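For context, the resulting Device section of /etc/X11/xorg.conf would look something like this (the Identifier and Driver values are illustrative; keep whatever your file already contains and just add the Option line):

```
Section "Device"
    Identifier  "Configured Video Device"
    Driver      "ati"
    Option      "MigrationHeuristic" "greedy"
EndSection
```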

PS: I thank the author of this page for providing the solution to the Xorg problem



Zimbra - Mail server for those tired of the MS treadmill

  1. Nice features (free version and paid version - the paid one basically adds support and ActiveSync for mobile)
  2. Non-Windows
  3. The AJAX web client is amazing
  4. Zimbra Desktop is very functional and has a cool factor (CFOs will love it because it is free)
  5. Runs on Red Hat, Ubuntu, SUSE and (I am forgetting one, be right back)
  6. more to come, just trying out the blog features of the new site

$:Greetings programs-

Greetings programs,

 I have to say that I am really enjoying the new site. This was a great idea, putting all of this together in one place. Hopefully something useful can come out of the exchange of ideas and viewpoints. As always, I hope that this newfound "Library of Alexandria" for all things Linux will be used for good. My suggestion to the community as a whole is to leave the baggage at the door. We are all open source users here, not one distro or another. Leave the holy wars somewhere else and make this hive of openness work. This is a great site and I hope that it really turns into something special.


Windows 7? Thanks, I'll keep my Ubuntu

After installation it took only 2 hours for me to delete it. Yes, I'm talking about the Windows 7 RC. Even my 3-year-old son asked why the computer was so loud.


Today I was wrestling with CAP; damn, this stuff really breaks my brain, it's just awful.
