Just like the original guide, I decided to preserve this one here as well.
May 24, 2011
This guide is an evolution of the original guide. Unless the Kerrighed Team comes up with a substantially different version, this is the only update I will ever make, as the steps are essentially the same for all svn versions I have tested.
In this version:
- Added changes for the latest Kerrighed svn 5586
- Fixed some steps to make them more readable and error-free.
- Added a simple MPI example to see how your program interacts with the cluster.
- Added a troubleshooting section for situations in which the nodes do not receive the image from the controller.
Thank you all for your previous comments and emails.
 ----------
| internet |
 ----------
     |
 ----------
| router1  |
 ----------
     v
+------------------------------------------------------+
| eth1 -- controller: 192.168.1.106 (given by router1)  |
| eth0 -- controller: 10.11.12.1    (manually set)      |
+------------------------------------------------------+
     |
 ----------
| router2  |
 ----------
     |
     +-->eth0--node1: 10.11.12.101 (static IP Address)
     |
     v
      eth0--node2: 10.11.12.102 (static IP Address)
Debian Lenny with default kernel 2.6.26-2-686
All steps done as root on the controller
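For reference, the controller's manually set eth0 address from the diagram above would typically be configured with a stanza like this in /etc/network/interfaces (a sketch; the netmask is an assumption, adjust to your own network):

# cluster-facing interface on the controller
auto eth0
iface eth0 inet static
    address 10.11.12.1
    netmask 255.255.255.0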
- dhcp server will provide IP addresses to the nodes.
- tftpd-hpa will deliver the image to the nodes.
- portmap converts RPC (Remote Procedure Call) program numbers into port numbers; NFS uses it to make RPC calls.
- syslinux is a boot loader for Linux which simplifies first-time installs.
- nfs will be used to export directory structures to the nodes.
When installing these packages, accept the default settings presented for dhcp3 and TFTP.
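For reference, on Debian Lenny all of the above can be pulled in with a single command (these are the Lenny package names and may differ on other releases):

apt-get install dhcp3-server tftpd-hpa portmap syslinux nfs-kernel-server nfs-common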
These packages are for MPI (see under TESTING below). You can install them on the controller to compile your MPI programs, then move the binaries to any of the nodes and start the program from the node; or you can create, compile, and execute your MPI programs on any of the nodes. Either way, you need these packages on the nodes to execute your MPI code, no matter which option you choose.
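As an illustration of the first workflow (compile on the controller, copy the binary to a node, run it from there), the commands look roughly like this; the file name hello_mpi.c is just an example, the node address and the clusteruser account come from this guide's setup, and the exact mpicc/mpirun options depend on the MPI implementation you installed:

mpicc hello_mpi.c -o hello_mpi           # on the controller
scp hello_mpi clusteruser@10.11.12.101:  # copy the binary to node1
ssh clusteruser@10.11.12.101
mpirun -np 2 ./hello_mpi                 # started from the node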
ssh-add /home/clusteruser/.ssh/id_dsa (type in password associated with keys)
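If the DSA key pair and the agent are not already in place from the earlier steps, the usual sequence looks roughly like this (a sketch run as the clusteruser account, not necessarily the exact commands used earlier in the guide):

ssh-keygen -t dsa                                # creates ~/.ssh/id_dsa and id_dsa.pub
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys  # allow the key on the shared home
eval `ssh-agent`                                 # start an agent in the current shell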
Step 35 TESTING:
A simple ‘hello world’ program that calls the MPI library.
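A minimal version of such a program looks like this (the file name hello_mpi.c is just my choice; any standard MPI implementation should compile it):

/* hello_mpi.c - minimal MPI "hello world" */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes */
    MPI_Get_processor_name(name, &name_len);  /* node the process runs on */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}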
I will create a config file where MPI can look up information for running jobs on the cluster.
I am creating this config file in the home directory of the cluster user “clusteruser”, which is the same account we created earlier. It will be readable by the nodes, so you can create the file as your own user from the controller. You can also log on to any of the nodes from which you will be triggering your programs and create the file there using the “clusteruser” account.
In this situation, I opted for Door A: creating the file from the controller.
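Whichever way you create it, the file itself is just a list of the machines MPI may run on, one node per line. The exact filename and any per-host options depend on the MPI implementation, but with the addresses from the diagram above the contents would look roughly like this:

10.11.12.101
10.11.12.102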
At the controller, as a regular user (your regular system username):