
Usage of the Linux Clusters at DESY Zeuthen

Introduction

There are 10 dedicated parallel clusters (blade centers) available for running parallel applications. You can also run parallel MPI jobs in the SGE farm; the documentation in Batch_System_Usage applies there.

For discussions and information regarding the usage of the PAX cluster a mailing list has been introduced: <zn-cluster AT desy DOT de>. To get subscribed to that list, send an email to <sympa AT desy DOT de> with the subject: subscribe zn-cluster

Hardware

The PAX cluster consists of an interactive and a batch part. The interactive part is a blade center with 16 blade servers configured as workgroup servers. You can interactively log into the machines pax80 to pax8f to build and test your programs. Please don't use these machines to run long production code; use the batch system instead.

The batch part consists of 8 blade centers with 16 nodes each, connected via a QDR InfiniBand network.

In addition, there is one separate blade center that is not connected to the others but can run self-contained parallel jobs.

Building Applications

On SL5, you can use 'ini' to add an MPI runtime to the path. On SL6, use the 'module' command.

Openmpi

Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed.

SL5

SL5 ships with openmpi 1.4 in 32 bit and 64 bit versions. For 64 bit applications, use the installation in /usr/lib64/openmpi/1.4-gcc/bin; for 32 bit, use the binaries from /usr/lib/openmpi/1.4-gcc/bin.

Additional openmpi versions are installed to support the Intel and PGI compilers:

/usr/lib64/openmpi/1.4-icc/bin
/usr/lib64/openmpi-1.3.2-pgi/bin

If you don't want to specify the full path to your preferred MPI implementation, configure a default by using the ini command or running mpi-selector-menu on a build machine.
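For example, on an SL5 build machine the non-interactive mpi-selector tool can be used like this (the installation name below is only an example; 'mpi-selector --list' shows the names actually registered):

# list the registered MPI installations
mpi-selector --list
# set a personal default; it takes effect in newly started login shells
mpi-selector --set openmpi-1.4-gcc-x86_64 --user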

SL6

SL6 ships with openmpi 1.5.4. The paths to the runtime have changed from SL5. Also, you must rebuild your application for the new ABI. The paths to the openmpi versions are:

/usr/lib/openmpi/bin
/usr/lib64/openmpi/bin
/usr/lib64/openmpi-intel/bin

Instead of 'ini', please use the 'module' command to add an MPI compiler to your path, e.g. module add openmpi-x86_64-intel.
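A typical SL6 session could look like this:

# list the available modules
module avail
# add the Intel compiler build of openmpi to the current shell
module add openmpi-x86_64-intel
# check that the wrapper compiler is now found in the path
which mpicc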

Building applications

64 bit MPI applications for SL5 can be compiled on any 64 bit SL5 machine, e.g. sl5-64.ifh.de. For the SL6 machines, build on any SL6.3 workgroup server, e.g. the pax8 machines.
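A minimal 64 bit build on SL5 could then look like this (program.c is just a placeholder for your source file):

# compile and link with the Open MPI wrapper compiler
/usr/lib64/openmpi/1.4-gcc/bin/mpicc -O2 -o program program.c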

Running your application interactively

To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:

pax8c slots=8
pax8d slots=8
pax8e slots=8
pax8f slots=8

The command line would look like this:

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np 32 -machinefile ./machinefile  ./program

More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/

Mvapich2

The mvapich2 MPI implementation is additionally installed on all pax machines, in a GCC and an Intel compiler version.

SL5

The paths to the binaries on SL5 are:

/usr/lib64/mvapich2/1.7-gcc/bin
/usr/lib64/mvapich2/1.7-intel/bin

SL6

The paths have changed to

/usr/lib64/mvapich2/bin
/usr/lib64/mvapich2-intel/bin

Building and running programs interactively

To use mvapich2, add one of these versions to your path and compile your application with that MPI compiler. To run it outside the batch system, follow these instructions: http://mvapich.cse.ohio-state.edu/overview/mvapich2/
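A minimal sketch for SL5, using the GCC build:

# put the mvapich2 wrapper compilers first in the path
export PATH=/usr/lib64/mvapich2/1.7-gcc/bin:$PATH
# compile and link the application
mpicc -O2 -o program program.c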

Applications built with mvapich2 will only run on machines with Infiniband hardware, so they will work on the pax machines but not on desktops, workgroup servers or the farm.

The machine file format is different from the one for openmpi: you must list the host name once for every core you want to use. For example, to run four processes, two on each of pax88 and pax89:

pax88
pax89
pax88
pax89

The preferred way to run an application built with mvapich2 is mpiexec.
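A possible interactive invocation, assuming the Hydra process manager shipped with mvapich2 1.7 and the machine file shown above:

/usr/lib64/mvapich2/1.7-gcc/bin/mpiexec -f ./machinefile -n 4 ./program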

Batch System Access

/!\ ATTENTION: The PAX cluster was split off from the normal Zeuthen batch farm. To access the PAX batch system, you will need to call ini pax.

Alternatively, source a script:

Please make sure that your Gridengine certificates are in place:

[oreade38] ~ % ls -l $HOME/.sge/port537
lrwxr-xr-x. 1 ahaupt sysprog 11 Aug 20 09:52 /afs/ -> sge_qmaster
[oreade38] ~ % ls -l $HOME/.sge/cert.pem
-rw-------. 1 ahaupt sysprog 1464 Aug 20 09:52 /afs/
[oreade38] ~ % ls -l $HOME/.sge/key.pem
-rw-------. 1 ahaupt sysprog 887 Aug 20 09:52 /afs/

A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. For 8 MPI processes on a single node (8 slots), the parameter looks like this:

#$ -pe pax 8

Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit on SL5, use

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp

The MPI runtime will automatically select the right network type.
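Putting it together, a minimal job script for the 8-slot case might look like the sketch below; only -pe pax and the mpirun call are taken from this page, the shebang and the extra flags (-cwd, -j y) are assumptions you should adapt:

#!/bin/zsh
#
#$ -cwd
#$ -j y
#$ -pe pax 8
#
# start as many MPI processes as slots were granted by the batch system
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp

Submit the script with qsub; Gridengine sets $NSLOTS to the number of granted slots.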

You can request up to 1024 slots, as a blade center contains 128 CPU cores and the batch system contains 8 blade centers:

#$ -pe pax 128

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp

Finally, the following sections describe common pitfalls when using the pax batch system:

Mvapich2

With mvapich2 1.7, there is working integration into the SGE batch system. Just use a command like this (on SL5):

#$ -pe pax 128

/usr/lib64/mvapich2/1.7-intel/bin/mpiexec -n $NSLOTS yourapp

SL6 changes

Part of the pax cluster has been migrated to SL6. To run a job on these machines, you must specify -l os=sl6 for now. As the versions and paths of the MPI implementations have changed, programs are not compatible between SL5 and SL6; you must rebuild your application on SL6.
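For example, a job script fragment for the SL6 nodes could look like this (only -pe pax, -l os=sl6 and the SL6 mpirun path are taken from this page, the rest is a sketch):

#$ -pe pax 8
#$ -l os=sl6
#
# use the SL6 location of the Open MPI runtime
/usr/lib64/openmpi/bin/mpirun -np $NSLOTS yourapp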

The 'ini' command is no longer used for selecting MPI versions; it has been replaced by the very similar 'module' command. 'module avail' lists the installed modules. To load Open MPI for the Intel compiler, use 'module add openmpi-x86_64-intel'.

AFS Access

The application binary must be available to all nodes; that is why it should be placed in an AFS directory.
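For example (the AFS target directory below is only a placeholder for your group's area):

# copy the freshly built binary to AFS so that every batch node can access it
cp ./yourapp /afs/ifh.de/group/yourgroup/bin/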

BLAS library

Both ATLAS and GotoBLAS are available.
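As an illustration only (library names and paths are assumptions and depend on the local installation), linking against ATLAS might look like this:

# link the application against the ATLAS BLAS libraries
gcc -o yourapp yourapp.o -L/usr/lib64/atlas -lcblas -latlas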

Monitoring

Ganglia provides a web monitoring interface. These pages are only available from the internal network.

interactive machines
parallel batch machines

Further documentation

Paralleles Rechnen in Zeuthen - die neuen Cluster (Parallel computing in Zeuthen - the new clusters), 04/27/10, technical seminar

HPC-Clusters at DESY Zeuthen, 11/22/06, technical seminar