Usage of the Linux Clusters at DESY Zeuthen

There are 4 dedicated parallel clusters (blade centers) in testing mode, but you can also run parallel MPI jobs in the SGE farm. The documentation in Batch_System_Usage applies there.

Building Applications

Openmpi

Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed. Recently the machines were upgraded to the default SL5.4 packages of openmpi. For 64 bit applications, use the installation in /usr/lib64/openmpi/1.3.2-gcc/bin; for 32 bit, use the binaries from /usr/lib/openmpi/1.3.2-gcc/bin.
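
For example, to put the 64 bit gcc build on your PATH for the current shell (a minimal sketch; use the 32 bit or compiler-specific directories accordingly):

# prepend the 64 bit gcc build of openmpi to the search path (bash)
export PATH=/usr/lib64/openmpi/1.3.2-gcc/bin:$PATH
which mpicc mpirun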

Additional openmpi versions are installed to support the Intel and PGI compilers:

/usr/lib64/openmpi-1.3.2-intel/bin
/usr/lib64/openmpi-1.3.2-pgi/bin

If you don't want to specify the full path to your preferred MPI implementation, configure a default by using the ini command or running mpi-selector-menu on a build machine.
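
A non-interactive alternative is the mpi-selector command that comes with mpi-selector-menu. The name passed to --set below is only an example; use one of the names printed by --list on the build machine:

mpi-selector --list                   # show the MPI installations registered on this host
mpi-selector --set openmpi-1.3.2-gcc  # example name; pick one from the --list output
mpi-selector --query                  # verify the default used by new login shells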

Building applications

64 bit MPI Applications can be compiled on any 64 bit SL5 machine, e.g. sl5-64.ifh.de.
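
For example, a 64 bit build with the gcc flavour of openmpi could look like this (a sketch; program.c stands for your own source file, and the mpic++ and mpif90 wrappers live in the same directory):

/usr/lib64/openmpi/1.3.2-gcc/bin/mpicc -O2 -o program program.c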

Running your application

To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:

pax0c slots=8
pax0d slots=8
pax0e slots=8
pax0f slots=8

The command line would look like this:

/usr/lib64/openmpi/1.3.2-gcc/bin/mpirun -np 32 -machinefile ./machinefile --mca btl "^udapl" ./program

More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/

Mvapich / Mvapich2

Three additional mpi implementations are installed on all pax machines:

/usr/lib64/mvapich/1.1.0-gcc/bin
/usr/lib64/mvapich2/1.2-gcc/bin
/usr/lib64/mvapich2/1.2-intel/bin

To use mvapich or mvapich2, add one of these versions to your PATH, compile your application with that MPI compiler, and run it as described here: http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.4.html#x1-160005.2

The machine file format is different from the one for openmpi: you must list the host name once for every core you want to use, e.g. to run four processes, two on each of pax18 and pax19:

pax18
pax19
pax18
pax19
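
An interactive run outside the batch system could then look like the sketch below, assuming the mpirun_rsh launcher is included in the local mvapich2 build (./machinefile is the four-line host list from above, program.c your own source):

# compile with the mvapich2 wrapper and start four processes via mpirun_rsh
/usr/lib64/mvapich2/1.2-gcc/bin/mpicc -O2 -o program program.c
/usr/lib64/mvapich2/1.2-gcc/bin/mpirun_rsh -np 4 -hostfile ./machinefile ./program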

Batch System Access

A job script for a parallel job needs to specify the parallel environment and the number of required CPUs. For up to 8 slots, i.e. 8 MPI processes on a single node, the parameter looks like this:

#$ -pe multicore-mpi 8

For jobs with more MPI processes and no large communication overhead, use -pe mpi.

Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit, use

/usr/lib64/openmpi/1.3.2-gcc/bin/mpirun --mca btl "^udapl" -np $NSLOTS yourapp

The mca option is needed to disable the udapl btl plugin, which currently does not work.
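
Putting this together, a minimal job script for a 64 bit openmpi application might look like the sketch below (yourapp stands for your own binary; further batch options are described in Batch_System_Usage):

# run from the submission directory and request 32 slots in the mpi parallel environment
#$ -cwd
#$ -pe mpi 32
# $NSLOTS is set by SGE to the number of granted slots
/usr/lib64/openmpi/1.3.2-gcc/bin/mpirun --mca btl "^udapl" -np $NSLOTS yourapp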

For more demanding MPI jobs you can select one of the pax blade centers like this in your job script. You can request up to 128 slots, as a blade center contains 128 CPU cores:

#$ -pe pax? 128

/usr/lib64/openmpi/1.3.2-gcc/bin/mpirun --mca btl "^udapl" -np $NSLOTS yourapp

If you want to use mvapich2 instead of openmpi from a batch job, you must first create the file ~/.mpd.conf, which contains one line like this:

MPD_SECRETWORD=password
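
For example (replace password with a secret of your own; mpd refuses to start if the file is readable by anyone but you):

# create ~/.mpd.conf and restrict its permissions to the owner
echo "MPD_SECRETWORD=password" > ~/.mpd.conf
chmod 600 ~/.mpd.conf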

Then use this in your job script:

#$ -pe pax?-mvapich2 16
export MPD_CON_EXT="sge_$JOB_ID.$SGE_TASK_ID"
/usr/lib64/mvapich2/1.2-gcc/bin/mpiexec -n $NSLOTS your_program

AFS Access

The application binary must be available on all nodes, which is why it should be placed in an AFS directory.
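
For example (the target path is hypothetical; use your own AFS home or group directory):

# copy the binary into AFS so every node sees the same file
# (hypothetical path; adjust to your own AFS home or group directory)
cp program /afs/ifh.de/user/$USER/mpi/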

BLAS library

Both ATLAS and GotoBLAS are available.
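
A link step could look like the sketch below; the library names and install locations are assumptions, so check the local installation before using them:

# link against ATLAS (typical library order for the ATLAS BLAS interfaces)
/usr/lib64/openmpi/1.3.2-gcc/bin/mpicc -o program program.c -L/usr/lib64/atlas -lf77blas -lcblas -latlas
# or against GotoBLAS (libgoto is the usual library name)
/usr/lib64/openmpi/1.3.2-gcc/bin/mpicc -o program program.c -lgoto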

Further documentation

HPC-Clusters at DESY Zeuthen, 11/22/06, technical seminar