Usage of the Linux Clusters at DESY Zeuthen

Introduction

There are 8 dedicated parallel clusters (blade centers) in testing mode, but you can also run parallel MPI jobs in the SGE farm. The documentation in Batch_System_Usage applies there.

For discussions and information regarding the usage of the PAX cluster a mailing list has been introduced: <zn-cluster AT desy DOT de>. To get subscribed to that list, send an email to <sympa AT desy DOT de> with the subject: subscribe zn-cluster

Building Applications

Openmpi

Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed. Recently the machines were upgraded to the default SL5.5 packages of openmpi. For 64 bit applications use the installation in /usr/lib64/openmpi/1.4-gcc/bin; for 32 bit use the binaries from /usr/lib/openmpi/1.4-gcc/bin.

Additional openmpi versions are installed to support the Intel and PGI compilers:

/usr/lib64/openmpi/1.4-icc/bin
/usr/lib64/openmpi-1.3.2-pgi/bin

If you don't want to specify the full path to your preferred MPI implementation, configure a default by using the ini command or running mpi-selector-menu on a build machine.
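
If the interactive menu is not convenient, the underlying mpi-selector tool can also be called directly. This is only a sketch; the exact stack name registered on the build machines is an assumption and will be shown by the --list call:

[sl5-64] ~ % mpi-selector --list
[sl5-64] ~ % mpi-selector --set openmpi-1.4-gcc-x86_64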

Building applications

64 bit MPI applications can be compiled on any 64 bit SL5 machine, e.g. sl5-64.ifh.de.
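
For example, compiling a program against the 64 bit gcc build of openmpi could look like this (program.c and the output name are placeholders):

[sl5-64] ~ % /usr/lib64/openmpi/1.4-gcc/bin/mpicc -O2 -o program program.c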

Running your application

To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:

pax8c slots=8
pax8d slots=8
pax8e slots=8
pax8f slots=8

The command line would look like this:

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np 32 -machinefile ./machinefile  ./program

More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/

Mvapich2

In addition to openmpi, two builds of the mvapich2 MPI implementation are installed on all pax machines:

/usr/lib64/mvapich2/1.7-gcc/bin
/usr/lib64/mvapich2/1.7-intel/bin

To use mvapich2, add one of those versions to your path and compile your application with that mpi compiler. To run it outside the batch system, follow these instructions: http://mvapich.cse.ohio-state.edu/overview/mvapich2/
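
For example, in a zsh session on one of the pax machines (program.c is a placeholder):

[pax88] ~ % export PATH=/usr/lib64/mvapich2/1.7-gcc/bin:$PATH
[pax88] ~ % mpicc -O2 -o program program.c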

Applications built with mvapich2 will only run on machines with Infiniband hardware, so they will work on the pax machines but not on desktops, workgroup servers or the farm.

The machine file format is different from the one for openmpi: you must list the host name once for every core you want to use. For example, to run four processes, two on each of pax88 and pax89:

pax88
pax89
pax88
pax89

The preferred way to run an application with mvapich2 is mpiexec.
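
Assuming the mpiexec shipped with mvapich2 1.7 is the hydra launcher from MPICH2 (so -f takes the machine file), starting the four-process example above would look roughly like this:

[pax88] ~ % /usr/lib64/mvapich2/1.7-gcc/bin/mpiexec -f ./machinefile -n 4 ./program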

If you want to use the deprecated mpd startup method instead of mpirun_rsh, you must also first create the file ~/.mpd.conf, which consists of one line like this:

MPD_SECRETWORD=password
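
The file can be created like this; mpd also expects it to be readable by its owner only:

[pax88] ~ % echo 'MPD_SECRETWORD=password' > ~/.mpd.conf
[pax88] ~ % chmod 600 ~/.mpd.conf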

Batch System Access

/!\ ATTENTION: The PAX cluster was split off from the normal Zeuthen batch system. To access the PAX batch system you will need to call ini pax.

Alternatively source a script:

  • zsh users:
    [oreade38] ~ % . /usr/gridengine/pax/common/settings.sh
  • tcsh users:
    [oreade38] ~ $ source /usr/gridengine/pax/common/settings.csh

Switching back to the standard farm works similarly:

  • zsh users:
    [oreade38] ~ % . /usr/gridengine/default/common/settings.sh
  • tcsh users:
    [oreade38] ~ $ source /usr/gridengine/default/common/settings.csh
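
To check which cell is currently active, you can inspect the SGE environment. The output shown is what one would expect from the paths above (cell directory pax below /usr/gridengine), not a captured session:

[oreade38] ~ % echo $SGE_CELL
pax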

Please make sure that your Gridengine certificates are in place:

[oreade38] ~ % ls -l $HOME/.sge/port537
lrwxr-xr-x. 1 ahaupt sysprog 11 Aug 20 09:52 /afs/ifh.de/user/a/ahaupt/.sge/port537 -> sge_qmaster
[oreade38] ~ % ls -l $HOME/.sge/cert.pem
-rw-------. 1 ahaupt sysprog 1464 Aug 20 09:52 /afs/ifh.de/user/a/ahaupt/.sge/cert.pem
[oreade38] ~ % ls -l $HOME/.sge/key.pem
-rw-------. 1 ahaupt sysprog 887 Aug 20 09:52 /afs/ifh.de/user/a/ahaupt/.sge/key.pem

A job script intended for a parallel job needs to specify the parallel environment and the number of required slots. For up to 8 slots, i.e. 8 MPI processes that fit on a single node, the parameter looks like this:

#$ -pe pax 8

Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit, use

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp

The MPI runtime will automatically select the right network type.

You can request up to 1024 slots, as a blade center contains 128 CPU cores and the batch system contains 8 blade centers:

#$ -pe pax 128

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp
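
Putting the pieces together, a complete job script might look like the following sketch. The resource limits (h_rt, h_vmem) and the binary name yourapp are placeholders you have to adapt:

#!/bin/sh
# example pax job script (sketch; adapt the resource limits and binary name)
#$ -cwd
#$ -pe pax 128
#$ -l h_rt=12:00:00
#$ -l h_vmem=2G

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS ./yourapp

Submit it with qsub from a host where the pax cell is active (see above), e.g. qsub pax_job.sh (the file name is a placeholder).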

Finally, here's a list of common pitfalls when using the pax batch system:

  • Please be aware that all requested resources (via the -l qsub switch) are meant per job slot. As the pax nodes only provide 24 GB (8-core systems -> 3 GB per job slot), you cannot request more than 3500 MB h_vmem in your job scripts; otherwise your job won't start! Please make sure your MPI processes don't use more than 3 GB per slot; the memory overcommitment should only be used to cover mpirun overhead in large jobs (>=512 slots).

  • /!\ If your MPI application relies on LD_LIBRARY_PATH to load its shared libraries or modules, this will fail on remote nodes, as the batch system removes this variable from the environment. In that case you'll have to wrap yourapp in a shell script that sets up the environment and then calls your binary application, as sketched below.
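
A minimal sketch of such a wrapper (the library directory /afs/ifh.de/group/yourgroup/lib and the binary name yourapp are placeholders):

#!/bin/sh
# wrapper.sh - re-export the library search path on every node, then start the real binary
export LD_LIBRARY_PATH=/afs/ifh.de/group/yourgroup/lib:$LD_LIBRARY_PATH
exec ./yourapp "$@"

In the job script, start the wrapper instead of the binary, e.g. mpirun -np $NSLOTS ./wrapper.sh.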

Mvapich2

With mvapich2 1.7, there is working integration into the SGE batch system. Just use a command like this:

#$ -pe pax 128

/usr/lib64/mvapich2/1.7-intel/bin/mpiexec -n $NSLOTS yourapp

AFS Access

The application binary must be available on all nodes; that's why it should be placed in an AFS directory.

BLAS library

Both ATLAS and GotoBLAS are available.

  • ATLAS is in /opt/products/atlas
  • libgoto is in /usr/lib or /usr/lib64 respectively.
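
As a sketch, linking a 64 bit C program against GotoBLAS could look like this (blas_test.c is a placeholder; GotoBLAS is usually threaded, hence -lpthread; the library names for ATLAS differ):

[sl5-64] ~ % gcc -O2 -o blas_test blas_test.c -L/usr/lib64 -lgoto -lpthread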

Further documentation

Paralleles Rechnen in Zeuthen - die neuen Cluster (Parallel computing in Zeuthen - the new clusters), 04/27/10, technical seminar

HPC-Clusters at DESY Zeuthen , 11/22/06, technical seminar
