Usage of the Linux Clusters at DESY Zeuthen

There are 8 dedicated parallel clusters (blade centers) in testing mode, but you can also run parallel MPI jobs in the SGE farm. The documentation in Batch_System_Usage applies there.

For discussions and information regarding the usage of the PAX cluster, a mailing list has been set up: <zn-cluster AT desy DOT de>. To subscribe to the list, send an email to <sympa AT desy DOT de> with the subject: subscribe zn-cluster

Building Applications

Openmpi

Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed. Recently the machines were upgraded to the default SL5.5 packages of openmpi. For 64 bit applications use the installation in /usr/lib64/openmpi/1.4-gcc/bin, for 32 bit applications use the binaries from /usr/lib/openmpi/1.4-gcc/bin.
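
As a quick check, you can put the matching bin directory at the front of your PATH and ask the compiler wrapper what it does. This is a minimal sketch in bash/zsh syntax; adjust the path for 32 bit builds:

export PATH=/usr/lib64/openmpi/1.4-gcc/bin:$PATH
which mpicc        # should now report /usr/lib64/openmpi/1.4-gcc/bin/mpicc
mpicc --showme     # prints the underlying gcc command line and MPI flags used by the wrapper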

Additional openmpi versions are installed to support the Intel and PGI compilers:

/usr/lib64/openmpi/1.4-icc/bin
/usr/lib64/openmpi-1.3.2-pgi/bin

If you don't want to specify the full path to your preferred MPI implementation, configure a default by using the ini command or running mpi-selector-menu on a build machine.
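
A non-interactive sketch of picking a default with mpi-selector; the implementation name is a placeholder, take one from the --list output:

mpi-selector --list                             # show the MPI installations registered on this machine
mpi-selector --set <name-from-the-list> --user  # set your personal default; takes effect in new login shells
mpi-selector --query                            # verify the current default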

Building applications

64 bit MPI applications can be compiled on any 64 bit SL5 machine, e.g. sl5-64.ifh.de.
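
For example (program.c and program.f90 are placeholders for your own sources):

# on a 64 bit SL5 machine, e.g. sl5-64.ifh.de
/usr/lib64/openmpi/1.4-gcc/bin/mpicc  -O2 -o program program.c    # C
/usr/lib64/openmpi/1.4-gcc/bin/mpif90 -O2 -o program program.f90  # Fortran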

Running your application

To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:

pax0c slots=8
pax0d slots=8
pax0e slots=8
pax0f slots=8

The command line would then look like this (32 processes, matching the 4 nodes x 8 slots in the machinefile):

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np 32 -machinefile ./machinefile  ./program

More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/

Mvapich / Mvapich2

Three additional mpi implementations are installed on all pax machines:

/usr/lib64/mvapich/1.2.0-gcc/bin
/usr/lib64/mvapich2/1.4-gcc/bin
/usr/lib64/mvapich2/1.4-intel/bin

To use mvapich, add one of those versions to your path, compile your application with that mpi compiler and run it as specified here: http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.4.html#x1-160005.2

The machine file format is different from the one for openmpi: you must list the host name once for every core you want to use, e.g. if you want to run four processes, two on each of pax08 and pax09:

pax08
pax09
pax08
pax09

You must also first create the file ~/.mpd.conf, which contains one line like this:

MPD_SECRETWORD=password
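
A rough sketch of the mpd based workflow for mvapich2; ./mpd.hosts is a placeholder file listing each host once per line, and the exact steps should be checked against the user guide linked above:

chmod 600 ~/.mpd.conf                               # mpd refuses to run if the file is readable by others
export PATH=/usr/lib64/mvapich2/1.4-gcc/bin:$PATH
mpdboot -n 2 -f ./mpd.hosts                         # start a ring of mpd daemons on the two hosts
mpiexec -machinefile ./machinefile -n 4 ./program   # machinefile in the per-core format shown above
mpdallexit                                          # shut down the mpd ring when done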

Batch System Access

/!\ ATTENTION: The PAX cluster was split off from the normal Zeuthen batch system. To access the PAX batch system you will need to call ini pax.

Alternatively source a script:

  • zsh users:
    [oreade38] ~ % . /usr/gridengine/pax/common/settings.sh
  • tcsh users:
    [oreade38] ~ $ source /usr/gridengine/pax/common/settings.csh

Switching back to use the standard farm works similarly:

  • zsh users:
    [oreade38] ~ % . /usr/gridengine/default/common/settings.sh
  • tcsh users:
    [oreade38] ~ $ source /usr/gridengine/default/common/settings.csh

A job script for a parallel job needs to specify the parallel environment and the number of required CPUs. For example, to request 8 slots for 8 MPI processes on a single node:

#$ -pe multicore-mpi 8

For jobs with more MPI processes and no big communication overhead, use -pe mpi.

Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit, use:

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp

The MPI runtime will automatically select the right network type.
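
Putting the pieces together, a minimal job script could look like this (the shell, the h_vmem value and the program name are examples only; submit it with qsub after switching to the pax cell as described above):

#!/bin/zsh
#$ -cwd
#$ -pe multicore-mpi 8
#$ -l h_vmem=2G       # requested per job slot, see the pitfalls below
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS ./yourapp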

For more demanding MPI jobs you can select one of the pax blade centers like this in your job script. You can request up to 128 slots, as a blade center contains 128 CPU cores:

#$ -pe pax? 128

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp

/!\ Currently, only openmpi is supported in the batch system; mvapich2 does not work.

Finally, here's a list of common pitfalls when using the pax batch system:

  • Please be aware that all requested resources (via the -l qsub switch) are meant per job slot. As the pax nodes only provide 24GB (8 core systems -> 3GB per job slot), you cannot request more than 3GB h_vmem in your job scripts. Otherwise your job won't start!
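
In other words, for a hypothetical 8 slot job on a single node:

#$ -pe multicore-mpi 8
#$ -l h_vmem=3G       # 8 slots x 3G = 24G, which just fits into a node
## a request of 4G per slot would never be scheduled: 8 x 4G exceeds the 24GB of a node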

AFS Access

The application binary must be available on all nodes; that's why it should be placed in an AFS directory.
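
For example (the AFS path is a placeholder for a directory you own):

# copy the binary to a directory that every node can see:
cp ./yourapp /afs/<your AFS directory>/yourapp
# and use the full path in the mpirun call of your job script:
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS /afs/<your AFS directory>/yourapp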

BLAS library

Both ATLAS and GotoBLAS are available.

  • ATLAS is in /opt/products/atlas
  • libgoto is in /usr/lib (32 bit) or /usr/lib64 (64 bit), respectively.
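
A linking sketch; the library names and the lib subdirectory are assumptions based on standard ATLAS and GotoBLAS installations, so check the actual contents of the directories above:

# against ATLAS
gcc -o app app.c -L/opt/products/atlas/lib -llapack -lcblas -lf77blas -latlas
# against GotoBLAS (already in the default linker path)
gcc -o app app.c -lgoto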

Further documentation

Paralleles Rechnen in Zeuthen - die neuen Cluster (Parallel Computing in Zeuthen - the new clusters), 04/27/10, technical seminar: http://www-zeuthen.desy.de/technisches_seminar/texte/waschk_20100427.pdf

HPC-Clusters at DESY Zeuthen, 11/22/06, technical seminar: http://www-zeuthen.desy.de/technisches_seminar/texte/Technisches_Seminar_Waschk.pdf
