Usage of the Linux Clusters at DESY Zeuthen

There are 4 dedicated parallel clusters (blade centers) in testing mode, but you can also run parallel MPI jobs in the SGE farm. The documentation in Batch_System_Usage applies there.

Building Applications

Openmpi

Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed. Recently the machines were upgraded to the default SL5.4 packages of openmpi. For 64 bit applications use the installation in /usr/lib64/openmpi/1.3.2-gcc/bin; for 32 bit use the binaries from /usr/lib/openmpi/1.3.2-gcc/bin.

Additional openmpi versions are installed to support the Intel and PGI compilers:

/usr/lib64/openmpi-1.3.2-intel/bin
/usr/lib64/openmpi-1.3.2-pgi/bin

If you don't want to specify the full path to your preferred MPI implementation, configure a default by using the ini command or running mpi-selector-menu on a build machine.
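
As a minimal sketch, you can also prepend the directory of your preferred version to your PATH for the current session only (adjust the path to the version you want):

export PATH=/usr/lib64/openmpi/1.3.2-gcc/bin:$PATH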

Building applications

64 bit MPI applications can be compiled on any 64 bit SL5 machine, e.g. sl5-64.ifh.de.
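
For example, a compile step with the 64 bit gcc build of openmpi could look like this (yourapp.c is only a placeholder for your own source file):

/usr/lib64/openmpi/1.3.2-gcc/bin/mpicc -o yourapp yourapp.c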

Mvapich / Mvapich2

Two additional MPI implementations are installed on all pax machines:

/usr/lib64/mvapich/1.1.0-gcc/bin
/usr/lib64/mvapich2/1.2-gcc/bin

To use MVAPICH, add one of these directories to your PATH, compile your application with that MPI compiler and run it as described here: http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.4.html#x1-160005.2
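
As a sketch, building against mvapich2 could look like this (yourapp.c is only a placeholder; how to start the resulting program is described in the user guide linked above):

export PATH=/usr/lib64/mvapich2/1.2-gcc/bin:$PATH
mpicc -o yourapp yourapp.c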

Batch System Access

A job script for a parallel job needs to specify the parallel environment and the number of required CPUs. The parameter looks like this for up to 8 slots on a single node:

#$ -pe multicore-mpi 8

For more MPI processes, which will be distributed across several nodes, use -pe mpi.
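
For example, to request 16 slots (the slot count is only illustrative):

#$ -pe mpi 16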

Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit, use

/usr/lib64/openmpi/1.3.2-gcc/bin/mpirun --mca btl "^udapl" -np $NSLOTS yourapp

The --mca option is needed to disable the udapl btl plugin, which currently does not work.
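
Putting these pieces together, a minimal job script for a 64 bit openmpi application might look like this (the shell, slot count and program name are only illustrative; see Batch_System_Usage for the general SGE options):

#!/bin/bash
#$ -cwd
#$ -pe mpi 16
/usr/lib64/openmpi/1.3.2-gcc/bin/mpirun --mca btl "^udapl" -np $NSLOTS yourapp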

AFS Access

The application binary must be available on all nodes, which is why it should be placed in an AFS directory.
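
For example (the target path is only a placeholder for a directory in your AFS home or group space):

cp yourapp /path/to/your/afs/directory/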

BLAS library

Both ATLAS and GotoBLAS are available.

  • ATLAS is in /opt/products/atlas
  • libgoto is in /usr/lib (32 bit) or /usr/lib64 (64 bit).
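
As an illustration, linking a program against these libraries might look like this (library names and the lib subdirectory are assumptions, check the installation in /opt/products/atlas; yourapp.c is a placeholder):

gcc -o yourapp yourapp.c -lgoto
gcc -o yourapp yourapp.c -L/opt/products/atlas/lib -lcblas -latlas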

Further documentation

HPC-Clusters at DESY Zeuthen, 11/22/06, technical seminar
