Differences between revisions 1 and 78 (spanning 77 versions)
Revision 1 as of 2006-02-20 15:33:34
Size: 1710
Editor: GötzWaschk
Comment:
Revision 78 as of 2015-02-16 11:56:23
Size: 8099
Editor: GötzWaschk
Comment: pax8f
Deletions are marked like this. Additions are marked like this.
Line 2: Line 2:
<<TableOfContents>>
Line 3: Line 4:
At Zeuthen a cluster of 16 dual opteron machines is available. It is integrated into the SGE batch system. The documentation in ["Batch System Usage"] applies to it.
== Introduction ==
There are 9 dedicated parallel clusters (blade centers) available for running parallel applications, but you can also run parallel MPI jobs in the SGE farm. The documentation in [[Batch_System_Usage]] applies there.

For discussions and information regarding the usage of the PAX cluster a mailing list has been introduced: <<MailTo(zn-cluster AT desy DOT de)>>. To get subscribed to that list, send an email to <<MailTo(sympa AT desy DOT de)>> with the subject: '''subscribe zn-cluster'''

== Hardware ==
The PAX cluster consists of an interactive and a batch part. The interactive part is a blade center with 15 blade servers configured as workgroup servers. You can interactively log into the machines pax80 to pax8e to build and test your programs. Please don't use these machines to run long production code, use the batch system instead.

The batch part consists of 8 blade centers with 16 nodes each, connected via a QDR infiniband network.
Line 6: Line 15:
All cluster machines have been migrated to SL6. On SL6, use the 'module' command to add one of the MPI implementations to your path.
Line 7: Line 17:
Applications for the cluster must be compiled on a 64 bit machine, at the moment, this means either lx64 or linfini. There are MPI versions for the GCC, Intel and PGI compilers installed:
=== Openmpi ===
SL6.6 ships with openmpi 1.8.1. The paths to the runtime have changed from SL5. Also, you must rebuild your application for the new ABI. The paths to the openmpi versions are:
Line 9: Line 20:
/usr/local/ibgd/mpi/osu/gcc/mvapich-0.9.5/bin/mpicc
{{{
/usr/lib/openmpi/bin
/usr/lib64/openmpi/bin
/usr/lib64/openmpi-intel/bin
}}}
Instead of 'ini', please use the 'module' command to add an MPI compiler to your path, e.g. {{{module add openmpi-x86_64-intel}}}.
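The typical module workflow looks like the following minimal sketch; apart from the Intel Open MPI module named above, the exact module names on a given machine are best checked with 'module avail':

{{{
module avail                      # list the installed modules
module add openmpi-x86_64-intel   # load the Intel build of Open MPI, as documented above
module list                       # show the modules currently loaded in this shell
}}}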
Line 11: Line 27:
/usr/local/ibgd/mpi/osu/intel/mvapich-0.9.5/bin/mpicc
==== Building applications ====
Build your application on any SL6 workgroup server, e.g. the pax8 machines pax80 to pax8e or the machine sl6.
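As a minimal sketch of such a build (the source file name hello.c is only a placeholder), using the GCC Open MPI installation:

{{{
# compile a hypothetical MPI program with the GCC build of Open MPI
/usr/lib64/openmpi/bin/mpicc -O2 -o hello hello.c
}}}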
Line 13: Line 30:
/usr/local/ibgd/mpi/osu/pgi/mvapich-0.9.5/bin/mpicc
==== Running your application interactively on pax8 ====
To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:
Line 15: Line 33:
{{{
pax8b slots=8
pax8c slots=8
pax8d slots=8
pax8e slots=8
}}}
The command line would look like this:
Line 16: Line 41:
Compilers for C++ and Fortran are available as well.
{{{
/usr/lib64/openmpi/bin/mpirun -np 32 -machinefile ./machinefile ./program
}}}
More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/
Line 18: Line 46:
=== Mvapich2 ===
Two additional MPI installations (mvapich2) are available on all pax machines: one built with GCC and one with the Intel compiler.

The paths have changed from SL5 to

{{{
/usr/lib64/mvapich2/bin
/usr/lib64/mvapich2-intel/bin
}}}
==== Building and running programs interactively ====
To use mvapich2, add one of those versions to your path and compile your application with that MPI compiler. To run it outside the batch system, follow these instructions: http://mvapich.cse.ohio-state.edu/overview/mvapich2/
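A minimal sketch of such a build, assuming a placeholder source file program.c and the GCC build of mvapich2:

{{{
# put the mvapich2 GCC build first in the PATH, then compile with its MPI wrapper
export PATH=/usr/lib64/mvapich2/bin:$PATH
mpicc -O2 -o program program.c
}}}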

Applications built with mvapich2 will only run on machines with Infiniband hardware, so they will work on the pax machines but not on desktops, workgroup servers or the farm.

The machine file format is different from the one for openmpi: you must list the host name once for every core you want to use. For example, to run four processes, two on each of pax88 and pax89:

{{{
pax88
pax89
pax88
pax89
}}}
The preferred way to run an application with mvapich2 is mpiexec.
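A hedged example, reusing the machine file above; the options follow the standard mpiexec (Hydra) interface and may need adjusting for the installed mvapich2 version:

{{{
/usr/lib64/mvapich2/bin/mpiexec -machinefile ./machinefile -n 4 ./program
}}}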
Line 20: Line 71:
/!\ '''ATTENTION''': The PAX cluster was split off from the normal Zeuthen batch system. To access the PAX batch system you will need to call `ini pax`.
Line 21: Line 73:
A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. The parameter looks like this:
Alternatively, source a script:
Line 23: Line 75:
#$ -pe mpich-ppn2 4
 * zsh users:
 {{{
[oreade38] ~ % . /usr/gridengine/pax/common/settings.sh
}}}
 * tcsh users:
 {{{
[oreade38] ~ $ source /usr/gridengine/pax/common/settings.csh
}}}
 Switching back to use the standard farm works similarly:
 * zsh users:
 {{{
[oreade38] ~ % . /usr/gridengine/default/common/settings.sh
}}}
 * tcsh users:
 {{{
[oreade38] ~ $ source /usr/gridengine/default/common/settings.csh
}}}
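After sourcing one of these scripts, a quick sanity check confirms that the environment points at the intended Gridengine installation (a hedged sketch; the exact variable values are site-specific):

{{{
echo $SGE_ROOT $SGE_CELL   # should now refer to the pax (or default) installation
qstat -g c                 # list the cluster queues of the selected installation
}}}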
Line 25: Line 93:
It is important to request the right limit for memory with the parameter h_vmem. The machines have 3673204k of RAM and by default two jobs are executed on one node, so the maximal amount of memory is 1650M per process.
Please make sure that your Gridengine certificates are in place:

{{{
[oreade38] ~ % ls -l $HOME/.sge/port537
lrwxr-xr-x. 1 ahaupt sysprog 11 Aug 20 09:52 /afs/ -> sge_qmaster
[oreade38] ~ % ls -l $HOME/.sge/cert.pem
-rw-------. 1 ahaupt sysprog 1464 Aug 20 09:52 /afs/
[oreade38] ~ % ls -l $HOME/.sge/key.pem
-rw-------. 1 ahaupt sysprog 887 Aug 20 09:52 /afs/
}}}
A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. For 8 slots, i.e. 8 MPI processes, the parameter looks like this:

{{{
#$ -pe pax 8
}}}
Be aware that the allocation rule for the pax parallel environment may distribute the processes across up to 8 nodes. To force a node-based allocation, use one of the numbered PEs, e.g. like this:

{{{
#$ -pe pax5 64
}}}
or for 16 processes per node on the latest hardware:

{{{
#$ -pe pax9 256
}}}
Bugs in the batch system implementation made wildcard selection of PEs impossible; be aware that {{{-pe pax?}}} is rewritten as {{{-pe pax}}} automatically.

Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit on SL6, use

{{{
/usr/lib64/openmpi/bin/mpirun -np $NSLOTS yourapp
}}}
The MPI runtime will automatically select the right network type.

You can request up to 1024 slots, as a blade center contains 128 CPU cores and the batch system contains 8 blade centers:

{{{
#$ -pe pax 128

/usr/lib64/openmpi/bin/mpirun -np $NSLOTS yourapp
}}}
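Putting the pieces together, a complete job script might look like the following sketch (the script and the binary ''yourapp'' are placeholders; the resource values follow the limits described in the pitfalls below):

{{{
#!/bin/zsh
#
#$ -cwd              # run in the submission directory
#$ -pe pax 32        # request 32 slots in the pax parallel environment
#$ -l h_vmem=3G      # memory limit per job slot, see the pitfalls below

# $NSLOTS is set by the batch system to the number of granted slots
/usr/lib64/openmpi/bin/mpirun -np $NSLOTS yourapp
}}}
Such a script is submitted with qsub as usual.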
Finally, here's a list of common pitfalls when using the pax batch system:

 * Please be aware that all requested resources (via the '''-l''' qsub switch) are meant '''per job slot'''. As the pax nodes only provide 24GB (8 core systems -> 3GB per job slot), you cannot request more than 3500 MB h_vmem in your job scripts. Otherwise your job won't start! Please make sure your MPI processes don't use more than 3GB per slot; the memory overcommitment should be used for mpirun overhead for large jobs (>=512 slots) only.
 * /!\ If your MPI application relies on LD_LIBRARY_PATH to load its shared libraries or modules, this will fail on remote nodes, as the batch system removes this variable from the environment. In that case you'll have to wrap ''yourapp'' in a shell script that sets up the environment and calls your binary application; a minimal wrapper sketch follows this list.
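A minimal wrapper sketch (the library path and the binary name ''yourapp'' are placeholders, not site defaults):

{{{
#!/bin/sh
# set up the library search path on the execution node, then start the real binary
export LD_LIBRARY_PATH=/path/to/your/libs:$LD_LIBRARY_PATH
exec ./yourapp "$@"
}}}
Pass this wrapper to mpirun instead of the binary itself.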

=== Mvapich2 ===
With mvapich2 1.7, there is working integration into the SGE batch system. Just use a command like this:

{{{
#$ -pe pax 128

/usr/lib64/mvapich2/bin/mpiexec -n $NSLOTS yourapp
}}}
== SL6 changes ==
As the versions and paths of the MPI implementations have changed, programs are not compatible between SL5 and SL6; you must rebuild your application on SL6. You will also have to rebuild on SL6.6, as it ships yet another incompatible version of mvapich2.

The 'ini' command is no longer in use for selecting MPI versions; it was replaced by the very similar 'module'. The command 'module avail' lists the installed modules. To load Open MPI for the Intel compiler, use the command 'module add openmpi-x86_64-intel'.
Line 28: Line 154:
Line 31: Line 156:
Be aware that the batch system renews the AFS token, but only on the node that starts the first process (node 0). That's why you should access the AFS from that node. An example scenario looks like this:
== BLAS library ==
Both ATLAS and GotoBLAS are available.
Line 33: Line 159:
 1. Copy data from AFS to node 0.
 1. Copy it with scp to the nodes that need it to the directory $TMPDIR, the machine names are in $TMPDIR/machines
 1. Run your MPI job.
 1. Copy the results with scp from the local discs to node 0.
 1. Copy the data from node 0 to AFS.
 * ATLAS is in /opt/products/atlas

 * libgoto is in /usr/lib or /usr/lib64 respectively.
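A hedged link example (the ATLAS subdirectory layout and the exact library names are assumptions; check the installation under /opt/products/atlas):

{{{
# link against ATLAS (C BLAS interface) ...
gcc -o app app.c -L/opt/products/atlas/lib -lcblas -latlas
# ... or against GotoBLAS from the system library path
gcc -o app app.c -lgoto
}}}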

== Monitoring ==
Ganglia provides a web monitoring interface. These pages are only available from the internal network.

[[http://ganglia.zeuthen.desy.de/ganglia/?c=Parallel%20Clusters&m=load_one&r=hour&s=descending&hc=4&mc=2|interactive machines]] [[http://ganglia.zeuthen.desy.de/ganglia/?c=Gridengine%20PAX%20Farm&m=load_one&r=hour&s=descending&hc=4&mc=2|parallel batch machines]]

== Further documentation ==
[[http://www-zeuthen.desy.de/technisches_seminar/texte/waschk_20100427.pdf|Paralleles Rechnen in Zeuthen - die neuen Cluster]] , 04/27/10, technical seminar

[[http://www-zeuthen.desy.de/technisches_seminar/texte/Technisches_Seminar_Waschk.pdf|HPC-Clusters at DESY Zeuthen]] , 11/22/06, technical seminar
