Differences between revisions 102 and 104 (spanning 2 versions)
Revision 102 as of 2017-12-06 12:05:29
Size: 10010
Editor: GötzWaschk
Comment:
Revision 104 as of 2017-12-15 14:12:15
Size: 9892
Editor: GötzWaschk
Comment:
Deletions are marked like this. Additions are marked like this.
Line 25: Line 25:
Use the 'module' command to add one of the MPI implementations to your path:
||module name||version||compiler version||origin||
||openmpi-1.8-x86_64||1.8.1||gcc 4.4.7||Red Hat build||
||openmpi-1.10-x86_64||1.10.2||gcc 4.4.7||Red Hat build||
||openmpi-x86_64-intel||1.8.1||icc 15.0.1||self-maintained||
||openmpi-1.10-x86_64-intel||1.10.2||icc 17.0.3||self-maintained||
||mvapich2-x86_64||2.0rc1||gcc 4.4.7||Red Hat build||
||mvapich2-x86_64-intel||2.0rc1||icc 17.0.3||self-maintained||
Use the 'module' command to first add a compiler implementation and then a version of MPI to your path, e.g.:
{{{
module add gnu mvapich2
}}}
OpenHPC provides the {{{module}}} command from the lmod project. It supports more features than the old environment-modules, including dependent modules, which are shown only after loading the prerequisites; e.g. for {{{openmpi}}} you'll have to load the {{{intel}}} module first.
||module name ||version ||depends on ||
||gnu ||5.4.0 || ||
||gnu7||7.2.0 || ||
||intel ||18.0.0 || ||
||openmpi ||1.10.6 ||gnu ||
||openmpi ||1.10.7 ||intel ||
||openmpi3||3.0.0||gnu7/intel||
||mvapich2 ||2.2 ||gnu/gnu7/intel ||
||opencoarrays ||1.8.5 || ||
Line 35: Line 41:
=== Openmpi ===
We ship with several versions that are not binary compatible. Be sure to use the right runtime. The paths are:
=== Building applications and interactively running them ===
Build your application on any SL7 workgroup server, e.g. the pax8 machines pax80 to pax8f or the machine sl7.
Line 38: Line 44:
{{{
/usr/lib64/openmpi/bin
/usr/lib64/openmpi-1.10/bin
/usr/lib64/openmpi-1.10-intel/bin
/usr/lib64/openmpi-intel/bin
}}}
Instead of 'ini', please use the 'module' command to add an MPI compiler to your path, e.g. {{{ module add openmpi-x86_64-intel}}} .

==== Building applications ====
Build your application on any SL6 workgroup server, e.g. the pax8 machines pax80 to pax8f or the machine sl6.

==== Running your application interactively on pax8 ====
==== OpenMPI ====
Line 61: Line 56:
/usr/lib64/openmpi/bin/mpirun -np 32 -machinefile ./machinefile ./program
/opt/ohpc/pub/mpi/openmpi-gnu/1.10.6/bin/mpirun -np 32 -machinefile ./machinefile ./program
Line 64: Line 59:
=== Mvapich2 ===
Two additional MPI implementations are installed on all pax machines, one GCC and one Intel compiler version.
Line 67: Line 60:
The paths are

{{{
/usr/lib64/mvapich2/bin
/usr/lib64/mvapich2-intel/bin
}}}
==== Building and running programs interactively ====
==== Mvapich2 ====
Line 90: Line 77:
/!\ '''ATTENTION''': The PAX cluster was split off the normal Zeuthen batch. To access the PAX batch system you will need to call `ini pax`.
/!\ '''ATTENTION''': The PAX cluster is now based on the SLURM scheduling system.
=== Slurm Commands ===
The most important commands:
||[[http://slurm.schedmd.com/sinfo.html|sinfo]] ||Information about the cluster ||
||[[http://slurm.schedmd.com/squeue.html|squeue]] ||Show current job list ||
||[[http://slurm.schedmd.com/srun.html|srun]] ||Parallel command execution ||
||[[http://slurm.schedmd.com/sbatch.html|sbatch]] ||Submit a batch job ||
||[[http://slurm.schedmd.com/salloc.html|salloc]] ||Reserve resources for interactive commands ||
||[[http://slurm.schedmd.com/scancel.html|scancel]] ||Abort a job ||
||[[http://slurm.schedmd.com/sacct.html|sacct]] ||Show accounting information ||
Line 92: Line 88:
Alternatively source a script:
=== Allocation ===
Slurm was configured to always schedule complete nodes to each job. The pax machines have hyperthreading enabled; each hardware thread is seen as a CPU core by Slurm, so by default 64 MPI processes are assigned on a 32-core machine with hyperthreading. To prevent that, use the option {{{-c 2}}} for sbatch, salloc or srun.
=== Parallel Execution ===
Slurm has integrated execution support for parallel programs, replacing mpirun. To work around slight differences in the needed options, use prun instead of srun for starting MPI applications. You'll have to load the prun module first.
Line 94: Line 93:
 * zsh users:
 {{{
[oreade38] ~ % . /usr/gridengine/pax/common/settings.sh
=== MPI Support ===
Before running MPI programs, the LD_LIBRARY_PATH variable must be set; this is done by loading the right environment module, e.g. {{{module add intel openmpi}}}.

=== Job scripts ===
Parameters for Slurm can be set on the sbatch command line or in script lines starting with {{{#SBATCH}}}. The most important parameters are:
||-J ||job name ||
||--get-user-env ||copy environment variables ||
||-n ||number of cores ||
||-N ||number of nodes ||
||-t ||run time of the job, default is 30 minutes ||
||-A ||account, default the same as UNIX group ||
||-p ||partition of the cluster ||
||--mail-type ||configure email notifications, e.g. use --mail-type=ALL ||


==== Time format ====
The run time of a job is given as minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH). The maximum run time was set to 48 hours.

==== Examples ====
An example job script is in [[attachment:slurm-mpi.job]]

=== Accounting ===
The jobs and their resource usage are stored in a database that is used for the fair share part of the scheduler. You can view your account's jobs with the command {{{sacct}}}. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command {{{sacct -S 2014-05-01}}}. To view jobs from other accounts as well, use the {{{--allusers}}} option.


=== Local Disk Space ===
Each node has a local directory /scratch with up to 1TB of space. It is cleared automatically at the end of the job.

=== pax10 and pax11 I/O nodes ===
Most of the pax10 and pax11 machines have external 1 Gbit/s Ethernet connections to the storage. To allow faster storage access, four machines each in the pax10 and pax11 partitions are equipped with 10 Gbit/s Ethernet instead. To access them, you'll have to request the 10g feature in Slurm: {{{ --constraint=10g*1}}}. That way, the first process, the one executing the job script, will run on one of the machines with faster connectivity.
== SL7 changes ==
As the versions and paths of the MPI implementations have changed, programs are not compatible between SL6 and SL7. You should rebuild your application on SL7, but you could also try singularity.

The 'module' command was replaced by a different, more powerful implementation called lmod. It doesn't list all available modules by default; instead it supports dependent modules, e.g. the MPI implementations built with 'gnu7' are shown after {{{module add gnu7}}}.
==== Running EL6 software using Singularity ====
It is possible to run software built on EL6 in a [[Singularity]] container. This works with mvapich2 binaries by calling singularity in the batch script like this:
{{{
mpiexec singularity exec /project/singularity/images/SL6.img yourbinary
Line 98: Line 132:
 * tcsh users:
 {{{
[oreade38] ~ $ source /usr/gridengine/pax/common/settings.csh
However, Mvapich2 2.2 isn't optimized yet for Singularity, so this is slower than running native programs.


For Openmpi, Singularity is supported in Openmpi >= 2.1, so you'll have to rebuild your program with openmpi3 as installed in the SL6 singularity container:
{{{
singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6
Line 102: Line 139:
 Switching back to use the standard farm works similarly:
 * zsh users:
 {{{
[oreade38] ~ % . /usr/gridengine/default/common/settings.sh
and in the job script:
{{{
module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6
Line 107: Line 144:
 * tcsh users:
 {{{
[oreade38] ~ $ source /usr/gridengine/default/common/settings.csh
}}}

Please make sure that your Gridengine certificates are in place:

{{{
[oreade38] ~ % ls -l $HOME/.sge/port6443
lrwxr-xr-x. 1 ahaupt sysprog 11 Aug 20 09:52 /afs/
 -> sge_qmaster
[oreade38] ~ % ls -l $HOME/.sge/cert.pem
-rw-------. 1 ahaupt sysprog 1464 Aug 20 09:52 /afs/
[oreade38] ~ % ls -l $HOME/.sge/key.pem
-rw-------. 1 ahaupt sysprog 887 Aug 20 09:52 /afs/
}}}
A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. The parameter looks like this for up to 8 slots for 8 MPI processes:

{{{
#$ -pe pax 8
}}}
Be aware that the allocation rule for the pax parallel environment may distribute the processes across up to 8 nodes. To force a node-based allocation, use one of the numbered PEs, e.g. like this:

{{{
#$ -pe pax5 64
}}}
or for 16 processes per node on the latest hardware:

{{{
#$ -pe pax10 512
}}}
Bugs in the batch system implementation made using wild card selection of PEs impossible; be aware that {{{-pe pax?}}} is rewritten as {{{-pe pax}}} automatically.

Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit on SL6, use

{{{
/usr/lib64/openmpi/bin/mpirun -np $NSLOTS yourapp
}}}
The MPI runtime will automatically select the right network type.

You can request up to 1024 slots, as a blade center contains 128 CPU cores and the batch system contains 8 blade centers:

{{{
#$ -pe pax 128

/usr/lib64/openmpi/bin/mpirun -np $NSLOTS yourapp
}}}

Finally, here's a list of common pitfalls when using the pax batch system:

 * Please be aware that all requested resources (via the '''-l''' qsub switch) are meant '''per job slot'''. As the pax nodes only provide 24GB (8 core systems -> 3GB per job slot), you cannot request more than 3 GB h_rss in your job scripts. Otherwise your job won't start! Please make sure your MPI processes don't use more than 3GB per slot; the memory overcommitment should be used for mpirun overhead for large jobs (>=512 slots) only.
 * /!\ If your MPI application relies on LD_LIBRARY_PATH to load its shared libraries or modules, this will fail on remote nodes, as the batch system will remove this variable from the environment. In that case you'll have to wrap ''yourapp'' in a shell script that sets up the environment and calls your binary application.

=== pax10 I/O nodes ===
Most of the pax10 machines have external 1 Gbit/s Ethernet connections to the storage. To allow faster storage access, four machines are equipped with 10 Gbit/s Ethernet instead. To access them, specify {{{#$ -masterq pax10-master.q}}} . That way, the first process, the one executing the job script, will run on one of the machines with faster connectivity.
=== Hybrid Openmp/MPI jobs ===
Jobs that use both Openmp threads and MPI for communication must not run more threads than the number of physical cores. To run 4 threads and 2 MPI processes on two nodes, use this command line:
{{{
#$ -pe pax 4
#$ -l h_rss=12G
export OMP_NUM_THREADS=4
/usr/lib64/openmpi/bin/mpirun -np 8 -machinefile pax8e-f -map-by socket:PE=2 mpi-program
}}}
If your Openmp program was built with the Intel compiler, you must run a wrapper script instead of the MPI binary; the wrapper sets the LD_LIBRARY_PATH variable to the Intel compiler home, since you cannot do that in the job script:
{{{export LD_LIBRARY_PATH=/opt/intel/2015/lib/intel64:$LD_LIBRARY_PATH}}}
=== Mvapich2 ===
With mvapich2 1.7, there is working integration into the SGE batch system. Just use a command like this:

{{{
#$ -pe pax 128

/usr/lib64/mvapich2/bin/mpiexec -env MV2_USE_RDMA_CM 1 -n $NSLOTS yourapp
}}}
== SL6 changes ==
As the versions and paths of the MPI implementations have changed, programs are not compatible between SL5 and SL6. You must rebuild your application on SL6. You'll also have to rebuild your application on SL6.6, as it contains another incompatible version of mvapich2.

The 'ini' command is no longer in use for selecting MPI versions, it was replaced by the very similar 'module'. The command 'module avail' lists the installed modules. To load Open-MPI for the Intel compiler, use the command 'module add openmpi-x86_64-intel'.
Line 193: Line 152:
== Known Issues ==
 1. Openmpi3 has a bug that makes the program hang in certain situations: https://github.com/open-mpi/ompi/issues/3251 https://www.mail-archive.com/users@lists.open-mpi.org//msg31839.html
 1. You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set {{{noaddresses=true}}} in the file {{{/etc/krb5.conf}}}. To check if your ticket is addressless, call {{{klist -v}}} (Heimdal klist only).

/!\ This web page will no longer be updated. Please use this link for current information.



= Usage of the Linux Clusters at DESY Zeuthen =

== Introduction ==

There are 3 dedicated parallel clusters (blade centers, Miriquid compute nodes) available for running parallel applications, but you can also run parallel MPI jobs in the SGE farm. The documentation in Batch System Usage applies there.

For discussions and information regarding the usage of the PAX cluster a mailing list has been introduced: <zn-cluster AT desy DOT de>. To get subscribed to that list, send an email to <sympa AT desy DOT de> with the subject: subscribe zn-cluster

== Hardware ==

The PAX cluster consists of an interactive and a batch part. The interactive part is a blade center with 16 blade servers configured as workgroup servers. You can interactively log into the machines pax80 to pax8f to build and test your programs. Please don't use these machines to run long production code; use the batch system instead.

The batch part consists of three separate partitions that are not interconnected: pax11 and pax10 each consist of 32 compute nodes, connected via an FDR Infiniband network. The older system is pax9, 16 nodes connected by a QDR Infiniband network.

=== Nodes ===

All nodes have two CPUs (sockets).

||Name ||CPU ||Memory ||
||pax9[0-f] ||Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz ||48G ||
||pax10-[00-31] ||Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz ||64G ||
||pax11-[00-31] ||Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz ||128G ||

== Building Applications ==

Use the 'module' command to first add a compiler implementation and then a version of MPI to your path, e.g.:

{{{
module add gnu mvapich2
}}}

OpenHPC provides the {{{module}}} command from the lmod project. It supports more features than the old environment-modules, including dependent modules, which are shown only after loading the prerequisites; e.g. for {{{openmpi}}} you'll have to load the {{{intel}}} module first.

||module name ||version ||depends on ||
||gnu ||5.4.0 || ||
||gnu7 ||7.2.0 || ||
||intel ||18.0.0 || ||
||openmpi ||1.10.6 ||gnu ||
||openmpi ||1.10.7 ||intel ||
||openmpi3 ||3.0.0 ||gnu7/intel ||
||mvapich2 ||2.2 ||gnu/gnu7/intel ||
||opencoarrays ||1.8.5 || ||
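
For illustration, a typical lmod session might look like the following sketch; which MPI modules become visible depends on the compiler you load first:

{{{
module avail          # at first only compilers and compiler-independent tools are listed
module add intel      # load the Intel compiler
module avail          # the MPI builds for the Intel compiler are now shown
module add openmpi    # Open MPI 1.10.7 built with icc
module list           # show the currently loaded modules
}}}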

=== Building applications and interactively running them ===

Build your application on any SL7 workgroup server, e.g. the pax8 machines pax80 to pax8f or the machine sl7.
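
As a sketch, building a small MPI program with the GNU 7 toolchain could look like this (the source file name is just an example):

{{{
module add gnu7 openmpi3          # compiler first, then a matching MPI build
mpicc -O2 -o yourprog yourprog.c  # mpicc wraps the compiler with the Open MPI flags
}}}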

==== OpenMPI ====

To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:

{{{
pax8a slots=8
pax8b slots=8
pax8c slots=8
pax8d slots=8
}}}

The command line would look like this:

{{{
/opt/ohpc/pub/mpi/openmpi-gnu/1.10.6/bin/mpirun -np 32 -machinefile ./machinefile ./program
}}}

More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/

==== Mvapich2 ====

To use mvapich2, add one of those versions to your path and compile your application with that MPI compiler. Applications built with mvapich2 can use only Infiniband network hardware, so they will work on the pax machines, but not on more than one farm machine or WGS.

The machine file format is different from the one for openmpi; you must list the host name for every core you want to use, e.g. if you want to run four processes, two processes on each of pax89 and pax88:

{{{
pax88
pax89
pax88
pax89
}}}

The preferred way to run an application with mvapich2 is mpiexec, e.g.:

{{{
/usr/lib64/mvapich2-intel/bin/mpiexec -n 4 -machinefile ./machinefile /usr/lib64/mvapich2-intel/bin/mpitests-IMB-MPI1
}}}

== Batch System Access ==

/!\ '''ATTENTION''': The PAX cluster is now based on the SLURM scheduling system.

=== Slurm Commands ===

The most important commands:

||[[http://slurm.schedmd.com/sinfo.html|sinfo]] ||Information about the cluster ||
||[[http://slurm.schedmd.com/squeue.html|squeue]] ||Show current job list ||
||[[http://slurm.schedmd.com/srun.html|srun]] ||Parallel command execution ||
||[[http://slurm.schedmd.com/sbatch.html|sbatch]] ||Submit a batch job ||
||[[http://slurm.schedmd.com/salloc.html|salloc]] ||Reserve resources for interactive commands ||
||[[http://slurm.schedmd.com/scancel.html|scancel]] ||Abort a job ||
||[[http://slurm.schedmd.com/sacct.html|sacct]] ||Show accounting information ||
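
A typical interaction with the batch system might look like the following sketch (the job ID is illustrative):

{{{
sinfo                  # show partitions and node states
sbatch slurm-mpi.job   # submit a job script, prints the job ID
squeue -u $USER        # list your pending and running jobs
scancel 1234           # abort job 1234 if necessary
}}}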

=== Allocation ===

Slurm was configured to always schedule complete nodes to each job. The pax machines have hyperthreading enabled; each hardware thread is seen as a CPU core by Slurm, so by default 64 MPI processes are assigned on a 32-core machine with hyperthreading. To prevent that, use the option {{{-c 2}}} for sbatch, salloc or srun.
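
For example, a sketch of an allocation with one task per physical core:

{{{
salloc -N 1 -c 2           # interactive allocation: one node, two hardware threads per task
sbatch -c 2 slurm-mpi.job  # the same for a batch job
}}}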

=== Parallel Execution ===

Slurm has integrated execution support for parallel programs, replacing mpirun. To work around slight differences in the needed options, use prun instead of srun for starting MPI applications. You'll have to load the prun module first.
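
A minimal sketch, using the module names from the table above:

{{{
module add gnu7 openmpi3 prun   # compiler, MPI and the prun wrapper
prun ./program                  # prun passes the right options to srun or mpirun
}}}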

=== MPI Support ===

Before running MPI programs, the LD_LIBRARY_PATH variable must be set; this is done by loading the right environment module, e.g. {{{module add intel openmpi}}}.
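
For example, to check that the environment is set up:

{{{
module add intel openmpi
echo $LD_LIBRARY_PATH   # should now contain the Open MPI library directory
}}}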

=== Job scripts ===

Parameters for Slurm can be set on the sbatch command line or in script lines starting with {{{#SBATCH}}}. The most important parameters are:

||-J ||job name ||
||--get-user-env ||copy environment variables ||
||-n ||number of cores ||
||-N ||number of nodes ||
||-t ||run time of the job, default is 30 minutes ||
||-A ||account, default the same as UNIX group ||
||-p ||partition of the cluster ||
||--mail-type ||configure email notifications, e.g. use --mail-type=ALL ||

==== Time format ====

The run time of a job is given as minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH). The maximum run time was set to 48 hours.

==== Examples ====

An example job script is in [[attachment:slurm-mpi.job]].
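
For reference, a minimal job script might look like the following sketch; the partition name is a placeholder and the resource requests should be adapted to your job:

{{{
#!/bin/bash
#SBATCH -J mpi-test          # job name
#SBATCH -N 2                 # two complete nodes
#SBATCH -c 2                 # one MPI rank per physical core despite hyperthreading
#SBATCH -t 02:00:00          # two hours run time (maximum is 48 hours)
#SBATCH -p pax               # partition, placeholder
#SBATCH --mail-type=ALL      # email notifications

module add gnu7 openmpi3 prun
prun ./program
}}}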

=== Accounting ===

The jobs and their resource usage are stored in a database that is used for the fair share part of the scheduler. You can view your account's jobs with the command {{{sacct}}}. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command {{{sacct -S 2014-05-01}}}. To view jobs from other accounts as well, use the {{{--allusers}}} option.
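
For example (the date and the format fields are illustrative):

{{{
sacct -S 2017-12-01 --format=JobID,JobName,Partition,Elapsed,State   # your jobs since December 1st
sacct -S 2017-12-01 --allusers                                       # include jobs of other accounts
}}}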

=== Local Disk Space ===

Each node has a local directory /scratch with up to 1TB of space. It is cleared automatically at the end of the job.

=== pax10 and pax11 I/O nodes ===

Most of the pax10 and pax11 machines have external 1 Gbit/s Ethernet connections to the storage. To allow faster storage access, four machines each in the pax10 and pax11 partitions are equipped with 10 Gbit/s Ethernet instead. To access them, you'll have to request the 10g feature in Slurm: {{{ --constraint=10g*1}}}. That way, the first process, the one executing the job script, will run on one of the machines with faster connectivity.
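
For example, a sketch of a submission that requests the 10g feature:

{{{
sbatch --constraint=10g*1 slurm-mpi.job
}}}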

== SL7 changes ==

As the versions and paths of the MPI implementations have changed, programs are not compatible between SL6 and SL7. You should rebuild your application on SL7, but you could also try Singularity.

The 'module' command was replaced by a different, more powerful implementation called lmod. It doesn't list all available modules by default; instead it supports dependent modules, e.g. the MPI implementations built with 'gnu7' are shown after {{{module add gnu7}}}.

==== Running EL6 software using Singularity ====

It is possible to run software built on EL6 in a [[Singularity]] container. This works with mvapich2 binaries by calling singularity in the batch script like this:

{{{
mpiexec singularity exec /project/singularity/images/SL6.img yourbinary
}}}

However, Mvapich2 2.2 isn't yet optimized for Singularity, so this is slower than running native programs.

For Openmpi, Singularity is supported in Openmpi >= 2.1, so you'll have to rebuild your program with openmpi3 as installed in the SL6 singularity container:

{{{
singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6
}}}

and in the job script:

{{{
module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6
}}}

== AFS Access ==

The application binary must be available to all nodes; that's why it should be placed in an AFS or Lustre directory.

== Monitoring ==

Ganglia provides a web monitoring interface. These pages are only available from the internal network:

 * interactive machines
 * parallel batch machines

== Known Issues ==

 1. Openmpi3 has a bug that makes the program hang in certain situations: https://github.com/open-mpi/ompi/issues/3251 https://www.mail-archive.com/users@lists.open-mpi.org//msg31839.html
 1. You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set {{{noaddresses=true}}} in the file {{{/etc/krb5.conf}}}. To check if your ticket is addressless, call {{{klist -v}}} (Heimdal klist only).
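
For a self-maintained machine, the corresponding fragment of /etc/krb5.conf would look like this (assuming the option is placed in the [libdefaults] section):

{{{
[libdefaults]
    noaddresses = true
}}}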

== Further documentation ==

 * Paralleles Rechnen in Zeuthen - die neuen Cluster, 04/27/10, technical seminar
 * HPC-Clusters at DESY Zeuthen, 11/22/06, technical seminar
