Differences between revisions 66 and 160 (spanning 94 versions)
Revision 66 as of 2013-03-08 12:30:44
Size: 8327
Editor: GötzWaschk
Comment:
Revision 160 as of 2023-04-28 09:56:09
Size: 11571
Editor: GötzWaschk
Comment: readd documentation for pax11
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
/!\ '''This web page will no longer be updated.''' Please use this link for [[https://dv-zeuthen.desy.de/services/parallel_computing/|current information]].
----
<<BR>>
Line 5: Line 8:
There are 10 dedicated parallel clusters (blade centers) available for running parallel applications, but you can also run parallel MPI jobs in the SGE farm. The documentation in [[Batch_System_Usage]] applies there.
There are two dedicated parallel clusters available for running parallel applications, but you can also run parallel MPI jobs in the HTCondor farm. The documentation in [[https://dv-zeuthen.desy.de/services/batch/|Batch System Usage]] applies there.
Line 10: Line 13:
The PAX cluster consists of an interactive and a batch part. The interactive part is a blade center with 16 blade servers configured as workgroup servers. You can interactively log into the machines pax80 to pax8f to build and test your programs. Please don't use these machines to run long production code, use the batch system instead.
The batch system consists of two partitions: pax12 (rome) has 16 nodes and HDR Infiniband. pax11 (broadwell) consists of 30 compute nodes, connected via an FDR Infiniband network.
=== Nodes ===
The AMD machines (pax12) have one socket; the Intel machines (pax11) have two.
||Name||CPU||Code Name||Cores||Memory||
||pax12-[00-15]||AMD EPYC 7702P 64-Core Processor @ 2GHz||Rome||64||256G||
||pax11-[00-31]||Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz||Broadwell||16||128G||
Line 12: Line 20:
The batch part consists of 8 blade centers with 16 nodes each, connected via a QDR infiniband network.

Then there's also one separate blade center that isn't connected to the others but can run self-contained parallel jobs.
== Software Environment ==
The pax machines have a software environment that is slightly different from the normal installation; it includes the OpenHPC software stack and a different version of the {{{module}}} command. To build on any machine in the right environment, run the {{{/project/apptainer/images/pax.img}}} image. You can submit your jobs if you run the apptainer container on an EL7 WGS like this:
{{{
apptainer run -B /etc/passwd /project/apptainer/images/pax.img
}}}
Line 18: Line 27:
On SL5, you can use 'ini' to add a MPI runtime to the path. On SL6, use the 'module' command.
=== Openmpi ===
Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed.
Use the 'module' command to first add a compiler implementation and then a version of MPI to your path, e.g.:
{{{
module add gnu mvapich2
}}}
OpenHPC provides the {{{module}}} command from the lmod project. It supports more features than the old environment-modules, including dependent modules that are shown only after loading the prerequisites, e.g. for {{{openmpi}}} you'll have to load the {{{intel}}} module first.
||module name ||version ||depends on ||
||gnu ||5.4.0 || ||
||gnu7||7.3.0 || ||
||gnu8||8.3.0 || ||
||gnu9||9.3.0 || ||
||gnu12||12.2.0|| ||
||intel ||2021.4|| ||
||hdf5||1.10.1||gnu||
||openmpi ||1.10.7 ||gnu/intel ||
||openmpi3||3.1.0||gnu7||
||openmpi3||3.1.4||gnu8/intel||
||openmpi4||4.0.5||gnu8/gnu9/gnu12/intel||
||mvapich2 ||2.2 ||gnu/gnu7 ||
||mvapich2 || 2.3.2||gnu8/intel||
||impi||2021.4||gnu/gnu8/intel||
||opencoarrays ||1.8.11 || ||
||opencoarrays||2.3.1||gnu7 openmpi3||
||opencoarrays||2.8.0||gnu8 openmpi3||
Line 22: Line 51:
==== SL5 ====
SL5 ships with openmpi 1.4 in 32 bit and 64 bit versions. For 64 bit applications use the installation in /usr/lib64/openmpi/1.4-gcc/bin, for 32 bit use the binaries from /usr/lib/openmpi/1.4-gcc/bin .
=== Interactive tests ===
You can run interactive jobs in Slurm after allocating nodes with salloc, e.g.: {{{salloc -p rome -N 2 -c 2}}}. To get an interactive shell on the allocated machines, use the command {{{srun --pty bash}}}.
Line 25: Line 54:
Additional openmpi versions are installed to support the Intel and PGI compilers:

{{{
/usr/lib64/openmpi/1.4-icc/bin
/usr/lib64/openmpi-1.3.2-pgi/bin
}}}
If you don't want to specify the full path to your preferred MPI implementation, configure a default by using the ini command or running mpi-selector-menu on a build machine.

==== SL6 ====
SL6 ships with openmpi 1.5.4. The paths to the runtime have changed from SL5. Also, you must rebuild your application for the new ABI. The paths to the openmpi versions are:

{{{
/usr/lib/openmpi/bin
/usr/lib64/openmpi/bin
/usr/lib64/openmpi-intel/bin
}}}
==== Building applications ====
64 bit MPI Applications can be compiled on any 64 bit SL5 machine, e.g. sl5-64.ifh.de.

==== Running your application ====
==== OpenMPI ====
Line 48: Line 58:
pax8a slots=8
pax8b slots=8
Line 50: Line 62:
pax8e slots=8
pax8f slots=8
Line 56: Line 66:
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np 32 -machinefile ./machinefile ./program
/opt/ohpc/pub/mpi/openmpi-gnu/1.10.7/bin/mpirun -np 32 -machinefile ./machinefile ./program
Line 60: Line 70:
=== Mvapich2 ===
Two additional mpi implementations are installed on all pax machines:
==== Mvapich2 ====
To use mvapich2, add one of those versions to your path and compile your application with that mpi compiler. Applications built with mvapich2 can use only Infiniband network hardware, so they will work on the pax machines, but not on more than one farm machine or WGS.
Line 63: Line 73:
{{{
/usr/lib64/mvapich2/1.7-gcc/bin
/usr/lib64/mvapich2/1.7-intel/bin
}}}
To use mvapich2, add one of those versions to your path and compile your application with that mpi compiler. To run it outside the batch system, follow these instructions: http://mvapich.cse.ohio-state.edu/overview/mvapich2/

Applications built with mvapich2 will only run on machines with Infiniband hardware, so they will work on the pax machines but not on desktops, workgroup servers or the farm.

The machine file format is different from the one for openmpi, you must list the host name for every core you want to use, e.g. if you want to run four processes, two processes on each of pax09 and pax08:
The machine file format is different from the one for openmpi: you must list the host name once for every core you want to use, e.g. if you want to run four processes, two on each of pax89 and pax88:
Line 79: Line 81:
The preferred way to run a application with mvapich2 is mpiexec.

If you want to use the deprecated mpd startup method as opposed to mpirun_rsh, you must also first create the file ~/.mpd.conf that contains of one line like this:
The preferred way to run an application with mvapich2 is mpiexec, e.g.:
Line 84: Line 83:
MPD_SECRETWORD=password
}}}
== Batch System Access ==
/!\ '''ATTENTION''': The PAX cluster was split off the normal Zeuthen batch. To access the PAX batch system you will need to call `ini pax`.

Alternatively source a script:

 * zsh users:
 {{{
[oreade38] ~ % . /usr/gridengine/pax/common/settings.sh
}}}
 * tcsh users:
 {{{
[oreade38] ~ $ source /usr/gridengine/pax/common/settings.csh
}}}
 Switching back to use the standard farm works similarly:
 * zsh users:
 {{{
[oreade38] ~ % . /usr/gridengine/default/common/settings.sh
}}}
 * tcsh users:
 {{{
[oreade38] ~ $ source /usr/gridengine/default/common/settings.csh
/opt/ohpc/pub/mpi/mvapich2-intel/2.2/bin/mpiexec -n 4 -machinefile ./machinefile /opt/ohpc/pub/libs/intel/mvapich2/imb/2018.1/bin/IMB-MPI1
Line 109: Line 86:
Please make sure that your Gridengine certificates are in place:
==== Intel MPI ====
To use Intel MPI, add a compiler module followed by impi. Use the compiler wrappers like 'mpicc' and 'mpif90' for GNU or 'mpiicc' and 'mpiifort' for the Intel compiler. To run the resulting application, set the environment variable like this:
{{{
export FI_PROVIDER=verbs
}}}
In a Slurm job, please use the prun wrapper to start your application.
Line 111: Line 93:
== Batch System Access ==
/!\ '''ATTENTION''': The PAX cluster is now based on the SLURM scheduling system.
=== Slurm Commands ===
The most important commands:
||[[http://slurm.schedmd.com/sinfo.html|sinfo]] ||Information about the cluster ||
||[[http://slurm.schedmd.com/squeue.html|squeue]] ||Show current job list ||
||[[http://slurm.schedmd.com/srun.html|srun]] ||Parallel command execution ||
||[[http://slurm.schedmd.com/sbatch.html|sbatch]] ||Submit a batch job ||
||[[http://slurm.schedmd.com/salloc.html|salloc]] ||Reserve resources for interactive commands ||
||[[http://slurm.schedmd.com/scancel.html|scancel]] ||Abort a job ||
||[[https://slurm.schedmd.com/sview.html|sview]]||Graphical user interface to view and modify Slurm state||
||[[http://slurm.schedmd.com/sacct.html|sacct]] ||Show accounting information ||

=== Allocation ===
Slurm was configured to always schedule complete nodes to each job. The pax machines have hyperthreading enabled and each hardware thread is seen as a CPU core by Slurm, so by default 64 MPI processes are assigned on a 32 core machine with hyperthreading. To prevent that, use the option {{{-c 2}}} for sbatch, salloc or srun.
=== Parallel Execution ===
Slurm has integrated execution support for parallel programs, replacing mpirun. To work around slight differences in the needed options, use prun instead of srun for starting MPI applications. You'll have to load the prun module first.

=== MPI Support ===
Before running MPI programs, the LD_LIBRARY_PATH variable must be set; this is done by loading the right environment module, e.g. {{{module add intel openmpi}}}.

=== Job scripts ===
Parameters to Slurm can be set on the sbatch command line or in lines starting with {{{#SBATCH}}} in the script. The most important parameters are:
||-J ||job name ||
||--get-user-env ||copy environment variables ||
||-n ||number of cores ||
||-N ||number of nodes ||
||-t ||run time of the job, default is 30 minutes ||
||-A ||account, default the same as UNIX group ||
||-p ||partition of the cluster ||
||--mail-type ||configure email notifications, e.g. use --mail-type=ALL ||

Be careful with {{{--get-user-env}}}: it will also copy loaded modules to the job.

==== Time format ====
The runtime of a job is given as minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH). The maximum run time was set to 48 hours.

==== Examples ====
An example job script is in [[attachment:slurm-mpi.job]]

=== Accounting ===
The jobs and their resource usage are stored in a database that is used for the fair-share part of the scheduler. You can view your account's jobs with the command {{{sacct}}}. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command {{{sacct -S 2014-05-01}}}. To view jobs from other accounts as well, use the {{{--allusers}}} option.


=== Local Disk Space ===
Each node has a local directory /scratch with up to 770GB of space. It is cleared automatically at the end of the job.

=== I/O nodes ===
Most of the pax11 machines have external 1 Gbit/s Ethernet connections to the storage. To allow faster storage access, four machines in the pax11 partition are equipped with 10 Gbit/s Ethernet instead. To access them, you'll have to request the 10g feature in Slurm: {{{ --constraint=10g*1}}}. That way, the first process, the one executing the job script, will run on one of the machines with faster connectivity. All pax12 machines have 10 Gbit/s Ethernet as well.

=== Partitions and backfilling ===
The cluster consists of two partitions: rome has the faster machines and is the default; broadwell has the older nodes. Jobs can run on only one type of node. The special partition backfill is used for filling up otherwise empty nodes. Jobs running there are automatically terminated by Slurm if another job on the main partition needs the nodes.

==== Running EL6 software using Singularity ====
It is possible to run software built on EL6 in an [[Apptainer]] container. This works with mvapich2 binaries by calling Apptainer in the batch script like this:
Line 112: Line 149:
[oreade38] ~ % ls -l $HOME/.sge/port537
lrwxr-xr-x. 1 ahaupt sysprog 11 Aug 20 09:52 /afs/
 -> sge_qmaster
[oreade38] ~ % ls -l $HOME/.sge/cert.pem
-rw-------. 1 ahaupt sysprog 1464 Aug 20 09:52 /afs/
[oreade38] ~ % ls -l $HOME/.sge/key.pem
-rw-------. 1 ahaupt sysprog 887 Aug 20 09:52 /afs/
mpiexec apptainer exec /project/singularity/images/SL6.img yourbinary
Line 120: Line 151:
A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. The parameter looks like this for up to 8 slots for 8 MPI processes on a single node:
However, Mvapich2 2.2 isn't optimized yet for Apptainer, so this is slower than running native programs.
Line 122: Line 153:

For Openmpi, Singularity is supported in Openmpi >= 2.1, so you'll have to rebuild your program with openmpi3 as installed in the SL6 Singularity container:
Line 123: Line 156:
#$ -pe pax 8
singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6
Line 125: Line 158:
Be sure to call the right mpirun version for your architecture. If you application was compiled for 64 bit, use
and in the job script:
Line 128: Line 160:
/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp
module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6
Line 130: Line 163:
The MPI runtime will automatically select the right network type.

You can request up to 1024 slots, as a blade center contains 128 CPU cores and the batch system contains 8 blade centers:

{{{
#$ -pe pax 128

/usr/lib64/openmpi/1.4-gcc/bin/mpirun -np $NSLOTS yourapp
}}}
Finally, here's a list of common pitfalls when using the pax batch system:

 * Please be aware that all requested resources (via the '''-l''' qsub switch) are meant '''per job slot'''. As the pax nodes only provide 24GB (8 core systems -> 3GB per job slot), you cannot request more than 3500 MB h_vmem in your job scripts. Otherwise your job won't start! Please make sure your MPI processes don't use more than 3GB per slot, the memory overcommittment should be used for mpirun overhead for large jobs (>=512 slots) only.
 * /!\ If your MPI application relies on LD_LIBRARY_PATH to load its shared libraries or modules, this will fail on remote notes, as the batch system will remove this variable from the environment. In that case you'll have to wrap ''yourapp'' in a shell script that sets up the environment and calls your binary application.

=== Mvapich2 ===
With mvapich2 1.7, there is working integration into the SGE batch system. Just use a command like this:

{{{
#$ -pe pax 128

/usr/lib64/mvapich2/1.7-intel/bin/mpiexec -n $NSLOTS yourapp
}}}

== SL6 changes ==
A part of the pax cluster has been migrated to SL6. To run a job on these machines, you must specify {{{-l os=sl6}}} for now. As the versions and paths of the MPI implementations have changed, programs are not compatible between SL5 and SL6, you must rebuild your application on SL6.

The 'ini' command is no longer in use for selecting MPI versions, it was replaced by the very similar 'module'. The command 'module avail' lists the installed modules. To load Open-MPI for the Intel compiler, use the command 'module add openmpi-x86_64-intel'.
== Additional Software ==
The software installation is based on the [[http://openhpc.community|OpenHPC project]]. We provide only a subset of the available software. If you need any of the other [[https://github.com/openhpc/ohpc/wiki/Component-List-v1.3.9|available components]], send a request to zn-cluster@desy.de
Line 159: Line 167:
The application binary must be available to all nodes, that's why it should be placed in an AFS directory.

== BLAS library ==
Both ATLAS and GotoBLAS are available.

 * ATLAS is in /opt/products/atlas

 * libgoto is in /usr/lib or /usr/lib64 respectively.
The application binary must be available to all nodes, so it should be placed in an AFS or Lustre directory.
Line 169: Line 170:
Ganglia provides a web monitoring interface. These pages are only available from the internal network.
Ganglia provides a web monitoring interface. This page is only available from the internal network.
Line 171: Line 172:
[[http://ganglia.ifh.de/ganglia/?c=Parallel%20Clusters&m=load_one&r=hour&s=descending&hc=4&mc=2|interactive machines]]
[[http://ganglia.ifh.de/ganglia/?c=Gridengine%20PAX%20Farm&m=load_one&r=hour&s=descending&hc=4&mc=2|parallel batch machines]]
[[http://ganglia.zeuthen.desy.de/ganglia/?c=Slurm%20PAX%20farm&m=load_one&r=hour&s=descending&hc=4&mc=2|Parallel Batch Machines]]
Line 174: Line 174:
== Known Issues ==
 1. Openmpi3 has a bug that makes the program hang in certain situations: https://www.mail-archive.com/users@lists.open-mpi.org//msg31839.html Use openmpi instead.
 1. You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set {{{noaddresses=true}}} in the file {{{/etc/krb5.conf}}}. To check if your ticket is addressless, call {{{klist -v}}} (Heimdal klist only).
 1. The command {{{sbcast}}} cannot be used to copy a file to /scratch, as that is a bind mounted directory. Use /batch/job.${SLURM_JOB_ID}.0/scratch as target.
 1. The {{{module}}} command might be unavailable for tcsh login shell users. As a workaround, they can run {{{bash -l}}} and use the {{{--get-user-env}}} option in the job.
 1. There are some compatibility problems between third-party module files (e.g. Intel 2021) and the module command.
 1. In the pax apptainer image, `squeue` shows all users as ''nobody''. To work around this, run `apptainer run -B /etc/passwd /project/apptainer/images/pax.img`

/!\ This web page will no longer be updated. Please use this link for current information.



Usage of the Linux Clusters at DESY Zeuthen

Introduction

There are two dedicated parallel clusters available for running parallel applications, but you can also run parallel MPI jobs in the HTCondor farm. The documentation in Batch System Usage applies there.

For discussions and information regarding the usage of the PAX cluster, a mailing list has been introduced: <zn-cluster AT desy DOT de>. To subscribe to that list, send an email to <sympa AT desy DOT de> with the subject: subscribe zn-cluster

Hardware

The batch system consists of two partitions: pax12 (rome) has 16 nodes and HDR Infiniband. pax11 (broadwell) consists of 30 compute nodes, connected via an FDR Infiniband network.

Nodes

The AMD machines (pax12) have one socket; the Intel machines (pax11) have two.

||Name ||CPU ||Code Name ||Cores ||Memory ||
||pax12-[00-15] ||AMD EPYC 7702P 64-Core Processor @ 2GHz ||Rome ||64 ||256G ||
||pax11-[00-31] ||Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz ||Broadwell ||16 ||128G ||

Software Environment

The pax machines have a software environment that is slightly different from the normal installation; it includes the OpenHPC software stack and a different version of the module command. To build on any machine in the right environment, run the /project/apptainer/images/pax.img image. You can submit your jobs if you run the apptainer container on an EL7 WGS like this:

apptainer run -B /etc/passwd /project/apptainer/images/pax.img

Building Applications

Use the 'module' command to first add a compiler implementation and then a version of MPI to your path, e.g.:

module add gnu mvapich2

OpenHPC provides the module command from the lmod project. It supports more features than the old environment-modules, including dependent modules that are shown only after loading the prerequisites, e.g. for openmpi you'll have to load the intel module first (see the example session after the table).

||module name ||version ||depends on ||
||gnu ||5.4.0 || ||
||gnu7 ||7.3.0 || ||
||gnu8 ||8.3.0 || ||
||gnu9 ||9.3.0 || ||
||gnu12 ||12.2.0 || ||
||intel ||2021.4 || ||
||hdf5 ||1.10.1 ||gnu ||
||openmpi ||1.10.7 ||gnu/intel ||
||openmpi3 ||3.1.0 ||gnu7 ||
||openmpi3 ||3.1.4 ||gnu8/intel ||
||openmpi4 ||4.0.5 ||gnu8/gnu9/gnu12/intel ||
||mvapich2 ||2.2 ||gnu/gnu7 ||
||mvapich2 ||2.3.2 ||gnu8/intel ||
||impi ||2021.4 ||gnu/gnu8/intel ||
||opencoarrays ||1.8.11 || ||
||opencoarrays ||2.3.1 ||gnu7 openmpi3 ||
||opencoarrays ||2.8.0 ||gnu8 openmpi3 ||
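
As an illustration of the dependent-module behaviour described above, a typical session first loads a compiler, which makes the matching MPI modules visible, and then loads the MPI implementation. This is only a sketch; take the module names and versions from the table:

module add intel          # load a compiler first
module avail              # MPI modules built for it (openmpi, impi, mvapich2) are now listed
module add openmpi        # then load the MPI implementation
module list               # verify what is loaded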

Interactive tests

You can run interactive jobs in Slurm after allocating nodes with salloc, e.g.: salloc -p rome -N 2 -c 2. To get an interactive shell on the allocated machines, use the command srun --pty bash.
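
A complete interactive test could then look like this (a sketch; partition and node count are just the values from the example above):

salloc -p rome -N 2 -c 2      # reserve two rome nodes
srun --pty bash               # interactive shell on the allocated nodes
# ... build and run short tests here ...
exit                          # leave the interactive shell
exit                          # release the allocation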

OpenMPI

To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:

pax8a slots=8
pax8b slots=8
pax8c slots=8
pax8d slots=8

The command line would look like this:

/opt/ohpc/pub/mpi/openmpi-gnu/1.10.7/bin/mpirun -np 32 -machinefile ./machinefile  ./program

More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/
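
As a hedged end-to-end sketch of building and running outside the batch system: load a compiler and MPI module, compile with the mpicc wrapper, and start the binary with the matching mpirun (hello_mpi.c, the process count and the machinefile are placeholders):

module add gnu openmpi                            # compiler plus MPI, as in the module table
mpicc -O2 -o hello hello_mpi.c                    # mpicc is the openmpi compiler wrapper
mpirun -np 32 -machinefile ./machinefile ./hello  # same call as above, with mpirun taken from $PATH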

Mvapich2

To use mvapich2, add one of those versions to your path and compile your application with that mpi compiler. Applications built with mvapich2 can use only Infiniband network hardware, so they will work on the pax machines, but not on more than one farm machine or WGS.

The machine file format is different from the one for openmpi: you must list the host name once for every core you want to use, e.g. if you want to run four processes, two on each of pax89 and pax88:

pax88
pax89
pax88
pax89

The preferred way to run an application with mvapich2 is mpiexec, e.g.:

/opt/ohpc/pub/mpi/mvapich2-intel/2.2/bin/mpiexec -n 4 -machinefile ./machinefile /opt/ohpc/pub/libs/intel/mvapich2/imb/2018.1/bin/IMB-MPI1

Intel MPI

To use Intel MPI, add a compiler module followed by impi. Use the compiler wrappers like 'mpicc' and 'mpif90' for GNU or 'mpiicc' and 'mpiifort' for the Intel compiler. To run the resulting application, set the environment variable like this:

export FI_PROVIDER=verbs

In a Slurm job, please use the prun wrapper to start your application.
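
Putting these steps together, a minimal sketch for the GNU toolchain might look like this (myapp.c is a placeholder; use mpiicc/mpiifort when building with the Intel compiler):

module add gnu8 impi             # compiler module followed by impi
mpicc -o myapp myapp.c           # GNU compiler wrapper for Intel MPI
export FI_PROVIDER=verbs         # select the Infiniband verbs provider
prun ./myapp                     # started via the prun wrapper inside a Slurm job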

Batch System Access

/!\ ATTENTION: The PAX cluster is now based on the SLURM scheduling system.

Slurm Commands

The most important commands:

||sinfo ||Information about the cluster ||
||squeue ||Show current job list ||
||srun ||Parallel command execution ||
||sbatch ||Submit a batch job ||
||salloc ||Reserve resources for interactive commands ||
||scancel ||Abort a job ||
||sview ||Graphical user interface to view and modify Slurm state ||
||sacct ||Show accounting information ||

Allocation

Slurm was configured to always schedule complete nodes to each job. The pax machines have hyperthreading enabled and each hardware thread is seen as a CPU core by Slurm, so by default 64 MPI processes are assigned on a 32 core machine with hyperthreading. To prevent that, use the option -c 2 for sbatch, salloc or srun.
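
For example, to get one MPI process per physical core instead of one per hardware thread (a sketch; node count and partition are illustrative):

sbatch -p rome -N 2 -c 2 myjob.sh    # -c 2 reserves two hardware threads per MPI process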

Parallel Execution

Slurm has integrated execution support for parallel programs, replacing mpirun. To work around slight differences in the needed options, use prun instead of srun for starting MPI applications. You'll have to load the prun module first.

MPI Support

Before running MPI programs, the LD_LIBRARY_PATH variable must be set; this is done by loading the right environment module, e.g. module add intel openmpi.

Job scripts

Parameters to Slurm can be set on the sbatch command line or in lines starting with #SBATCH in the script. The most important parameters are:

||-J ||job name ||
||--get-user-env ||copy environment variables ||
||-n ||number of cores ||
||-N ||number of nodes ||
||-t ||run time of the job, default is 30 minutes ||
||-A ||account, default the same as UNIX group ||
||-p ||partition of the cluster ||
||--mail-type ||configure email notifications, e.g. use --mail-type=ALL ||

Be careful with --get-user-env: it will also copy loaded modules to the job.

Time format

The runtime of a job is given as minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH). The maximum run time was set to 48 hours.

Examples

An example job script is in slurm-mpi.job
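
The attachment is not reproduced here; the following is only a minimal sketch of such a script, assembled from the options and modules documented above (partition, time limit, module versions and the application name are placeholders):

#!/bin/bash
#SBATCH -J mpi-test              # job name
#SBATCH -p rome                  # partition
#SBATCH -N 2                     # number of nodes
#SBATCH -c 2                     # one MPI process per physical core
#SBATCH -t 01:00:00              # run time (HH:MM:SS)
#SBATCH --mail-type=ALL          # email notifications

module add gnu9 openmpi4 prun    # compiler, MPI and the prun wrapper
prun ./yourapp                   # prun replaces mpirun/srun here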

Accounting

The jobs and their resource usage are stored in a database that is used for the fair-share part of the scheduler. You can view your account's jobs with the command sacct. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command sacct -S 2014-05-01. To view jobs from other accounts as well, use the --allusers option.

Local Disk Space

Each node has a local directory /scratch with up to 770GB of space. It is cleared automatically at the end of the job.
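
A sketch of how a job script might use it (file names are placeholders; copy results away before the job ends, because /scratch is wiped afterwards):

cd /scratch                      # node-local scratch space
cp $HOME/input.dat .             # stage input data
./yourapp input.dat              # run with local I/O
cp result.dat $HOME/             # save results before the job finishes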

I/O nodes

Most of the pax11 machines have external 1 Gbit/s Ethernet connections to the storage. To allow faster storage access, four machines in the pax11 partition are equipped with 10 Gbit/s Ethernet instead. To access them, you'll have to request the 10g feature in Slurm: --constraint=10g*1. That way, the first process, the one executing the job script, will run on one of the machines with faster connectivity. All pax12 machines have 10 Gbit/s Ethernet as well.
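
In a job script the feature is requested like any other sbatch option, e.g. (a sketch; the partition corresponds to the pax11 nodes):

#SBATCH -p broadwell
#SBATCH --constraint=10g*1       # place the first task on a node with 10 Gbit/s Ethernet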

Partitions and backfilling

The cluster consists of two partitions: rome has the faster machines and is the default; broadwell has the older nodes. Jobs can run on only one type of node. The special partition backfill is used for filling up otherwise empty nodes. Jobs running there are automatically terminated by Slurm if another job on the main partition needs the nodes.
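
Submitting to the backfill partition works like any other submission; such jobs must tolerate being terminated at any time (a sketch):

sbatch -p backfill -t 04:00:00 myjob.sh   # may be terminated when a main-partition job needs the nodes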

Running EL6 software using Singularity

It is possible to run software built on EL6 in an Apptainer container. This works with mvapich2 binaries by calling Apptainer in the batch script like this:

mpiexec apptainer exec /project/singularity/images/SL6.img yourbinary

However, Mvapich2 2.2 isn't optimized yet for Apptainer, so this is slower than running native programs.

For Openmpi, Singularity is supported in Openmpi >= 2.1, so you'll have to rebuild your program with openmpi3 as installed in the SL6 Singularity container:

singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6

and in the job script:

module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6

Additional Software

The software installation is based on the OpenHPC project. We provide only a subset of the available software. If you need any of the other available components, send a request to zn-cluster@desy.de

AFS Access

The application binary must be available to all nodes, so it should be placed in an AFS or Lustre directory.

Monitoring

Ganglia provides a web monitoring interface. This page is only available from the internal network.

Parallel Batch Machines

Known Issues

  1. Openmpi3 has a bug that makes the program hang in certain situations: https://www.mail-archive.com/users@lists.open-mpi.org//msg31839.html Use openmpi instead.

  2. You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set noaddresses=true in the file /etc/krb5.conf. To check if your ticket is addressless, call klist -v (Heimdal klist only).

  3. The command sbcast cannot be used to copy a file to /scratch, as that is a bind mounted directory. Use /batch/job.${SLURM_JOB_ID}.0/scratch as target.

  4. The module command might be unavailable for tcsh login shell users. As a workaround, they can run bash -l and use the --get-user-env option in the job.

  5. There are some compatibility problems between third-party module files (e.g. Intel 2021) and the module command.
  6. In the pax apptainer image, squeue shows all users as nobody. To work around this, run apptainer run -B /etc/passwd /project/apptainer/images/pax.img

Further documentation

Paralleles Rechnen in Zeuthen - die neuen Cluster, 04/27/10, technical seminar

HPC-Clusters at DESY Zeuthen, 11/22/06, technical seminar
