Differences between revisions 1 and 103 (spanning 102 versions)
Revision 1 as of 2006-02-20 15:33:34
Size: 1710
Editor: GötzWaschk
Comment:
Revision 103 as of 2017-12-11 15:33:25
Size: 6824
Editor: GötzWaschk
Comment:
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
/!\ '''This web page will no longer be updated.''' Please use this link for [[https://dv-zeuthen.desy.de/services/parallel_computing/|current information]].
----
<<BR>>
Line 2: Line 5:
<<TableOfContents>>
Line 3: Line 7:
At Zeuthen a cluster of 16 dual Opteron machines is available. It is integrated into the SGE batch system. The documentation in ["Batch System Usage"] applies to it.
== Introduction ==
There are 3 dedicated parallel clusters (blade centers, Miriquid compute nodes) available for running parallel applications, but you can also run parallel MPI jobs in the SGE farm. The documentation in [[https://dv-zeuthen.desy.de/services/batch/|Batch System Usage]] applies there.

For discussions and information regarding the usage of the PAX cluster a mailing list has been introduced: <<MailTo(zn-cluster AT desy DOT de)>>. To get subscribed to that list, send an email to <<MailTo(sympa AT desy DOT de)>> with the subject: '''subscribe zn-cluster'''

== Hardware ==
The PAX cluster consists of an interactive and a batch part. The interactive part is a blade center with 16 blade servers configured as workgroup servers. You can interactively log into the machines pax80 to pax8f to build and test your programs. Please don't use these machines to run long production code; use the batch system instead.

The batch part consists of three separate partitions that are not interconnected: pax11 and pax10 each consist of 32 compute nodes connected via an FDR InfiniBand network. The older system is pax9, 16 nodes connected by a QDR InfiniBand network.

=== Nodes ===
All nodes have two CPUs (sockets).
||Name||CPU||Memory||
||pax9[0-f]||Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz||48G||
||pax10-[00-31]||Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz||64G||
||pax11-[00-31]||Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz||128G||
Line 6: Line 25:

Applications for the cluster must be compiled on a 64 bit machine; at the moment this means either lx64 or linfini. There are MPI versions for the GCC, Intel and PGI compilers installed:

{{{
/usr/local/ibgd/mpi/osu/gcc/mvapich-0.9.5/bin/mpicc
/usr/local/ibgd/mpi/osu/intel/mvapich-0.9.5/bin/mpicc
/usr/local/ibgd/mpi/osu/pgi/mvapich-0.9.5/bin/mpicc
}}}
Use the 'module' command to first add a compiler implementation and then a version of MPI to your path, e.g.:
{{{
module add gnu mvapich2
}}}
OpenHPC provides the {{{module}}} command from the lmod project. It supports more features than the old environment-modules, including dependent modules that are shown only after loading their prerequisites; e.g. for {{{openmpi}}} you'll have to load the matching compiler module ({{{gnu}}} or {{{intel}}}, see the table below) first. A short example follows the table.
||module name ||version ||depends on ||
||gnu ||5.4.0 || ||
||gnu7||7.2.0 || ||
||intel ||18.0.0 || ||
||openmpi ||1.10.6 ||gnu ||
||openmpi ||1.10.7 ||intel ||
||openmpi3||3.0.0||gnu7/intel||
||mvapich2 ||2.2 ||gnu/gnu7/intel ||
||opencoarrays ||1.8.5 || ||
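As a rough sketch of how these dependencies work (module names taken from the table above):
{{{
module add intel       # load the compiler first
module add openmpi     # now the intel-built openmpi (1.10.7) can be loaded
module list            # show which modules are currently active
}}}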
Line 16: Line 41:
Compilers for C++ and Fortran are available as well.
==== Building applications ====
Build your application on any SL7 workgroup server, e.g. the pax8 machines pax80 to pax8f or the machine sl7.
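A minimal build could look like this (the module pair and the source file name {{{hello.c}}} are only placeholders; pick the compiler/MPI combination you need from the table above):
{{{
ssh pax80                     # any SL7 workgroup server will do
module add gnu openmpi        # compiler first, then the matching MPI module
mpicc -O2 -o hello hello.c    # mpicc is the compiler wrapper provided by the loaded MPI module
}}}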

==== Running your application interactively on pax8 ====
To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:

{{{
pax8a slots=8
pax8b slots=8
pax8c slots=8
pax8d slots=8
}}}
The command line would look like this:

{{{
/opt/ohpc/pub/mpi/openmpi-gnu/1.10.6/bin/mpirun -np 32 -machinefile ./machinefile ./program
}}}
More information on Open MPI can be found in the Open MPI FAQ: http://www.open-mpi.org/faq/

==== Building and running programs interactively ====
To use mvapich2, add one of its versions to your path and compile your application with that MPI compiler wrapper. Applications built with mvapich2 can only use InfiniBand network hardware, so they will work on the pax machines, but not across more than one farm machine or WGS.

The machine file format is different from the one for openmpi: you must list the host name once for every core you want to use, e.g. if you want to run four processes, two processes on each of pax89 and pax88:

{{{
pax88
pax89
pax88
pax89
}}}
The preferred way to run an application with mvapich2 is mpiexec, e.g.:
{{{
/usr/lib64/mvapich2-intel/bin/mpiexec -n 4 -machinefile ./machinefile /usr/lib64/mvapich2-intel/bin/mpitests-IMB-MPI1
}}}

== Batch System Access ==
/!\ '''ATTENTION''': The PAX cluster is now based on the Slurm scheduling system.
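A minimal Slurm job script for an MPI program could look like the sketch below; the partition name and the resource numbers are placeholders, check {{{sinfo}}} for the partitions that actually exist:
{{{
#!/bin/bash
#SBATCH --job-name=mpi-test
#SBATCH --nodes=2              # number of compute nodes
#SBATCH --ntasks-per-node=16   # MPI processes per node, adjust to the core count
#SBATCH --time=01:00:00        # wall clock limit
#SBATCH --partition=pax        # placeholder partition name

module add gnu openmpi prun    # modules as listed in the table above
prun ./program                 # prun (OpenHPC) starts the MPI processes under Slurm
}}}
Submit the script with {{{sbatch jobscript.sh}}} and check its state with {{{squeue}}}.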
===== Local Disk Space =====
Each node has a local directory /scratch with 1TB of space. It is cleared automatically at the end of the job.
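For example, a single-node job could stage its data to the local disk like this (the Lustre paths are placeholders):
{{{
cp /lustre/.../input.dat /scratch/    # stage input to the node-local disk
cd /scratch
./program input.dat output.dat
cp /scratch/output.dat /lustre/.../   # copy results back; /scratch is cleared when the job ends
}}}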

===== pax10 and pax11 I/O nodes =====
Most of the pax10 and pax11 machines have external 1 Gbit/s Ethernet connections to the storage. To allow faster storage access, four machines each in the pax10 and pax11 partitions are equipped with 10 Gbit/s Ethernet instead. To access them, you'll have to request the 10g feature in Slurm: {{{ --constraint=10g*1}}}. That way, the first process, the one executing the job script, will run on one of the machines with faster connectivity.
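In a job script the constraint can be requested like this:
{{{
#SBATCH --constraint=10g*1    # place the first task on one of the 10 Gbit/s I/O nodes
}}}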
== SL7 changes ==
As the versions and paths of the MPI implementations have changed, programs are not compatible between SL6 and SL7. You should rebuild your application on SL7, but you could also try Singularity.

The 'module' command was replaced by a different, more powerful implementation called lmod. It doesn't list all available modules; instead, it supports dependent modules, e.g. the MPI implementations built with 'gnu7' are shown only after {{{module add gnu7}}}.
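A sketch of this behaviour:
{{{
module avail       # the gnu7-dependent MPI modules are not listed yet
module add gnu7    # load the compiler
module avail       # now openmpi3 and mvapich2 built with gnu7 show up
}}}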
==== Running EL6 software using Singularity ====
It is possible to run software built on EL6 in a [[Singularity]] container. This works with mvapich2 binaries by calling singularity in the batch script like this:
{{{
mpiexec singularity exec /project/singularity/images/SL6.img yourbinary
}}}
However, mvapich2 2.2 isn't yet optimized for Singularity, so this is slower than running native programs.
Line 19: Line 95:
== Batch System Access ==
For Open MPI, Singularity is only supported in Open MPI >= 2.1, so you'll have to rebuild your program with openmpi3 as installed in the SL6 Singularity container:
{{{
singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6
}}}
and in the job script:
{{{
module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6
}}}
== AFS Access ==
The application binary must be available to all nodes, so it should be placed in an AFS or Lustre directory.
Line 21: Line 107:
A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. The parameter looks like this:
== Monitoring ==
Ganglia provides a web monitoring interface. These pages are only available from the internal network.
Line 23: Line 110:
{{{
#$ -pe mpich-ppn2 4
}}}
[[http://ganglia.zeuthen.desy.de/ganglia/?c=Parallel%20Clusters&m=load_one&r=hour&s=descending&hc=4&mc=2|interactive machines]] [[http://ganglia.zeuthen.desy.de/ganglia/?c=Gridengine%20PAX%20Farm&m=load_one&r=hour&s=descending&hc=4&mc=2|parallel batch machines]]
Line 25: Line 112:
It is important to request the right memory limit with the parameter h_vmem. The machines have 3673204k of RAM and by default two jobs are executed on one node, so the maximum amount of memory is 1650M per process.
== Further documentation ==
[[http://www-zeuthen.desy.de/technisches_seminar/texte/waschk_20100427.pdf|Paralleles Rechnen in Zeuthen - die neuen Cluster]] (Parallel Computing in Zeuthen - the new clusters), 04/27/10, technical seminar
Line 27: Line 115:
== AFS Access ==

The application binary must be available to all nodes, so it should be placed in an AFS directory.

Be aware that the batch system renews the AFS token only on the node that starts the first process (node 0). That's why you should access the AFS from that node. An example scenario looks like this (a shell sketch of these steps follows the list):

 1. Copy data from AFS to node 0.
 1. Copy it with scp to the nodes that need it, into the directory $TMPDIR; the machine names are listed in $TMPDIR/machines.
 1. Run your MPI job.
 1. Copy the results with scp from the local discs to node 0.
 1. Copy the data from node 0 to AFS.
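A rough shell sketch of these five steps (the AFS paths, file names and the binary {{{./program}}} are placeholders):
{{{
cp /afs/.../input.dat $TMPDIR/                           # 1. copy data from AFS to node 0
for host in $(sort -u $TMPDIR/machines); do
    scp $TMPDIR/input.dat $host:$TMPDIR/                 # 2. distribute it to the nodes that need it
done
mpirun -np 4 -machinefile $TMPDIR/machines ./program     # 3. run the MPI job
for host in $(sort -u $TMPDIR/machines); do
    scp $host:$TMPDIR/result.dat $TMPDIR/result.$host    # 4. collect the results from the local discs on node 0
done
cp $TMPDIR/result.* /afs/.../                            # 5. copy the data from node 0 to AFS
}}}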
[[http://www-zeuthen.desy.de/technisches_seminar/texte/Technisches_Seminar_Waschk.pdf|HPC-Clusters at DESY Zeuthen]], 11/22/06, technical seminar
