/!\ '''This web page will no longer be updated.''' Please use this link for [[https://dv-zeuthen.desy.de/services/parallel_computing/|current information]].

----
= Usage of the Linux Clusters at DESY Zeuthen =

== Introduction ==
There are two dedicated parallel clusters available for running parallel applications, but you can also run parallel MPI jobs in the HTCondor farm. The documentation in [[https://dv-zeuthen.desy.de/services/batch/|Batch System Usage]] applies there.

For discussions and information regarding the usage of the PAX cluster, a mailing list has been introduced: <>. To subscribe to that list, send an email to <> with the subject: '''subscribe zn-cluster'''

== Hardware ==
The batch system consists of two partitions: pax12 (rome) has 16 nodes connected via HDR Infiniband, pax11 (broadwell) has 30 compute nodes connected via an FDR Infiniband network.

=== Nodes ===
The AMD machines (pax12) have one CPU socket per node, the Intel machines (pax11) have two.

||Name||CPU||Code Name||Cores per CPU||Memory||
||pax12-[00-15]||AMD EPYC 7702P 64-Core Processor @ 2GHz||Rome||64||256G||
||pax11-[00-31]||Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz||Broadwell||16||128G||

== Software Environment ==
The pax machines have a software environment that differs slightly from the normal installation: it includes the OpenHPC software stack and a different version of the {{{module}}} command. To build in the right environment on any machine, run the {{{/project/apptainer/images/pax.img}}} image. If you run the Apptainer container on an EL7 WGS like this, you can also submit your jobs from inside it:
{{{
apptainer run -B /etc/passwd /project/apptainer/images/pax.img
}}}

== Building Applications ==
Use the {{{module}}} command to first add a compiler implementation and then a version of MPI to your path, e.g.:
{{{
module add gnu mvapich2
}}}
OpenHPC provides the {{{module}}} command from the lmod project. It supports more features than the old environment-modules, including dependent modules that are shown only after loading their prerequisites, e.g. for {{{openmpi}}} you'll have to load the {{{intel}}} module first.

||module name||version||depends on||
||gnu||5.4.0|| ||
||gnu7||7.3.0|| ||
||gnu8||8.3.0|| ||
||gnu9||9.3.0|| ||
||gnu12||12.2.0|| ||
||intel||2021.4|| ||
||hdf5||1.10.1||gnu||
||openmpi||1.10.7||gnu/intel||
||openmpi3||3.1.0||gnu7||
||openmpi3||3.1.4||gnu8/intel||
||openmpi4||4.0.5||gnu8/gnu9/gnu12/intel||
||mvapich2||2.2||gnu/gnu7||
||mvapich2||2.3.2||gnu8/intel||
||impi||2021.4||gnu/gnu8/intel||
||opencoarrays||1.8.11|| ||
||opencoarrays||2.3.1||gnu7 openmpi3||
||opencoarrays||2.8.0||gnu8 openmpi3||

=== Interactive tests ===
You can run interactive jobs in Slurm after allocating nodes with salloc, e.g.: {{{salloc -p rome -N 2 -c 2}}}. To get an interactive shell on the allocated machines, use the command {{{srun --pty bash}}}.

==== OpenMPI ====
To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machinefile looks like this:
{{{
pax8a slots=8
pax8b slots=8
pax8c slots=8
pax8d slots=8
}}}
The command line would look like this:
{{{
/opt/ohpc/pub/mpi/openmpi-gnu/1.10.7/bin/mpirun -np 32 -machinefile ./machinefile ./program
}}}
More information on OpenMPI can be found in the OpenMPI FAQ: http://www.open-mpi.org/faq/

==== Mvapich2 ====
To use mvapich2, add one of its versions to your path and compile your application with that MPI compiler. Applications built with mvapich2 can use only Infiniband network hardware, so they will work on the pax machines, but not across more than one farm machine or WGS.
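As an illustration, building an MPI program with mvapich2 could look like the following sketch (the source file {{{hello.c}}} is a hypothetical placeholder; the module names are the ones from the table above):
{{{
# select the GNU compiler and the matching mvapich2 MPI stack
module add gnu mvapich2
# compile with the MPI compiler wrapper provided by the loaded MPI module
mpicc -O2 hello.c -o hello
}}}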
The machine file format is different from the one used by openmpi: you must list the host name once for every core you want to use, e.g. if you want to run four processes, two on each of pax88 and pax89:
{{{
pax88
pax89
pax88
pax89
}}}
The preferred way to run an application with mvapich2 is mpiexec, e.g.:
{{{
/opt/ohpc/pub/mpi/mvapich2-intel/2.2/bin/mpiexec -n 4 -machinefile ./machinefile /opt/ohpc/pub/libs/intel/mvapich2/imb/2018.1/bin/IMB-MPI1
}}}

==== Intel MPI ====
To use Intel MPI, add a compiler module followed by impi. Use the compiler wrappers like 'mpicc' and 'mpif90' for the GNU compiler or 'mpiicc' and 'mpiifort' for the Intel compiler. To run the resulting application, set this environment variable:
{{{
export FI_PROVIDER=verbs
}}}
In a Slurm job, please use the prun wrapper to start your application.

== Batch System Access ==
/!\ '''ATTENTION''': The PAX cluster is now based on the Slurm scheduling system.

=== Slurm Commands ===
The most important commands:

||[[http://slurm.schedmd.com/sinfo.html|sinfo]]||Information about the cluster||
||[[http://slurm.schedmd.com/squeue.html|squeue]]||Show current job list||
||[[http://slurm.schedmd.com/srun.html|srun]]||Parallel command execution||
||[[http://slurm.schedmd.com/sbatch.html|sbatch]]||Submit a batch job||
||[[http://slurm.schedmd.com/salloc.html|salloc]]||Reserve resources for interactive commands||
||[[http://slurm.schedmd.com/scancel.html|scancel]]||Abort a job||
||[[https://slurm.schedmd.com/sview.html|sview]]||Graphical user interface to view and modify the Slurm state||
||[[http://slurm.schedmd.com/sacct.html|sacct]]||Show accounting information||

=== Allocation ===
Slurm is configured to always schedule complete nodes to each job. The pax machines have hyperthreading enabled and each hardware thread is seen as a CPU core by Slurm, so by default 64 MPI processes are started on a 32-core machine with hyperthreading. To prevent that, use the option {{{-c 2}}} for sbatch, salloc or srun.

=== Parallel Execution ===
Slurm has integrated execution support for parallel programs, replacing mpirun. To work around slight differences in the required options, use prun instead of srun for starting MPI applications. You'll have to load the prun module first.

=== MPI Support ===
Before running MPI programs, the LD_LIBRARY_PATH variable must be set. This is done by loading the right environment module, e.g. {{{module add intel openmpi}}}.

=== Job scripts ===
Parameters for Slurm can be set on the sbatch command line or on lines starting with {{{#SBATCH}}} in the script. The most important parameters are:

||-J||job name||
||--get-user-env||copy environment variables||
||-n||number of cores||
||-N||number of nodes||
||-t||run time of the job, default is 30 minutes||
||-A||account, default is the same as the UNIX group||
||-p||partition of the cluster||
||--mail-type||configure email notifications, e.g. use --mail-type=ALL||

Be careful with {{{--get-user-env}}}: it will also copy loaded modules into the job.

==== Time format ====
The run time of a job is given as minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH). The maximum run time is set to 48 hours.

==== Examples ====
An example job script is in [[attachment:slurm-mpi.job]].
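For orientation, a minimal MPI job script could look like the following sketch (the resource requests and the program name {{{./mpi-program}}} are placeholders, not the contents of the attached example):
{{{
#!/bin/bash
#SBATCH -J mpi-test       # job name
#SBATCH -p rome           # partition
#SBATCH -N 2              # number of nodes
#SBATCH -c 2              # two hardware threads (one physical core) per MPI process
#SBATCH -t 00:30:00       # run time limit (HH:MM:SS)

# load the compiler, the MPI implementation and the prun wrapper
module add gnu openmpi prun

# start the MPI application on all allocated nodes
prun ./mpi-program
}}}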
=== Accounting ===
Jobs and their resource usage are stored in a database that is used for the fair-share part of the scheduler. You can view your account's jobs with the command {{{sacct}}}. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command {{{sacct -S 2014-05-01}}}. To view jobs from other accounts as well, use the {{{--allusers}}} option.

=== Local Disk Space ===
Each node has a local directory /scratch with up to 770GB of space. It is cleared automatically at the end of the job.

=== I/O nodes ===
Most of the pax11 machines have external 1Gbit/s Ethernet connections to the storage. To allow faster storage access, four machines in the pax11 partition are equipped with 10Gbit/s Ethernet instead. To access them, you'll have to request the 10g feature in Slurm: {{{--constraint=10g*1}}}. That way, the first process, the one executing the job script, will run on one of the machines with faster connectivity. All pax12 machines have 10Gbit/s Ethernet as well.

=== Partitions and backfilling ===
The cluster consists of two partitions: rome has the faster machines and is the default, broadwell has the older nodes. Jobs can run on only one type of node. The special partition backfill is used for filling up otherwise empty nodes. Jobs running there are automatically terminated by Slurm if another job in one of the main partitions needs the nodes.

==== Running EL6 software using Singularity ====
It is possible to run software built on EL6 in an [[Apptainer]] container. This works with mvapich2 binaries by calling Apptainer in the batch script like this:
{{{
mpiexec apptainer exec /project/singularity/images/SL6.img yourbinary
}}}
However, Mvapich2 2.2 is not yet optimized for Apptainer, so this is slower than running native programs. OpenMPI supports Singularity only from version 2.1 on, which is why you'll have to rebuild your program with the openmpi3 installed in the SL6 Singularity container:
{{{
singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6
}}}
and start it in the job script like this:
{{{
module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6
}}}

== Additional Software ==
The software installation is based on the [[http://openhpc.community|OpenHPC project]]. We provide only a subset of the available software. If you need any of the other [[https://github.com/openhpc/ohpc/wiki/Component-List-v1.3.9|available components]], send a request to zn-cluster@desy.de.

== AFS Access ==
The application binary must be available on all nodes, which is why it should be placed in an AFS or Lustre directory.

== Monitoring ==
Ganglia provides a web monitoring interface: [[http://ganglia.zeuthen.desy.de/ganglia/?c=Slurm%20PAX%20farm&m=load_one&r=hour&s=descending&hc=4&mc=2|Parallel Batch Machines]]. This page is only reachable from the internal network.

== Known Issues ==
 1. Openmpi3 has a bug that makes programs hang in certain situations (https://www.mail-archive.com/users@lists.open-mpi.org//msg31839.html). Use openmpi instead.
 1. You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set {{{noaddresses=true}}} in the file {{{/etc/krb5.conf}}} (see the sketch after this list). To check whether your ticket is addressless, call {{{klist -v}}} (Heimdal klist only).
 1. The command {{{sbcast}}} cannot be used to copy a file to /scratch, as that is a bind-mounted directory. Use /batch/job.${SLURM_JOB_ID}.0/scratch as the target instead.
 1. The {{{module}}} command might be unavailable for users with tcsh as their login shell. As a workaround, they can run {{{bash -l}}} and use the {{{--get-user-env}}} option in the job.
 1. There are some compatibility problems between third-party module files (e.g. Intel 2021) and the module command.
 1. In the pax apptainer image, {{{squeue}}} shows all users as ''nobody''. To work around this, run {{{apptainer run -B /etc/passwd /project/apptainer/images/pax.img}}}.
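As a sketch, the Kerberos setting mentioned in the Known Issues above would typically go into the {{{[libdefaults]}}} section of {{{/etc/krb5.conf}}}; the rest of the file stays unchanged:
{{{
[libdefaults]
    # request addressless tickets, which Slurm needs (see Known Issues)
    noaddresses = true
}}}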
== Further documentation ==
[[http://www-zeuthen.desy.de/technisches_seminar/texte/waschk_20100427.pdf|Paralleles Rechnen in Zeuthen - die neuen Cluster]], 04/27/10, technical seminar

[[http://www-zeuthen.desy.de/technisches_seminar/texte/Technisches_Seminar_Waschk.pdf|HPC-Clusters at DESY Zeuthen]], 11/22/06, technical seminar