#acl DvGroup:read,write,delete,revert,admin All:read

=== Slurm Installation for Pax Cluster ===

Slurm is currently being tested as the scheduler for the pax11 machines, named pax11-[00-31]. The client software is currently available on the machine ''sl7''.

==== Kerberos Integration ====

You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set {{{noaddresses=true}}} in the file {{{/etc/krb5.conf}}}. To check whether your ticket is addressless, call {{{klist -v}}} (Heimdal klist only).

==== Slurm Commands ====

The most important commands:

||[[http://slurm.schedmd.com/sinfo.html|sinfo]] ||Information about the cluster ||
||[[http://slurm.schedmd.com/squeue.html|squeue]] ||Show the current job list ||
||[[http://slurm.schedmd.com/srun.html|srun]] ||Parallel command execution ||
||[[http://slurm.schedmd.com/sbatch.html|sbatch]] ||Submit a batch job ||
||[[http://slurm.schedmd.com/salloc.html|salloc]] ||Reserve resources for interactive commands ||
||[[http://slurm.schedmd.com/scancel.html|scancel]] ||Abort a job ||
||[[http://slurm.schedmd.com/sacct.html|sacct]] ||Show accounting information ||

===== Allocation =====

Slurm is configured to always schedule complete nodes to each job. The pax11 machines have hyperthreading enabled, and each hardware thread is seen as a CPU core by Slurm, so by default 64 MPI processes are assigned on a 32-core machine with hyperthreading. To prevent that, use the option {{{-c 2}}} for sbatch, salloc or srun.

===== Parallel Execution =====

Slurm has integrated execution support for parallel programs, replacing mpirun. However, whether you can use Slurm's srun command or mpirun depends on the MPI library used. To start a program based on mvapich2, run it with a command like {{{srun --mpi=pmi2 -n 4 -N 2 program}}} for 4 processes on two nodes. For openmpi, use mpirun instead.

===== MPI Support =====

Before running MPI programs, the LD_LIBRARY_PATH variable must be set. This is done by loading the right environment module, e.g. {{{module add intel openmpi}}}.

==== Job scripts ====

Parameters for Slurm can be set on the sbatch command line or on lines starting with {{{#SBATCH}}} in the script. The most important parameters are:

||-J ||job name ||
||--get-user-env ||copy environment variables ||
||-n ||number of cores ||
||-N ||number of nodes ||
||-t ||run time of the job, default is 30 minutes ||
||-A ||account, by default the same as the UNIX group ||
||-p ||partition of the cluster ||
||--mail-type ||configure email notifications, e.g. use --mail-type=ALL ||

===== Time format =====

The run time of a job is given as plain minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH). The maximum run time is set to 48 hours.

===== Examples =====

An example job script is in [[attachment:slurm-mpi.job]]; a minimal sketch is also shown at the end of this page.

==== Accounting ====

Jobs and their resource usage are stored in a database that is used for the fair-share part of the scheduler. You can view your account's jobs with the command {{{sacct}}}. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command {{{sacct -S 2014-05-01}}}. To view jobs from other accounts as well, use the {{{--allusers}}} option.

==== EL7 changes ====

The new system is based on EL7. Binaries built for EL6 will not run; they must be recompiled and linked against the new MPI libraries.
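As a rough sketch of such a rebuild (assuming the {{{intel}}} and {{{openmpi}}} modules from the table below, and {{{yourprog.c}}} as a placeholder source file):

{{{
# load a compiler and an MPI implementation provided by OpenHPC
module add intel openmpi

# recompile the program against the EL7 MPI libraries
mpicc yourprog.c -o yourprog
}}}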
We install the MPI software built by the [[http://openhpc.community|OpenHPC]] project; this includes both mvapich2 and openmpi for the Intel and GNU compilers. If you need [[http://build.openhpc.community/OpenHPC:/1.3:/Factory/CentOS_7/src/|additional software provided by the OpenHPC project]] that has not been installed yet, please request it.

==== Running EL6 software ====

It is possible to run software built on EL6 in a [[Singularity]] container. This works with mvapich2 binaries by calling singularity in the batch script like this:
{{{
mpiexec singularity exec /project/singularity/images/SL6.img yourbinary
}}}
However, mvapich2 2.2 is not yet optimized for Singularity, so this is slower than running native programs.

For openmpi, Singularity is only supported in openmpi >= 2.1, which is why you have to rebuild your program with openmpi3 as installed in the SL6 singularity container:
{{{
singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6
}}}
and in the job script:
{{{
module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6
}}}

===== Available Software =====

OpenHPC provides the {{{module}}} command from the lmod project. It supports more features than the old environment-modules, including dependent modules that are shown only after loading their prerequisites, e.g. for {{{openmpi}}} you have to load the {{{intel}}} module first.

||module name ||version ||depends on ||
||gnu ||5.4.0 || ||
||gnu7 ||7.2.0 || ||
||intel ||18.0.0 || ||
||openmpi ||1.10.6 ||gnu ||
||openmpi ||1.10.7 ||intel ||
||openmpi3 ||3.0.0 ||gnu7/intel ||
||mvapich2 ||2.2 ||gnu/gnu7/intel ||
||opencoarrays ||1.8.5 || ||

===== Local Disk Space =====

Each node has a local directory /scratch with 1 TB of space. It is cleared automatically at the end of the job.

===== Known Issues =====

 1. Openmpi3 has a bug that makes programs hang in certain situations: https://github.com/open-mpi/ompi/issues/3251
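For reference, here is the minimal job script sketch mentioned in the Examples section above, combining the sbatch parameters, module loading and srun usage described on this page. The job name, node and process counts, run time and program name ({{{./yourprog}}}) are placeholders, and the module selection assumes the mvapich2/Intel toolchain:

{{{
#!/bin/bash
#SBATCH -J mpi-test          # job name (placeholder)
#SBATCH -N 2                 # number of nodes
#SBATCH -n 4                 # total number of MPI processes
#SBATCH -c 2                 # two hardware threads per process, i.e. one process per physical core
#SBATCH -t 00:30:00          # run time in HH:MM:SS
#SBATCH --mail-type=ALL      # email notifications

# load the compiler and MPI modules (assumed: mvapich2 built with the Intel compiler)
module add intel mvapich2

# start the program through Slurm's integrated launcher (mvapich2 uses pmi2)
srun --mpi=pmi2 ./yourprog
}}}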