= Slurm Installation for Pax Cluster =
Slurm is currently being tested as the scheduler for the pax11 machines, named pax11-[00-31]. The client software is currently available on the machine ''sl7''.
===== Kerberos Integration =====
<!> You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set {{{noaddresses=true}}} in the file {{{/etc/krb5.conf}}}. To check if your ticket is addressless, call {{{klist -v}}} (Heimdal klist only).
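On a self-maintained machine, the setting belongs in the {{{[libdefaults]}}} section; a minimal sketch (your file will contain more entries):

{{{
# /etc/krb5.conf -- only needed on self-maintained machines
[libdefaults]
    noaddresses = true
}}}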
===== Slurm Commands =====
The most important commands:
||[[http://slurm.schedmd.com/sinfo.html|sinfo]] ||Information about the cluster ||
||[[http://slurm.schedmd.com/squeue.html|squeue]] ||Show current job list ||
||[[http://slurm.schedmd.com/srun.html|srun]] ||Parallel command execution ||
||[[http://slurm.schedmd.com/sbatch.html|sbatch]] ||Submit a batch job ||
||[[http://slurm.schedmd.com/salloc.html|salloc]] ||Reserve resources for interactive commands ||
||[[http://slurm.schedmd.com/scancel.html|scancel]] ||Abort a job ||
||[[http://slurm.schedmd.com/sacct.html|sacct]] ||Show accounting information ||
===== Allocation =====
Slurm was configured to always schedule complete nodes to each job. The pax11 machines have hyperthreading enabled, and each hardware thread is seen as a CPU core by Slurm; by default, a job on a 32-core machine with hyperthreading is therefore assigned 64 MPI processes. To prevent that, use the option {{{-c 2}}} for sbatch, salloc or srun.
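For example, to get one process per physical core on two 32-core nodes (a sketch using {{{hostname}}} as a stand-in for a real program):

{{{
# 2 nodes x 32 physical cores = 64 tasks, 2 hardware threads per task
srun -N 2 -n 64 -c 2 hostname
}}}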
===== Parallel Execution =====
Slurm has integrated execution support for parallel programs, replacing mpirun. However, whether you can use Slurm's srun command or mpirun depends on the MPI library in use. To start a program built with mvapich2, run it with a command like {{{srun --mpi=pmi2 -n 4 -N 2 program}}} for 4 processes on two nodes. For openmpi, use mpirun instead.
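A sketch of both launch styles for a 4-process run on two nodes; {{{./program}}} is a placeholder binary:

{{{
# mvapich2: start the processes through Slurm's PMI2 interface
srun --mpi=pmi2 -n 4 -N 2 ./program

# openmpi: start the processes with mpirun inside the allocation
mpirun -np 4 ./program
}}}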
===== MPI Support =====
Before running MPI programs, the LD_LIBRARY_PATH variable must be set; this is done by loading the right environment module, e.g. {{{module add intel openmpi}}}.
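For example, to load the Intel compiler with the matching openmpi and check the result (a sketch):

{{{
module add intel openmpi   # compiler module first, then the MPI module
module list                # verify what is loaded
echo $LD_LIBRARY_PATH      # should now contain the MPI library directories
}}}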
===== Job scripts =====
Parameters to Slurm can be set on the sbatch command line or in lines starting with {{{#SBATCH}}} in the script. The most important parameters are listed below; a job script sketch follows the table:
||-J ||job name ||
||--get-user-env ||copy environment variables ||
||-n ||number of cores ||
||-N ||number of nodes ||
||-t ||run time of the job, default is 30 minutes ||
||-A ||account, default the same as UNIX group ||
||-p ||partition of the cluster ||
||--mail-type ||configure email notifications, e.g. use --mail-type=ALL ||
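A minimal job script sketch combining these parameters; the job name and program are placeholders:

{{{
#!/bin/bash
#SBATCH -J myjob           # job name (placeholder)
#SBATCH -N 2               # two nodes
#SBATCH -n 64              # 64 tasks in total
#SBATCH -c 2               # two hardware threads per task, see Allocation above
#SBATCH -t 02:00:00        # two hours of run time
#SBATCH --mail-type=ALL    # mail on begin, end and failure
#SBATCH --get-user-env

module add intel mvapich2
srun --mpi=pmi2 ./program  # placeholder binary
}}}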
===== Time format =====
The runtime of a job is given in minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH). The maximum run time was set to 48 hours.
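A few examples of accepted values:

{{{
#SBATCH -t 90        # 90 minutes
#SBATCH -t 01:30:00  # 1 hour, 30 minutes
#SBATCH -t 2-00      # 2 days, i.e. the 48-hour maximum
}}}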
===== Examples =====
An example job script is in [[attachment:slurm-mpi.job]].
==== Accounting ====
The jobs and their resource usage are stored in a database that is used for the fair-share part of the scheduler. You can view your account's jobs with the command {{{sacct}}}. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command {{{sacct -S 2014-05-01}}}. To view jobs from other accounts as well, use the {{{--allusers}}} option.
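Typical invocations (the date and the column selection are just examples):

{{{
sacct                                        # today's jobs from your account
sacct -S 2014-05-01                          # all jobs since May 1st
sacct -S 2014-05-01 --allusers               # include other accounts' jobs
sacct --format=JobID,JobName,Elapsed,State   # choose the columns to show
}}}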
==== EL7 changes ====
The new system is based on EL7; binaries built for EL6 will not run and must be recompiled and linked against the new MPI libraries. We install the MPI software built by the [[http://openhpc.community|OpenHPC]] project; this includes both mvapich2 and openmpi for the Intel and GNU compilers. If you need [[http://build.openhpc.community/OpenHPC:/1.3:/Factory/CentOS_7/src/|additional software provided by the OpenHPC project]] that wasn't installed yet, please request it.
===== Available Software =====
OpenHPC provides the {{{module}}} command from the lmod project. It supports more features than the old environment-modules, including dependent modules that are shown only after loading their prerequisites; e.g. for {{{openmpi}}} you'll have to load the {{{intel}}} module first (see the sketch after the table).
||module name ||version ||depends on ||
||gnu ||5.4.0 || ||
||gnu7 ||7.2.0 || ||
||intel ||18.0.0 || ||
||openmpi ||1.10.6 ||gnu ||
||openmpi ||1.10.7 ||intel ||
||openmpi3 ||3.0.0 ||gnu7/intel ||
||mvapich2 ||2.2 ||gnu/gnu7/intel ||
||opencoarrays ||1.8.5 || ||
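A sketch of the resulting workflow; with lmod's module hierarchy, the MPI modules only become visible after their compiler module has been loaded:

{{{
module avail        # the MPI modules are not listed yet
module load intel
module avail        # now the intel-dependent modules (openmpi, mvapich2) appear
module load openmpi
}}}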
===== Local Disk Space =====
Each node has a local directory {{{/scratch}}} with 1 TB of space. It is cleared automatically at the end of the job.
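A sketch of staging data through the local scratch space; the paths are placeholders. Note that {{{/scratch}}} is node-local, so on multi-node jobs each node sees its own copy, and results must be copied home before the job ends:

{{{
cp $HOME/data/input.dat /scratch/    # stage input to the fast local disk
cd /scratch
srun --mpi=pmi2 ./program input.dat  # placeholder binary
cp output.dat $HOME/results/         # copy results home before the job ends
}}}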
===== Known Issues =====
 1. Openmpi has a bug that makes programs crash with a bus error in certain situations: https://github.com/open-mpi/ompi/issues/3251
 1. Mvapich2 complains about missing CMA support; set {{{MV2_SMP_USE_CMA=0}}} in the job script (see the sketch below).
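For the mvapich2 issue, a job script sketch of the workaround:

{{{
export MV2_SMP_USE_CMA=0    # work around the missing CMA support complaint
srun --mpi=pmi2 ./program   # placeholder binary
}}}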