Slurm Installation for Pax Cluster

Slurm is currently being tested as the scheduler for the pax11 machines, named pax11-[00-31].

Kerberos Integration

<!> You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set noaddresses=true in the file /etc/krb5.conf. To check if your ticket is addressless, call klist -v (Heimdal klist only).
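
On a self-maintained machine, the relevant fragment of /etc/krb5.conf looks roughly like this (a sketch; the rest of the file is site-specific):

    # /etc/krb5.conf (excerpt)
    [libdefaults]
        noaddresses = true

After that, obtain a fresh ticket and verify it:

    kinit
    klist -v        # Heimdal klist only; the ticket should list no addresses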

Slurm Commands

The most important commands:

sinfo      Information about the cluster
squeue     Show current job list
srun       Parallel command execution
sbatch     Submit a batch job
salloc     Reserve resources for interactive commands
scancel    Abort a job
sacct      Show accounting information
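
A short example session (a sketch; the job script name and job id are placeholders):

    sinfo                    # show partitions and node states
    sbatch slurm-mpi.job     # submit a batch job, prints the new job id
    squeue -u $USER          # list your own pending and running jobs
    scancel 12345            # abort job 12345 (placeholder id)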

Allocation

Slurm was configured to always schedule complete nodes to each job. The pax11 machines have hyperthreading enabled, and each hardware thread is seen as a CPU core by Slurm, so by default 64 MPI processes are assigned on a 32-core machine with hyperthreading. To prevent that, use the option -c 2 for sbatch, salloc or srun.
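
For example, a sketch of requesting one MPI rank per physical core instead of one per hardware thread (the job script name is a placeholder):

    sbatch -c 2 slurm-mpi.job    # 2 hardware threads per task, i.e. one task per physical core
    salloc -N 1 -c 2             # the same for an interactive allocation of one node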

Parallel Execution

Slurm has integrated execution support for parallel programs, replacing mpirun. However, whether you can use Slurm's srun command or have to use mpirun depends on the MPI library in use. To start a program built with mvapich2, run it with a command like srun --mpi=pmi2 -n 4 -N 2 program for 4 processes on two nodes. For openmpi, use mpirun instead.
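
In short (a sketch; ./program stands for your own binary):

    srun --mpi=pmi2 -n 4 -N 2 ./program    # mvapich2 build: srun starts the ranks directly
    mpirun -np 4 ./program                 # openmpi build: start via mpirun inside the allocation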

MPI Support

Before running MPI programs, the LD_LIBRARY_PATH variable must be set; this is done by loading the right environment module, e.g. module add intel openmpi.
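
A quick check that the environment is set up (module names as in the text above; they may differ between installations):

    module add intel openmpi    # load the Intel compiler and Open MPI modules
    module list                 # confirm which modules are loaded
    echo "$LD_LIBRARY_PATH"     # should now contain the MPI library directories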

Job scripts

Parameters for Slurm can be set on the sbatch command line or with #SBATCH lines at the top of the job script. The most important parameters are:

-J               job name
--get-user-env   copy environment variables
-n               number of cores
-N               number of nodes
-t               run time of the job, default is 30 minutes
-A               account, default is the same as the UNIX group
-p               partition of the cluster
--mail-type      configure email notifications, e.g. use --mail-type=ALL
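
As an illustration, a minimal sketch of a job-script header using some of these options (all values are placeholders):

    #SBATCH -J mytest           # job name
    #SBATCH -N 2                # two nodes
    #SBATCH -t 45               # 45 minutes of run time
    #SBATCH -p pax              # partition (placeholder name)
    #SBATCH --mail-type=ALL     # email notifications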

Time format

The run time of a job is given as minutes, as hours and minutes (HH:MM), or as days and hours (DD-HH). The maximum run time is set to 48 hours.
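
For example (both forms match the formats described above):

    #SBATCH -t 30      # plain minutes: a 30-minute run time (the default)
    #SBATCH -t 2-00    # days-hours: two days, i.e. the 48-hour maximum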

Examples

An example job script is attached as slurm-mpi.job.
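
As a sketch of what such a script might contain (the actual attachment may differ; program name and sizes are placeholders):

    #!/bin/bash
    #SBATCH -J mpi-example      # job name
    #SBATCH -N 2                # two full nodes
    #SBATCH -c 2                # one rank per physical core (see Allocation above)
    #SBATCH -t 60               # 60 minutes of run time
    #SBATCH --mail-type=ALL     # email notifications

    module add intel openmpi    # load compiler and MPI environment
    mpirun ./program            # openmpi build; for mvapich2 use srun --mpi=pmi2 instead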

Accounting

The jobs and their resource usage are stored in a database that is used by the fair-share part of the scheduler. You can view your account's jobs with the command sacct. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command sacct -S 2014-05-01. To view jobs from other accounts as well, use the --allusers option.
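
Typical calls (the date is the one used above):

    sacct                             # today's jobs of your account
    sacct -S 2014-05-01               # all jobs since May 1st, 2014
    sacct -S 2014-05-01 --allusers    # the same, including jobs of other accounts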
