This web page will no longer be updated. Please refer to https://dv-zeuthen.desy.de/services/parallel_computing/ for current information.
Usage of the Linux Clusters at DESY Zeuthen
Introduction
There are two dedicated parallel clusters available for running parallel applications, but you can also run parallel MPI jobs in the HTCondor farm. The documentation in Batch System Usage (https://dv-zeuthen.desy.de/services/batch/) applies there.
For discussions and information regarding the usage of the PAX cluster, there is a mailing list: <zn-cluster AT desy DOT de>. To subscribe, send an email to <sympa AT desy DOT de> with the subject: subscribe zn-cluster
Hardware
The batch system consists of two partitions: pax12 (rome) has 16 nodes and HDR Infiniband; pax11 (broadwell) consists of 30 compute nodes connected via a FDR Infiniband network.
Nodes
The AMD machines (pax12) have one socket, the Intel machines (pax11) have two.
Name          | CPU                                        | Code Name | Cores | Memory
pax12-[00-15] | AMD EPYC 7702P 64-Core Processor @ 2GHz    | Rome      | 64    | 256G
pax11-[00-31] | Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz | Broadwell | 16    | 128G
Software Environment
The pax machines have a software environment that is slightly different from the normal installation: it includes the OpenHPC software stack and a different version of the module command. To build in the right environment on any machine, run the /project/apptainer/images/pax.img image. You can also submit your jobs from the apptainer container if you run it on an EL7 WGS like this:
apptainer run -B /etc/passwd /project/apptainer/images/pax.img
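If you only want to run a single build command instead of an interactive shell, apptainer exec works as well; a sketch (the make invocation and project path are placeholders):

# run one command inside the pax environment (same bind mount as above)
apptainer exec -B /etc/passwd /project/apptainer/images/pax.img make -C ~/myproject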
Building Applications
Use the 'module' command to first add a compiler implementation and then a version of MPI to your path e.g.:
module add gnu mvapich2
OpenHPC provides the module command from the lmod project. It supports more features than the old environment-modules package, including dependent modules that are shown only after loading their prerequisites; e.g. for openmpi you'll have to load the intel module first.
module name  | version | depends on
gnu          | 5.4.0   |
gnu7         | 7.3.0   |
gnu8         | 8.3.0   |
gnu9         | 9.3.0   |
gnu12        | 12.2.0  |
intel        | 2021.4  |
hdf5         | 1.10.1  | gnu
openmpi      | 1.10.7  | gnu/intel
openmpi3     | 3.1.0   | gnu7
openmpi3     | 3.1.4   | gnu8/intel
openmpi4     | 4.0.5   | gnu8/gnu9/gnu12/intel
mvapich2     | 2.2     | gnu/gnu7
mvapich2     | 2.3.2   | gnu8/intel
impi         | 2021.4  | gnu/gnu8/intel
opencoarrays | 1.8.11  |
opencoarrays | 2.3.1   | gnu7 openmpi3
opencoarrays | 2.8.0   | gnu8 openmpi3
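For example, a typical build of an MPI program with the GNU toolchain might look like the following sketch (hello.c is only a placeholder name):

module add gnu openmpi      # compiler first, then a matching MPI module
mpicc -O2 hello.c -o hello  # mpicc wraps gcc with the OpenMPI include and library paths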
Interactive tests
You can run interactive jobs in Slurm after allocating nodes with salloc, e.g.: salloc -p rome -N 2 -c 2. To get an interactive shell on the allocated machines, use the command srun --pty bash.
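Putting it together, an interactive test session might look like this sketch (module choices and the program name are placeholders):

salloc -p rome -N 2 -c 2       # reserve two rome nodes
module add gnu openmpi prun    # load a compiler, an MPI implementation and the prun wrapper
prun ./hello                   # start the MPI program on the allocated nodes
exit                           # release the allocation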
OpenMPI
To run an MPI program outside the batch system, you must specify a machinefile listing all the machines and the number of cores your application should run on. A typical machine file looks like this:
pax8a slots=8
pax8b slots=8
pax8c slots=8
pax8d slots=8
The command line would look like this:
/opt/ohpc/pub/mpi/openmpi-gnu/1.10.7/bin/mpirun -np 32 -machinefile ./machinefile ./program
More information on openmpi is in the openmpi FAQ: http://www.open-mpi.org/faq/
Mvapich2
To use mvapich2, add one of the mvapich2 modules to your path and compile your application with its mpi compiler. Applications built with mvapich2 can only use Infiniband network hardware, so they will work on the pax machines, but not across more than one farm machine or WGS.
The machine file format is different from the one for openmpi, you must list the host name for every core you want to use, e.g. if you want to run four processes, two processes on each of pax89 and pax88:
pax88
pax89
pax88
pax89
The preferred way to run an application with mvapich2 is mpiexec, e.g.:
/opt/ohpc/pub/mpi/mvapich2-intel/2.2/bin/mpiexec -n 4 -machinefile ./machinefile /opt/ohpc/pub/libs/intel/mvapich2/imb/2018.1/bin/IMB-MPI1
Intel MPI
To use Intel MPI, add a compiler module followed by impi. Use the compiler wrappers 'mpicc' and 'mpif90' for the GNU compiler or 'mpiicc' and 'mpiifort' for the Intel compiler. To run the resulting application, set the following environment variable:
export FI_PROVIDER=verbs
In a Slurm job, please use the prun wrapper to start your application.
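Put together, a minimal Intel MPI workflow could look like this sketch (solver.c is a placeholder name):

module add intel impi            # Intel compiler plus Intel MPI
mpiicc -O2 solver.c -o solver    # build with the Intel compiler wrapper
export FI_PROVIDER=verbs         # select the Infiniband (verbs) fabric provider
prun ./solver                    # inside a Slurm job, prun starts the MPI processes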
Batch System Access
ATTENTION: The PAX cluster is now based on the SLURM scheduling system.
Slurm Commands
The most important commands:
sinfo   | Information about the cluster
squeue  | Show current job list
srun    | Parallel command execution
sbatch  | Submit a batch job
salloc  | Reserve resources for interactive commands
scancel | Abort a job
sview   | Graphical user interface to view and modify Slurm state
sacct   | Show accounting information
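A few typical invocations, as an illustration (the job ID is made up):

sinfo                   # list partitions and node states
squeue -u $USER         # show only your own jobs
sbatch slurm-mpi.job    # submit a batch script
scancel 123456          # abort job 123456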
Allocation
Slurm is configured to always schedule complete nodes to each job. The pax machines have hyperthreading enabled and each hardware thread is seen as a CPU core by Slurm, so by default 64 MPI processes are assigned on a 32-core machine with hyperthreading. To prevent that, use the option -c 2 for sbatch, salloc or srun.
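For example, to get one MPI process per physical core on two broadwell nodes (numbers are illustrative):

sbatch -p broadwell -N 2 -n 64 -c 2 slurm-mpi.job   # 2 nodes x 32 physical cores, one rank per physical core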
Parallel Execution
Slurm has integrated execution support for parallel programs, replacing mpirun. To work around slight differences in the needed options, use prun instead of srun for starting MPI applications. You'll have to load the prun module first.
MPI Support
Before running MPI programs, the LD_LIBRARY_PATH variable must be set. This is done by loading the right environment module, e.g. module add intel openmpi.
Job scripts
Parameters to Slurm can be set on the sbatch command line or in lines starting with #SBATCH in the script. The most important parameters are:
-J             | job name
--get-user-env | copy environment variables
-n             | number of cores
-N             | number of nodes
-t             | run time of the job, default is 30 minutes
-A             | account, by default the same as the UNIX group
-p             | partition of the cluster
--mail-type    | configure email notifications, e.g. use --mail-type=ALL
Be careful with --get-user-env: it will also copy loaded modules into the job.
Time format
The run time of a job is given as minutes, as hours, minutes and seconds (HH:MM:SS), or as days and hours (DD-HH); for example, -t 90, -t 01:30:00 and -t 2-00 are all valid. The maximum run time is set to 48 hours.
Examples
An example job script is in slurm-mpi.job
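The attachment is not reproduced here, but a minimal MPI job script along the lines described above might look like this sketch (job name, sizes and program are placeholders):

#!/bin/bash
#SBATCH -J mpi-test          # job name
#SBATCH -p rome              # partition
#SBATCH -N 2                 # number of nodes
#SBATCH -n 128               # number of MPI processes
#SBATCH -c 2                 # one rank per physical core despite hyperthreading
#SBATCH -t 01:00:00          # run time limit
#SBATCH --mail-type=ALL      # email notifications

module add gnu openmpi prun  # compiler, MPI implementation and the prun wrapper
prun ./hello                 # prun starts the MPI processes on the allocated nodes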
Accounting
Jobs and their resource usage are stored in a database that is used for the fair-share part of the scheduler. You can view your account's jobs with the command sacct. With no parameters, only today's jobs are shown; to view all jobs since May 1st, use the command sacct -S 2014-05-01. To view jobs from other accounts as well, use the --allusers option.
Local Disk Space
Each node has a local directory /scratch with up to 770GB of space. It is cleared automatically at the end of the job.
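A job script could use it like this sketch (note that /scratch is local to each node; input.dat and myprog are placeholders):

cp $HOME/input.dat /scratch/                          # stage input on the node-local disk
$HOME/bin/myprog /scratch/input.dat /scratch/out.dat  # work on the fast local copy
cp /scratch/out.dat $HOME/results/                    # copy results back; /scratch is cleared after the job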
I/O nodes
Most of the pax11 machines have external 1 Gbit/s Ethernet connections to the storage. To allow faster storage access, four machines in the pax11 partition are equipped with 10 Gbit/s Ethernet instead. To access them, you'll have to request the 10g feature in Slurm: --constraint=10g*1. That way, the first process, the one executing the job script, will run on one of the machines with faster connectivity. All pax12 machines have 10 Gbit/s Ethernet as well.
Partitions and backfilling
The cluster consists of two partitions: rome has the faster machines and is the default; broadwell has the older nodes. Jobs can run on only one type of node. The special partition backfill is used to fill up otherwise empty nodes. Jobs running there are automatically terminated by Slurm if a job in one of the main partitions needs the nodes.
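Submitting to the backfill partition works like any other submission; as an illustration:

sbatch -p backfill -t 04:00:00 slurm-mpi.job   # may be terminated early if a rome or broadwell job needs the nodes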
Running EL6 software using Singularity
It is possible to run software built on EL6 in an Apptainer container. This works with mvapich2 binaries by calling Apptainer in the batch script like this:
mpiexec apptainer exec /project/singularity/images/SL6.img yourbinary
However, Mvapich2 2.2 isn't optimized yet for Apptainer, so this is slower than running native programs.
Openmpi supports Singularity only from version 2.1 on, which is why you'll have to rebuild your program with openmpi3 as installed in the SL6 Singularity container:
singularity exec /project/singularity/images/SL6.img /usr/lib64/openmpi-3.0/bin/mpicc yourprog.c -o yourprog.sl6
and in the job script:
module add gnu7 openmpi3 prun
prun singularity exec -B /scratch /project/singularity/images/SL6.img yourprog.sl6
Additional Software
The software installation is based on the OpenHPC project. We provide only a subset of the available software. If you need any of the other available components, send a request to zn-cluster@desy.de
AFS Access
The application binary must be available to all nodes, that's why it should be placed in an AFS or Lustre directory.
Monitoring
Ganglia provides a web monitoring interface for the parallel batch machines: http://ganglia.zeuthen.desy.de/ganglia/?c=Slurm%20PAX%20farm&m=load_one&r=hour&s=descending&hc=4&mc=2 This page is only available from the internal network.
Known Issues
Openmpi3 has a bug that makes the program hang in certain situations (https://www.mail-archive.com/users@lists.open-mpi.org//msg31839.html); use openmpi instead.
You need to acquire an addressless Kerberos ticket for Slurm to work. This is the default on supported DESY machines. On self-maintained machines like notebooks, simply set noaddresses=true in the file /etc/krb5.conf. To check if your ticket is addressless, call klist -v (Heimdal klist only).
The command sbcast cannot be used to copy a file to /scratch, as that is a bind mounted directory. Use /batch/job.${SLURM_JOB_ID}.0/scratch as target.
The module command might be unavailable for users with tcsh as their login shell. As a workaround, they can run bash -l and use the --get-user-env option in the job.
There are some compatibility problems between third-party module files (e.g. Intel 2021) and the module command.
In the pax apptainer image, squeue shows all users as nobody. To work around, run apptainer run -B /etc/passwd /project/apptainer/images/pax.img
Further documentation
Paralleles Rechnen in Zeuthen - die neuen Cluster (http://www-zeuthen.desy.de/technisches_seminar/texte/waschk_20100427.pdf), 04/27/10, technical seminar
HPC-Clusters at DESY Zeuthen, 11/22/06, technical seminar