Usage of the Linux Clusters at DESY Zeuthen
At Zeuthen, two clusters are available: one with 16 dual Opteron machines connected by Infiniband, and one with 8 dual Xeon machines connected by Myrinet. They are integrated into the SGE batch system, so the documentation in ["Batch System Usage"] applies to them.
Building Applications
Since the upgrade to SL5, both clusters use the Open MPI implementation of the MPI standard.
There are MPI versions for the GCC, Intel and PGI compilers installed:
/opt/openmpi/gcc/bin/mpicc
/opt/openmpi/intel/bin/mpicc
/opt/openmpi/pgi/bin/mpicc
Compilers for C++ and Fortran are available as well.
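As a sketch, a simple MPI program could be compiled with the GCC wrapper like this (yourapp.c is a placeholder for your own source file; substitute intel or pgi in the path to use the corresponding compiler):
/opt/openmpi/gcc/bin/mpicc -O2 -o yourapp yourapp.c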
Infiniband
Applications for the plejade cluster must be compiled on a 64 bit SL5 machine; at the moment, this is sl5-64.ifh.de only.
Myrinet
Applications for the geminide cluster must be compiled on a 32 bit SL5 machine; at the moment, this is sl5.ifh.de only.
Batch System Access
A job script for a parallel job must specify the parallel environment and the number of required CPUs. For the Infiniband cluster, the parameter looks like this:
#$ -pe mpich-ppn2 4
On the Myrinet cluster, it is similar:
#$ -pe mpichgm-ppn2 4
It is important to request the right limit for memory with the parameter h_vmem.
The Opteron machines have 3.3G of RAM and by default two jobs are executed on one node, so the maximum amount of memory is 1650M per process:
#$ -l h_vmem=1650M
The Xeons have 922.5M of RAM.
If your application uses threads, it is recommended to set h_stack (which defaults to the same value as h_vmem) to a sane value, e.g. 10M, as shown below.
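For example, to set the stack size limit to 10M:
#$ -l h_stack=10M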
Since SL5 is not the default OS on the batch farm yet, you must add this limit as well:
#$ -l os=sl5
Be sure to call the right mpirun version for your compiler. If your application was compiled with GCC, use
/opt/openmpi/gcc/bin/mpirun -np $NSLOTS yourapp
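Putting these options together, a minimal job script for the Infiniband cluster might look like the following sketch (yourapp, the slot count and the shell are placeholders; adjust them for your application):
#!/bin/sh
#$ -pe mpich-ppn2 4
#$ -l h_vmem=1650M
#$ -l os=sl5
/opt/openmpi/gcc/bin/mpirun -np $NSLOTS yourapp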
AFS Access
The application binary must be available on all nodes; therefore it should be placed in an AFS directory.
Further documentation
[http://www-zeuthen.desy.de/technisches_seminar/texte/Technisches_Seminar_Waschk.pdf HPC-Clusters at DESY Zeuthen] , 11/22/06, technical seminar