Usage of the Linux Clusters at DESY Zeuthen
There are no dedicated parallel clusters available at the moment, but you can run parallel MPI jobs in the SGE farm. The documentation in Batch_System_Usage applies there.
Building Applications
Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed. The machines were recently upgraded to the default SL5.3 openmpi packages. For 64 bit applications use the installation in /usr/lib64/openmpi/1.2.7-gcc/bin; for 32 bit use the binaries from /usr/lib/openmpi/1.2.7-gcc/bin.
64 bit MPI Applications can be compiled on any 64 bit SL5 machine, e.g. sl5-64.ifh.de.
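As a sketch, a 64 bit build on one of these machines could look like the following (hello.c is a hypothetical source file; the mpicc location is the 64 bit path given above):

```shell
# Put the 64 bit openmpi tools first in the search path
export PATH=/usr/lib64/openmpi/1.2.7-gcc/bin:$PATH

# Compile a hypothetical MPI source file; for a 32 bit build,
# use /usr/lib/openmpi/1.2.7-gcc/bin/mpicc instead
mpicc -O2 -o hello hello.c
```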
Batch System Access
A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. The parameter looks like this for up to 8 slots on a single node:
#$ -pe multicore-mpi 8
For more MPI processes, use -pe mpi.
Be sure to call the right mpirun version for your architecture. If your application was compiled for 64 bit, use
/usr/lib64/openmpi/1.2.7-gcc/bin/mpirun -np $NSLOTS yourapp
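Putting the pieces together, a complete job script for an 8 process MPI job could look like this sketch (the -pe line and the mpirun path are taken from this page; the shell choice, the -cwd option and the application name yourapp are illustrative):

```shell
#!/bin/sh
#$ -pe multicore-mpi 8
#$ -cwd

# Start one MPI process per granted slot;
# $NSLOTS is set by the batch system
/usr/lib64/openmpi/1.2.7-gcc/bin/mpirun -np $NSLOTS yourapp
```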
AFS Access
The application binary must be available on all nodes; for this reason it should be placed in an AFS directory.
BLAS library
Both ATLAS and GotoBLAS are available.
- ATLAS is in /opt/products/atlas
- libgoto is in /usr/lib (32 bit) or /usr/lib64 (64 bit), respectively.
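For illustration, linking against either library might look like the commands below. The library names and the lib subdirectory under /opt/products/atlas are assumptions based on typical GotoBLAS/ATLAS installations, not verified for these machines:

```shell
# GotoBLAS: libgoto is picked up from /usr/lib or /usr/lib64
gcc -o myapp myapp.c -lgoto

# ATLAS: assumed lib subdirectory and standard ATLAS library names
gcc -o myapp myapp.c -L/opt/products/atlas/lib -lcblas -latlas
```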
Further documentation
HPC-Clusters at DESY Zeuthen, 11/22/06, technical seminar