= Usage of the Linux Clusters at DESY Zeuthen =

There are no dedicated parallel clusters available at the moment, but you can run parallel MPI jobs in the SGE farm. The documentation in [[Batch_System_Usage]] applies there.

== Building Applications ==

Since SL5, all batch worker nodes have the openmpi implementation of the MPI standard installed. MPI versions for the GCC, Intel and PGI compilers are available:

{{{
/opt/openmpi/gcc/bin/mpicc
/opt/openmpi/intel/bin/mpicc
/opt/openmpi/pgi/bin/mpicc
}}}

Compilers for C++ and Fortran are available as well. MPI applications can be compiled on any 64-bit SL5 machine, e.g. sl5-64.ifh.de.

== Batch System Access ==

A job script designated for a parallel job needs to specify the parallel environment and the number of required CPUs. The parameter looks like this:

{{{
#$ -pe multicore-mpi 4
}}}

Be sure to call the right mpirun version for your compiler. If your application was compiled with GCC, use

{{{
/opt/openmpi/gcc/bin/mpirun -np $NSLOTS yourapp
}}}

== AFS Access ==

The application binary must be available on all nodes, so it should be placed in an AFS directory.

== BLAS library ==

Both ATLAS and Goto``BLAS are available:

 * ATLAS is in /opt/products/atlas
 * libgoto is in /usr/lib or /usr/lib64, respectively.

== Further documentation ==

[[http://www-zeuthen.desy.de/technisches_seminar/texte/Technisches_Seminar_Waschk.pdf|HPC-Clusters at DESY Zeuthen]], technical seminar, 11/22/06
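== Example: building an application ==

As a concrete illustration of the build step described above, compiling a C source file with the GCC flavour of openmpi might look like this. This is a sketch: the source file name `yourapp.c`, the optimisation flag and the ATLAS library names (`-lcblas -latlas`) are assumptions, not taken from this page — check the libraries actually installed under /opt/products/atlas before linking.

{{{
# Compile an MPI program with the GCC-based openmpi wrapper
/opt/openmpi/gcc/bin/mpicc -O2 -o yourapp yourapp.c

# If the program uses BLAS routines, linking against ATLAS might look like
# this (library names are an assumption; verify them on the build host):
/opt/openmpi/gcc/bin/mpicc -O2 -o yourapp yourapp.c \
    -L/opt/products/atlas/lib -lcblas -latlas
}}}

Use the intel or pgi wrapper instead if the rest of your code is built with those compilers, so that runtime libraries stay consistent.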
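== Example: complete job script ==

Putting the pieces above together, a minimal job script for a GCC-built MPI application might look like the sketch below. The PE name `multicore-mpi`, the mpirun path and the `$NSLOTS` variable are taken from this page; the shell, the extra `#$` options and the AFS path to `yourapp` are placeholders you need to adapt.

{{{
#!/bin/bash
# Request the MPI parallel environment with 4 slots
#$ -pe multicore-mpi 4
# Run the job in the directory it was submitted from
#$ -cwd

# Use the mpirun that matches the compiler the binary was built with
# (GCC here). SGE sets $NSLOTS to the number of granted slots.
# The binary must live in AFS so all nodes can see it; the path below
# is a placeholder.
/opt/openmpi/gcc/bin/mpirun -np $NSLOTS /afs/path/to/yourapp
}}}

Submit the script with qsub as described in [[Batch_System_Usage]].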