1. Overview
The Intel Cluster Toolkit (ICT) is a collection of MPI-related tools that helps with debugging, fine-tuning and analyzing large MPI applications. ICT includes:
- Intel C++ Compiler 11.1
- Intel Debugger 11.1
- Intel MPI Library 3.2
- Intel MPI Benchmarks 3.2
- Intel Trace Analyzer and Trace Collector 7.2
2. Using ICT
Each of these components and its use in the cluster environment at DESY Zeuthen is described below.
2.1. Setting up the environment
Initialize the Intel compiler and Intel MPI environment:
ini ic openmpi_intel
Initialize ICT environment:
source /opt/intel/itac/7.2.2.006/bin/itacvars.sh
source /opt/intel/impi/3.2.2.006/bin/mpivars.sh
source /opt/products/idb/11.0/bin/ia32/idbvars.sh
export I_MPI_CC=/opt/products/bin/icc
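A quick way to check that the environment was picked up correctly is to query the tools the scripts above put on the PATH, for example:
which mpiexec icc idb
mpiexec -V
icc --version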
2.2. Compiling MPI applications with Intel MPI
Add
LDFLAGS = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS)
to the IMB Makefile.
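For illustration only, a minimal sketch of how these flags enter a manual build of a single MPI source file; hello.c is a placeholder name, mpiicc is the Intel MPI compiler wrapper, and the VT_* variables are assumed to be set by the ICT environment above:
# mpiicc -c hello.c
# mpiicc -o hello hello.o -L$VT_LIB_DIR -lmpi $VT_ADD_LIBS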
2.3. MPD daemons
The first step in setting up the MPD daemons, which provide the runtime environment for starting the parallel applications, is to set up SSH connectivity with the help of the sshconnectivity.exp script and the machines.LINUX file. The script is available in the attachment list at the end of this page. An example machines.LINUX (as well as mpd.hosts) file could look like this:
pax00
pax01
pax02
pax03
The mpd.hosts file is read by the MPD program when starting the daemons and serves as a configuration file describing on which hosts the daemons should be started. Starting the MPD daemons on all hosts listed in the above mpd.hosts file can then be done as follows:
# ./sshconnectivity.exp machines.LINUX
# mpdboot -n 4 --rsh=ssh -f ./mpd.hosts
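Whether the ring of daemons actually came up on all hosts can be checked with mpdtrace, which lists the hosts that are part of the ring:
# mpdtrace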
2.4. Tracing information
Run the application so that it produces trace data for the Trace Analyzer:
# export VT_LOGFILE_FORMAT=STF
# export VT_PCTRACE=5
# export VT_PROCESS="0:N ON"
# export VT_LOGFILE_PREFIX=IMB-MPI1_inst
# rm -fr $VT_LOGFILE_PREFIX; mkdir $VT_LOGFILE_PREFIX
# mpiexec -n 2 itcpin --run -- ./IMB-MPI1 -npmin 2 PingPong
# traceanalyzer ./IMB-MPI1_inst/IMB-MPI1.stf &
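As an alternative to instrumenting the unmodified binary at run time with itcpin, the application can also be linked against the Trace Collector at build time; a sketch, assuming the -trace option of the Intel MPI compiler wrapper is available in this installation (the object file name is just an example):
# mpiicc -trace -o IMB-MPI1 imb.o
# mpiexec -n 2 ./IMB-MPI1 -npmin 2 PingPong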
After you are done running and analysing your application, shut down the MPD daemons with the command:
# mpdallexit
2.5. Fine tuning
Invoke mpitune and then run the application with the tuned settings:
# mpitune -f machines.LINUX -o ./ --app mpiexec -genv MPIEXEC_DEBUG 1 -n 2 ./IMB-MPI1 -np 2 PingPong
# mpiexec -tune app.conf -n 4 ./IMB-MPI1 -np 2 PingPong
2.6. Debugging
Start execution of the MPI program in debugging mode with 8 processes:
# mpiexec -idb -genv MPIEXEC_DEBUG 1 -n 8 ./IMB-MPI1 -np 2 PingPong