1. Overview
The Intel Cluster Toolkit (ICT) is a collection of MPI-related tools that help with debugging, fine-tuning and analyzing large MPI applications. ICT includes:
- Intel C++ Compiler 11.1
- Intel Debugger 11.1
- Intel MPI Library 3.2
- Intel MPI Benchmarks 3.2
- Intel Trace Analyzer and Trace Collector 7.2
2. Using ICT
Each of these parts and its use in the cluster environment at DESY Zeuthen is described below.
2.1. Setting up the environment
Initialize the Intel compiler and Intel MPI environment:
ini ic openmpi_intel
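If the initialization succeeded, the compiler and MPI wrappers should be on the PATH. A minimal sanity check, sketched here as a loop (the tool names icc, mpicc and mpiexec are assumed to be what the setup above provides):

```shell
# Report where each expected tool resolves; count any that are missing.
# (icc/mpicc/mpiexec are the names the Intel setup above should provide.)
missing=0
for tool in icc mpicc mpiexec; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool -> $(command -v "$tool")"
  else
    echo "$tool not found (environment not initialized?)"
    missing=$((missing + 1))
  fi
done
echo "$missing of 3 tools missing"
```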
Initialize ICT environment:
source /opt/intel/itac/8.0.0.011/bin/itacvars.sh
source /opt/intel/impi/4.0.0.028/bin/mpivars.sh
source /opt/products/idb/11.0/bin/ia32/idbvars.sh
export I_MPI_CC=/opt/products/bin/icc
2.2. Compiling MPI applications with Intel MPI
Add
LDFLAGS = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS)
to the IMB Makefile.
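For context, a minimal sketch of how that line might sit in an IMB-style Makefile (everything besides LDFLAGS is illustrative; VT_LIB_DIR and VT_ADD_LIBS come from the ITAC environment sourced above):

```make
# Illustrative fragment only; OBJS and the target name are placeholders.
CC      = mpiicc
LDFLAGS = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS)

IMB-MPI1: $(OBJS)
	$(CC) -o $@ $(OBJS) $(LDFLAGS)
```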
2.3. MPD daemons
The first step in setting up the MPD daemons, which provide the runtime environment for starting parallel applications, is to establish SSH connectivity with the help of the sshconnectivity.exp script and the machines.LINUX file. The script is available in the attachment list at the end of this page. An example machines.LINUX (as well as mpd.hosts) file could look like this:
pax00
pax01
pax02
pax03
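On clusters with many nodes, such a host list can be generated rather than typed by hand. A small sketch, assuming the hostname pattern from the example above:

```shell
# Generate mpd.hosts for pax00..pax03; adjust the range for your cluster.
: > mpd.hosts
for i in 0 1 2 3; do
  echo "pax0$i" >> mpd.hosts
done
cp mpd.hosts machines.LINUX   # the same host list serves both files here
cat mpd.hosts
```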
The mpd.hosts file is read by the MPD program when starting the daemons and serves as a configuration file describing on which hosts the daemons should be started. With the above mpd.hosts file, the MPD daemons can be started on all hosts as follows:
# ./sshconnectivity.exp machines.LINUX
# mpdboot -n 4 --rsh=ssh -f ./mpd.hosts
2.4. Tracing information
Run an application for the trace analyzer:
# export VT_LOGFILE_FORMAT=STF
# export VT_PCTRACE=5
# export VT_PROCESS="0:N ON"
# export VT_LOGFILE_PREFIX=IMB-MPI1_inst
# rm -fr $VT_LOGFILE_PREFIX; mkdir $VT_LOGFILE_PREFIX
# mpiexec -n 2 itcpin --run -- ./IMB-MPI1 -npmin 2 PingPong
# traceanalyzer ./IMB-MPI1_inst/IMB-MPI1.stf &
After you are done running and analyzing your application, shut down the MPD daemons with:
# mpdallexit
2.5. Fine tuning
Invoke mpitune to search for optimized Intel MPI settings; the resulting configuration file (app.conf) is then passed to mpiexec via the -tune option:
# mpitune -f machines.LINUX -o ./ --app mpiexec -genv MPIEXEC_DEBUG 1 -n 2 ./IMB-MPI1 -np 2 PingPong
# mpiexec -tune app.conf -n 4 ./IMB-MPI1 -np 2 PingPong
2.6. Debugging
Start the MPI program in debugging mode on 8 nodes, each running 2 processes:
# mpiexec -idb -genv MPIEXEC_DEBUG 1 -n 8 ./IMB-MPI1 -np 2 PingPong
<<AttachList>>