Differences between revisions 1 and 12 (spanning 11 versions)
Revision 1 as of 2010-05-14 12:57:37
Size: 282
Comment:
Revision 12 as of 2010-05-26 12:58:53
Size: 2200
Comment:
Deletions are marked like this. Additions are marked like this.
Line 2: Line 2:
----
Line 5: Line 5:
----
Line 6: Line 8:
The Intel Cluster Toolkit (ICT) is a collection of MPI-related tools that helps with debugging, fine-tuning and analyzing large MPI applications. ICT includes:

 * Intel C++ Compiler 11.1
 * Intel Debugger 11.1
 * Intel MPI Library 3.2
 * Intel MPI Benchmarks 3.2
 * Intel Trace Analyzer and Trace Collector 7.2
Line 8: Line 17:
Each of these parts and its use in the [[Cluster|cluster environment]] at DESY Zeuthen is described below.
Line 10: Line 20:
Initialize the Intel compiler and Intel MPI environment:
{{{#!c
ini ic openmpi_intel
}}}

Initialize ICT environment:
{{{#!c
 source /opt/intel/ict/3.2.2.013/ictvars.sh
 source /opt/products/idb/11.0/bin/intel64/idbvars.sh
 source /opt/products/bin/ifortvars.sh intel64
 source /opt/products/bin/iccvars.sh intel64
 export IDB_HOME=/opt/products/bin/
}}}
Line 12: Line 33:
Add {{{ LDFLAGS = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS) }}} to the IMB Makefile.
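For orientation, a minimal sketch of how that line could sit in the IMB Makefile (the compiler, object list and target names here are assumptions for illustration, not the actual IMB Makefile contents; only the LDFLAGS line comes from this page):
{{{
# Hypothetical IMB Makefile fragment: link against the Intel Trace
# Collector libraries via the VT_LIB_DIR / VT_ADD_LIBS variables that
# ictvars.sh exports into the environment.
CC      = mpiicc
LDFLAGS = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS)

IMB-MPI1: $(OBJS)
	$(CC) -o $@ $(OBJS) $(LDFLAGS)
}}}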
Line 13: Line 35:
== MPD daemons ==
TODO: machines.LINUX file
TODO: mpd.hosts
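Until the TODOs above are filled in, a sketch of the two files (the hostnames below are placeholders, not real Zeuthen node names): mpd.hosts lists the hosts that should join the MPD ring, one per line, and machines.LINUX uses the same one-host-per-line layout for sshconnectivity.exp and mpitune.
{{{
node01.example.net
node02.example.net
node03.example.net
}}}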
Line 15: Line 39:
Start the MPD daemons on all hosts listed in the mpd.hosts file:
{{{#!c
# ./sshconnectivity.exp machines.LINUX
# mpdboot -n 8 --rsh=ssh -f ./mpd.hosts
}}}
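Before launching jobs it can be worth verifying that the ring actually came up; mpdtrace prints the hostnames of all nodes that joined (this check is an addition to the original recipe, not part of it):
{{{#!c
# mpdtrace
}}}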

== Tracing information ==
Run an application for the trace analyzer:
{{{#!c
# export VT_LOGFILE_FORMAT=STF
# export VT_PCTRACE=5
# export VT_PROCESS="0:N ON"
# export VT_LOGFILE_PREFIX=IMB-MPI1_inst
# rm -fr $VT_LOGFILE_PREFIX; mkdir $VT_LOGFILE_PREFIX
# mpiexec -n 2 itcpin --run -- ./IMB-MPI1 -npmin 2 PingPong
# traceanalyzer ./IMB-MPI1_inst/IMB-MPI1.stf &
}}}

After you are done running and analysing your application, shut down the MPD daemons with the command:
{{{#!c
# mpdallexit
}}}
== Fine tuning ==
Invoke mpitune:
{{{#!c
# mpitune -f machines.LINUX -o ./ --app mpiexec -genv MPIEXEC_DEBUG 1 -n 2 ./IMB-MPI1 -np 2 PingPong
# mpiexec -tune app.conf -n 4 ./IMB-MPI1 -np 2 PingPong
}}}
Line 18: Line 69:
Start execution of the MPI program in debugging mode on 8 nodes each running 2 processes:
{{{#!c
# mpiexec -idb -genv MPIEXEC_DEBUG 1 -n 8 ./IMB-MPI1 -np 2 PingPong
}}}



IntelClusterToolkit (last edited 2010-10-13 16:09:44 by KonstantinBoyanov)