1. Overview

The Intel Cluster Toolkit (ICT) is a collection of MPI-related tools that is helpful for debugging, fine-tuning and analyzing large MPI applications. ICT includes:

  • Intel C++ Compiler 11.1
  • Intel Debugger 11.1
  • Intel MPI Library 3.2
  • Intel MPI Benchmarks 3.2
  • Intel Trace Analyzer and Trace Collector 7.2

2. Using ICT

Each of these parts and its use in the cluster environment at DESY Zeuthen is described next.

2.1. Setting up the environment

Initialize the Intel compiler and Intel MPI environment:

ini ic openmpi_intel
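
A quick, optional sanity check that the Intel compiler is now on the PATH (exact paths and version output depend on the local installation):

 which icc                        # should point into the Intel compiler installation
 icc --version                    # prints the Intel C++ Compiler version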

Initialize ICT environment:

 source /opt/intel/itac/7.2.2.006/bin/itacvars.sh
 source /opt/intel/impi/3.2.2.006/bin/mpivars.sh
 source /opt/products/idb/11.0/bin/ia32/idbvars.sh
 export I_MPI_CC=/opt/products/bin/icc
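
The scripts above are expected to set the Trace Collector variables used further down (VT_LIB_DIR, VT_ADD_LIBS) as well as the I_MPI_CC override. A minimal check, assuming itacvars.sh exports the VT_* variables:

 echo $VT_LIB_DIR $VT_ADD_LIBS    # Trace Collector link path and libraries (assumption: set by itacvars.sh)
 echo $I_MPI_CC                   # should print /opt/products/bin/icc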

2.2. Compiling MPI applications with Intel MPI

Add

LDFLAGS     = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS)

to the IMB Makefile.
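
With this line in place, the benchmark binaries are linked against the Intel MPI and Trace Collector libraries. A minimal sketch of rebuilding the benchmark, assuming the IMB 3.2 sources are unpacked under ~/imb/src and the standard IMB-MPI1 make target is available (path and target name may differ on your installation):

# cd ~/imb/src                # hypothetical location of the IMB sources
# make clean
# make IMB-MPI1               # builds ./IMB-MPI1 with the LDFLAGS above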

2.3. MPD daemons

The first step in setting up the MPD daemons, which provide the runtime environment for starting the parallel applications, is to set up SSH connectivity with the help of the sshconnectivity.exp script and the machines.LINUX file. The script is available

TODO: mpd.hosts
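
The mpd.hosts file is a plain list of the machines that should join the MPD ring, one host name per line. A hypothetical example (the host names are placeholders, not actual DESY Zeuthen nodes):

 node01.example.org
 node02.example.org
 node03.example.org
 node04.example.org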

Start the MPD daemons on all hosts listed in the mpd.hosts file:

# ./sshconnectivity.exp machines.LINUX
# mpdboot -n 8 --rsh=ssh -f ./mpd.hosts
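
To verify that the ring is up on all machines, query the MPD daemons (mpdtrace is part of the MPD tool set shipped with Intel MPI):

# mpdtrace -l                 # lists host name and port of every running MPD daemon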

2.4. Tracing information

Run the application under the Trace Collector to produce trace data for the Trace Analyzer:

# export VT_LOGFILE_FORMAT=STF
# export VT_PCTRACE=5
# export VT_PROCESS="0:N ON"
# export VT_LOGFILE_PREFIX=IMB-MPI1_inst
# rm -fr $VT_LOGFILE_PREFIX; mkdir $VT_LOGFILE_PREFIX
# mpiexec -n 2 itcpin --run -- ./IMB-MPI1 -npmin 2 PingPong
# traceanalyzer ./IMB-MPI1_inst/IMB-MPI1.stf &
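
The trace data is written into the directory given by VT_LOGFILE_PREFIX. A quick check (the exact set of STF component files depends on the Trace Collector configuration):

# ls ./IMB-MPI1_inst          # should contain IMB-MPI1.stf plus its STF component files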

After you are done running and analysing your application, shut down the MPD daemons with the command:

# mpdallexit

2.5. Fine tuning

Invoke mpitune to find tuned Intel MPI settings for the application; the first command below writes the results to a configuration file (app.conf), which the second command then applies when running the benchmark:

# mpitune -f machines.LINUX -o ./ --app mpiexec -genv MPIEXEC_DEBUG 1 -n 2 ./IMB-MPI1 -np 2 PingPong
# mpiexec -tune app.conf -n 4 ./IMB-MPI1 -np 2 PingPong

2.6. Debugging

Start the MPI program in debugging mode on 8 nodes, each running 2 processes:

# mpiexec -idb -genv MPIEXEC_DEBUG 1 -n 8 ./IMB-MPI1 -np 2 PingPong
