1. Overview

The Intel Cluster Toolkit (ICT) is a collection of MPI-related tools that help with debugging, fine-tuning, and analyzing large MPI applications. ICT includes:

  • Intel C++ Compiler 11.1
  • Intel Debugger 11.1
  • Intel MPI Library 3.2
  • Intel MPI Benchmarks 3.2
  • Intel Trace Analyzer and Trace Collector 7.2

2. Using ICT

Each of these parts and its use in the cluster environment at DESY Zeuthen is described below.

2.1. Setting up the environment

Initialize the Intel compiler and Intel MPI environment:

ini ic openmpi_intel

Initialize ICT environment:

 source /opt/intel/ict/3.2.2.013/ictvars.sh
 source /opt/intel/itac/7.2.2.006/bin/itacvars.sh
 source /opt/intel/impi/3.2.2.006/bin/mpivars.sh
 source /opt/products/idb/11.0/bin/ia32/idbvars.sh
 export I_MPI_CC=/opt/products/bin/icc
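
As a quick sanity check (not part of the official setup), one can verify that the tools are on the PATH and that the ITAC variables used in the next section are defined:

# which mpiexec icc
# echo $VT_LIB_DIR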

2.2. Compiling MPI applications with Intel MPI

Add

LDFLAGS     = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS)

to the IMB Makefile.
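
With the Makefile adjusted, the benchmarks can be rebuilt. A minimal sketch, assuming the IMB 3.2 sources are unpacked under ./imb/src and the make_ict settings shipped with IMB are used (the path is illustrative):

# cd imb/src
# make -f make_ict IMB-MPI1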

2.3. MPD daemons

The first step in setting up the MPD daemons, which provide the environment for starting the parallel applications, is to establish SSH connectivity with the help of the sshconnectivity.exp script and the machines.LINUX file. The script is available in the attachment list at the end of this page. An example machines.LINUX (as well as mpd.hosts) file could look like this:

pax00
pax01
pax02
pax03

The mpd.hosts file is read by the MPD program when starting the daemons and serves as a configuration file describing the hosts on which the daemons should be started. With the above mpd.hosts file, setting up connectivity and starting the MPD daemons on all hosts can be done in the following way:

# ./sshconnectivity.exp machines.LINUX
# mpdboot -n 4 --rsh=ssh -f ./mpd.hosts
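
To check that the daemon ring came up on all four hosts, mpdtrace prints the list of machines currently running an MPD daemon:

# mpdtrace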

2.4. Tracing information

Run an application for the trace analyzer:

# export VT_LOGFILE_FORMAT=STF
# export VT_PCTRACE=5
# export VT_PROCESS="0:N ON"
# export VT_LOGFILE_PREFIX=IMB-MPI1_inst
# rm -fr $VT_LOGFILE_PREFIX; mkdir $VT_LOGFILE_PREFIX
# mpiexec -n 2 itcpin --run -- ./IMB-MPI1 -npmin 2 PingPong
# traceanalyzer ./IMB-MPI1_inst/IMB-MPI1.stf &
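
If the analyzer cannot find the trace, it is worth checking that the collector actually wrote the STF files into the output directory (a simple sanity check, not an ITAC requirement):

# ls $VT_LOGFILE_PREFIX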

After you are done running and analyzing your application, shut down the MPD daemons with the command:

# mpdallexit

2.5. Fine tuning

Invoke mpitune to generate a tuned configuration, then run the application with the tuned settings via mpiexec -tune:

# mpitune -f machines.LINUX -o ./ --app mpiexec -genv MPIEXEC_DEBUG 1 -n 2 ./IMB-MPI1 -np 2 PingPong
# mpiexec -tune app.conf -n 4 ./IMB-MPI1 -np 2 PingPong

2.6. Debugging

Start execution of the MPI program in debugging mode on 8 nodes, each running 2 processes:

# mpiexec -idb -genv MPIEXEC_DEBUG 1 -n 8 ./IMB-MPI1 -np 2 PingPong

  • [[attachment:sshconnectivity.exp]] (2010-07-06 14:20:08, 26.0 KB)
