1. Overview

The Intel Cluster Toolkit (ICT) is a collection of MPI-related tools that help with debugging, fine-tuning, and analyzing large MPI applications. ICT includes:

  • Intel C++ Compiler 11.1
  • Intel Debugger 11.1
  • Intel MPI Library 3.2
  • Intel MPI Benchmarks 3.2
  • Intel Trace Analyzer and Trace Collector 7.2

2. Using ICT

Each of these parts and its use in the cluster environment at DESY Zeuthen is described next.

2.1. Setting up the environment

Initialize the Intel compiler and Intel MPI environment:

ini ic openmpi_intel

Initialize ICT environment:

 source /opt/intel/itac/7.2.2.006/bin/itacvars.sh
 source /opt/intel/impi/3.2.2.006/bin/mpivars.sh
 source /opt/products/idb/11.0/bin/ia32/idbvars.sh
 export I_MPI_CC=/opt/products/bin/icc
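
As a quick check that the environment is set up, verify that the Intel compiler and the MPI tools can be found; the exact paths and version numbers depend on the installation:

 which icc mpiexec traceanalyzer
 icc --version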

2.2. Compiling MPI applications with Intel MPI

Add

LDFLAGS     = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS)

to the IMB Makefile.
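
For compiling your own MPI applications the same environment can be used. The following is only a minimal sketch; the file name hello_mpi.c and the use of the mpicc compiler wrapper (which should pick up the icc compiler set via I_MPI_CC above) are illustrative assumptions, not part of the IMB setup:

/* hello_mpi.c - minimal MPI example (illustrative) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut down MPI */
    return 0;
}

Compile and run it with, for example:

 mpicc -o hello_mpi hello_mpi.c
 mpiexec -n 4 ./hello_mpi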

2.3. MPD daemons

The MPD daemons provide the runtime environment for starting parallel applications. The first step in setting them up is to establish SSH connectivity with the help of the sshconnectivity.exp script and the machines.LINUX file. The script is available in the attachment list at the end of this page. An example machines.LINUX (as well as mpd.hosts) file could look like this:

pax00
pax01
pax02
pax03

The mpd.hosts file is read by the MPD program when starting the daemons and serves as a configuration file describing on which hosts the daemons should be started. Setting up the SSH connectivity and starting the MPD daemons on all hosts listed in the above mpd.hosts file can then be done as follows:

# ./sshconnectivity.exp machines.LINUX
# mpdboot -n 4 --rsh=ssh -f ./mpd.hosts
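
To check that the daemons are actually running, the mpdtrace command can be used; it should print the names of the hosts on which an MPD daemon has been started:

# mpdtrace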

2.4. Tracing information

Run an application under the Trace Collector to generate trace data for the Trace Analyzer:

# export VT_LOGFILE_FORMAT=STF
# export VT_PCTRACE=5
# export VT_PROCESS="0:N ON"
# export VT_LOGFILE_PREFIX=IMB-MPI1_inst
# rm -fr $VT_LOGFILE_PREFIX; mkdir $VT_LOGFILE_PREFIX
# mpiexec -n 2 itcpin --run -- ./IMB-MPI1 -npmin 2 PingPong
# traceanalyzer ./IMB-MPI1_inst/IMB-MPI1.stf &

After you are done with running and analysing your application, shut down the MPD daemons with the command:

# mpdallexit

2.5. Fine tuning

Invoke mpitune to generate a tuned configuration for the application, and then run the application with the tuned settings:

# mpitune -f machines.LINUX -o ./ --app mpiexec -genv MPIEXEC_DEBUG 1 -n 2 ./IMB-MPI1 -np 2 PingPong
# mpiexec -tune app.conf -n 4 ./IMB-MPI1 -np 2 PingPong

2.6. Debugging

Start execution of the MPI program in debugging mode on 8 nodes, each running 2 processes:

# mpiexec -idb -genv MPIEXEC_DEBUG 1 -n 8 ./IMB-MPI1 -np 2 PingPong

Attachments:

  • sshconnectivity.exp (26.0 KB, 2010-07-06)
