1. Overview

The Intel Cluster Toolkit (ICT) is a collection of MPI-related tools that help with debugging, fine-tuning and analyzing large MPI applications. ICT includes:

  • Intel C++ Compiler 11.1
  • Intel Debugger 11.1
  • Intel MPI Library 3.2
  • Intel MPI Benchmarks 3.2
  • Intel Trace Analyzer and Trace Collector 7.2

2. Using ICT

Each of these parts and its use in the cluster environment at DESY Zeuthen is described below.

2.1. Setting up the environment

Initialize the Intel compiler and Intel MPI environment:

ini ic openmpi_intel

Initialize the ICT environment:

 source /opt/intel/itac/8.0.0.011/bin/itacvars.sh
 source /opt/intel/impi/4.0.0.028/bin/mpivars.sh
 source /opt/intel/ictce/4.0.0.020/ictvars.sh
 source /opt/products/idb/11.0/bin/ia32/idbvars.sh
 export I_MPI_CC=/opt/products/bin/icc
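
Once both sets of scripts have been sourced, a quick way to verify the environment is to check that the Intel tools are found in the search path (an optional sanity check; the installation paths may differ):

# which icc mpiexec traceanalyzer
# icc --version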

2.2. Compiling MPI applications with Intel MPI

Add

LDFLAGS     = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS)

to the IMB Makefile.
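
For your own MPI programs the Intel MPI compiler wrappers can be used directly instead of editing a Makefile. A minimal sketch, assuming a C source file hello.c (not part of this page):

# mpiicc -o hello hello.c

The resulting binary can then be started with mpiexec once the MPD daemons are running (see below).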

2.3. MPD daemons

The first step in setting up the MPD daemons, which provide the environment for starting the parallel applications, is to set up SSH connectivity with the help of the sshconnectivity.exp script and the machines.LINUX file. The script is available in the attachment list at the end of this page. An example machines.LINUX (and mpd.hosts) file could look like this:

pax00
pax01
pax02
pax03

The mpd.hosts file is read by the MPD program when starting the daemons and serves as a configuration file describing on which hosts the daemons should be started. Starting the MPD daemons on all hosts listed in the above mpd.hosts file can therefore be done in the following way:

# ./sshconnectivity.exp machines.LINUX
# mpdboot -n 4 --rsh=ssh -f ./mpd.hosts
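
Whether the ring of daemons has come up on all hosts can be verified with the MPD trace utility that ships with Intel MPI:

# mpdtrace -l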

2.4. Tracing information

Run an application under the Trace Collector and open the result in the Trace Analyzer:

# export VT_LOGFILE_FORMAT=STF
# export VT_PCTRACE=5
# export VT_PROCESS="0:N ON"
# export VT_LOGFILE_PREFIX=IMB-MPI1_inst
# rm -fr $VT_LOGFILE_PREFIX; mkdir $VT_LOGFILE_PREFIX
# mpiexec -n 2 itcpin --run -- ./IMB-MPI1 -npmin 2 PingPong
# traceanalyzer ./IMB-MPI1_inst/IMB-MPI1.stf &
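
If the application can be rebuilt, an alternative to instrumenting the binary with itcpin is to link it against the Trace Collector at build time. A sketch, assuming the mpiicc wrapper and a hypothetical source file myapp.c; the trace is again written as an STF file that can be opened in traceanalyzer:

# mpiicc -trace -o myapp myapp.c
# mpiexec -n 2 ./myapp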

After you are done running and analysing your application, shut down the MPD daemons with the command:

# mpdallexit

2.5. Fine tuning

Invoke mpitune to generate a tuned configuration (app.conf), then run the application with these settings:

# mpitune -f machines.LINUX -o ./ --app mpiexec -genv MPIEXEC_DEBUG 1 -n 2 ./IMB-MPI1 -np 2 PingPong
# mpiexec -tune app.conf -n 4 ./IMB-MPI1 -np 2 PingPong

2.6. Debugging

Start execution of the MPI program in debugging mode with 8 MPI processes:

# mpiexec -idb -genv MPIEXEC_DEBUG 1 -n 8 ./IMB-MPI1 -np 2 PingPong
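
If the Intel Debugger is not available, the MPD-based mpiexec can also attach gdb to the MPI processes. A sketch; check mpiexec --help to confirm the -gdb option is present in the installed Intel MPI version:

# mpiexec -gdb -n 8 ./IMB-MPI1 -np 2 PingPong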

Attachments:

  • sshconnectivity.exp (2010-07-06, 26.0 KB)
