1. Overview
The Intel Cluster Toolkit (ICT) is a collection of MPI-related tools that helps with debugging, fine-tuning, and analyzing large MPI applications. ICT includes:
- Intel C++ Compiler 11.1
- Intel Debugger 11.1
- Intel MPI Library 3.2
- Intel MPI Benchmarks 3.2
- Intel Trace Analyzer and Trace Collector 7.2
2. Using ICT
Each of these components and its use in the cluster environment at DESY Zeuthen is described below.
2.1. Setting up the environment
Initialize the Intel compiler and Intel MPI environment:
# ini ic openmpi_intel
Initialize ICT environment:
source /opt/intel/ict/3.2.2.013/ictvars.sh
source /opt/products/idb/11.0/bin/intel64/idbvars.sh
source /opt/products/bin/ifortvars.sh intel64
source /opt/products/bin/iccvars.sh intel64
export IDB_HOME=/opt/products/bin/
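If you script this setup (e.g. in a job script), it can help to source each file only if it actually exists. A minimal sketch for the two scripts above that take no argument; the paths are the ones given here and will differ on other installations:

```shell
# Source each environment script only if present; count what was loaded.
# ifortvars.sh and iccvars.sh are omitted because they need the intel64 argument.
loaded=0
for f in /opt/intel/ict/3.2.2.013/ictvars.sh \
         /opt/products/idb/11.0/bin/intel64/idbvars.sh; do
  if [ -r "$f" ]; then
    . "$f"
    loaded=$((loaded + 1))
  fi
done
echo "sourced $loaded ICT environment script(s)"
```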
2.2. Compiling MPI applications with Intel MPI
Add LDFLAGS = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS) to the IMB Makefile.
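In the IMB Makefile the line would look like this (VT_LIB_DIR and VT_ADD_LIBS are set by the ICT environment scripts sourced above):

```make
# Link IMB against Intel MPI and the Trace Collector libraries
LDFLAGS = -L$(VT_LIB_DIR) -lmpi $(VT_ADD_LIBS)
```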
2.3. MPD daemons
TODO: machines.LINUX file
TODO: mpd.hosts
Start MPD daemons on all hosts in mpd.hosts file:
# ./sshconnectivity.exp machines.LINUX
# mpdboot -n 8 --rsh=ssh -f ./mpd.hosts
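The mpd.hosts file is simply a list of host names, one per line. A sketch with hypothetical node names that also derives the daemon count for mpdboot from the file, so -n and the file stay consistent:

```shell
# Create a hypothetical mpd.hosts (real node names depend on the cluster)
cat > mpd.hosts <<'EOF'
node01
node02
node03
node04
EOF
# Derive the number of daemons from the file instead of hard-coding it
NHOSTS=$(wc -l < mpd.hosts)
echo "mpdboot -n $NHOSTS --rsh=ssh -f ./mpd.hosts"
```

Here the mpdboot line is only echoed for illustration; on the cluster you would run it directly.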
2.4. Tracing information
Run an application for the trace analyzer:
# export VT_LOGFILE_FORMAT=STF
# export VT_PCTRACE=5
# export VT_PROCESS="0:N ON"
# export VT_LOGFILE_PREFIX=IMB-MPI1_inst
# rm -fr $VT_LOGFILE_PREFIX; mkdir $VT_LOGFILE_PREFIX
# mpiexec -n 2 itcpin --run -- ./IMB-MPI1 -npmin 2 PingPong
# traceanalyzer ./IMB-MPI1_inst/IMB-MPI1.stf &
After you are done running and analysing your application, shut down the MPD daemons with the command:
# mpdallexit
2.5. Fine tuning
Invoke mpitune:
# mpitune -f machines.LINUX -o ./ --app mpiexec -genv MPIEXEC_DEBUG 1 -n 2 ./IMB-MPI1 -np 2 PingPong
# mpiexec -tune app.conf -n 4 ./IMB-MPI1 -np 2 PingPong
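The second command assumes app.conf is the tuned configuration written by the preceding mpitune run into the output directory given with -o. A small guard sketch before reusing it:

```shell
# Guard: only suggest -tune if the tuned configuration actually exists
if [ -f app.conf ]; then
  msg="run: mpiexec -tune app.conf -n 4 ./IMB-MPI1 -np 2 PingPong"
else
  msg="app.conf not found - run mpitune first"
fi
echo "$msg"
```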
2.6. Debugging
Start execution of the MPI program under the Intel debugger with 8 MPI processes:
# mpiexec -idb -genv MPIEXEC_DEBUG 1 -n 8 ./IMB-MPI1 -np 2 PingPong