1. Overview

A general introduction to the initial GPE project can be found here.

2. GPU Hardware

The current GPU system at DESY (Zeuthen) consists of a single server with dual nVidia Tesla C2050 GPU cards. It is hosted on gpu1 and is also used as a testbed for new developments in GPU-to-GPU networking with custom-designed interconnects and InfiniBand.

3. Environment

Currently the newest version of the CUDA SDK, 4.0, is installed on gpu1, together with the matching device drivers and libraries. The Software Development Kit provides the following:

  • CUDA C/C++ Compiler
  • GPU Debugging & Profiling Tools
  • GPU-Accelerated Math Libraries
  • GPU-Accelerated Performance Primitives (Thrust library)
  • GPUDirect (under test)

4. GPU Benchmarks

For evaluation and development, a set of common benchmarks as well as specially designed micro-benchmarks was run on the gpu1 system.

4.1. Low-level benchmarks

Custom-designed benchmarks use OpenMPI and OpenMP for task parallelization and allocation on the host CPUs and evaluate the following performance metrics:

  1. Memory bandwidth for unpinned memory and synchronous / asynchronous transfers
    • Here the bandwidth of host-to-device memory copy operations is measured. The host memory areas are allocated with ordinary malloc() calls and are not pinned to physical page addresses, so they are subject to page swapping. Memory is pinned, on the other hand, when it is allocated via the cudaHostAlloc() call. To differentiate between synchronous and asynchronous transfers we used cudaMemcpy() and cudaMemcpyAsync(). The effects of both transfer type and memory pinning are described here; a minimal sketch of the two allocation and transfer variants follows this item.

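The sketch below illustrates the two variants compared above. It is not the benchmark code itself: the buffer size, the iteration-free single copy, and the use of CUDA events for timing are assumptions made for brevity.

  #include <stdio.h>
  #include <stdlib.h>
  #include <cuda_runtime.h>

  /* Hedged sketch: compares a pageable (malloc) with a pinned (cudaHostAlloc)
   * host buffer for host-to-device copies; buffer size and timing method
   * are illustrative assumptions, not the benchmark's actual values. */
  int main(void)
  {
      const size_t bytes = 64 << 20;                /* 64 MiB test buffer */
      float ms;
      void *pageable = malloc(bytes);               /* unpinned host memory */
      void *pinned, *dev;
      cudaHostAlloc(&pinned, bytes, cudaHostAllocDefault);  /* pinned */
      cudaMalloc(&dev, bytes);

      cudaEvent_t t0, t1;
      cudaEventCreate(&t0); cudaEventCreate(&t1);

      /* synchronous copy from pageable memory */
      cudaEventRecord(t0, 0);
      cudaMemcpy(dev, pageable, bytes, cudaMemcpyHostToDevice);
      cudaEventRecord(t1, 0); cudaEventSynchronize(t1);
      cudaEventElapsedTime(&ms, t0, t1);
      printf("pageable, sync : %.2f GB/s\n", bytes / ms / 1e6);

      /* asynchronous copy from pinned memory */
      cudaEventRecord(t0, 0);
      cudaMemcpyAsync(dev, pinned, bytes, cudaMemcpyHostToDevice, 0);
      cudaEventRecord(t1, 0); cudaEventSynchronize(t1);
      cudaEventElapsedTime(&ms, t0, t1);
      printf("pinned, async  : %.2f GB/s\n", bytes / ms / 1e6);

      cudaFree(dev); cudaFreeHost(pinned); free(pageable);
      return 0;
  }
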
  2. Latency of host-to-GPU memory copy operations for multiple GPUs
    • Here the latency of host-to-device memory copy operations is measured. This time, however, the host memory regions are pinned to physical addresses and only asynchronous memory transfers are used. The difference in the setup is that both GPUs run the benchmark simultaneously, and we differentiate between two configurations: parallel (the host process running on CPU socket 0 uses GPU 0, and the process running on CPU socket 1 uses GPU 1) and cross (the process on CPU socket 0 uses GPU 1 and vice versa). A minimal sketch of the rank-to-GPU mapping is given after the plots below.
    1. Latency of host-to-GPU memory copy operations for parallel configuration

Measurement with RDTSC for process with rank 0 and two GPUs working "parallel"

    2. Latency of host-to-GPU memory copy operations for cross configuration

Measurement with RDTSC for process with rank 0 and two GPUs working "crossed"

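The following sketch shows one way the rank-to-GPU mapping for the parallel and cross configurations can be expressed, with the copy timed via RDTSC. It assumes exactly two MPI ranks, a small pinned buffer, and a command-line switch to select the cross configuration; the actual benchmark may differ in these details.

  #include <stdio.h>
  #include <stdint.h>
  #include <mpi.h>
  #include <cuda_runtime.h>

  /* Hedged sketch of the rank-to-GPU mapping only; buffer size, the cross
   * switch and the rdtsc-based timing are illustrative assumptions. */
  static inline uint64_t rdtsc(void)
  {
      uint32_t lo, hi;
      __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
      return ((uint64_t)hi << 32) | lo;
  }

  int main(int argc, char **argv)
  {
      int rank, cross = (argc > 1);        /* any argument selects "cross" */
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* parallel: rank 0 -> GPU 0, rank 1 -> GPU 1; cross: swapped */
      int gpu = cross ? 1 - rank : rank;
      cudaSetDevice(gpu);

      const size_t bytes = 4096;
      void *host, *dev;
      cudaHostAlloc(&host, bytes, cudaHostAllocDefault);   /* pinned */
      cudaMalloc(&dev, bytes);

      uint64_t t0 = rdtsc();
      cudaMemcpyAsync(dev, host, bytes, cudaMemcpyHostToDevice, 0);
      cudaStreamSynchronize(0);
      uint64_t t1 = rdtsc();
      printf("rank %d -> GPU %d: %llu cycles\n", rank, gpu,
             (unsigned long long)(t1 - t0));

      cudaFree(dev); cudaFreeHost(host);
      MPI_Finalize();
      return 0;
  }
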
  3. Bandwidth and latency of GPU-to-GPU communication
    1. mpirun options and rankfiles
    2. MPI send/recv vs. CUDA 4.0 peer-to-peer communication primitives (see the sketches after this list)
  4. GPU-to-InfiniBand hardware datapath propagation delay
    1. perftest measurements
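
For the mpirun side, a rankfile can pin each MPI rank to one CPU socket so that the parallel and cross configurations are reproducible. A minimal example follows; the hostname gpu1 is taken from this page, while the slot (socket:core) numbers and the launch line are assumptions:

  rank 0=gpu1 slot=0:0
  rank 1=gpu1 slot=1:0

  mpirun -np 2 -rf myrankfile ./bench

For the CUDA 4.0 peer-to-peer path, the sketch below shows the basic primitives (checking and enabling peer access, then cudaMemcpyPeer). Transfer size and error handling are simplified assumptions, not the benchmark's actual setup.

  #include <stdio.h>
  #include <cuda_runtime.h>

  /* Hedged sketch of a CUDA 4.0 peer-to-peer copy between GPU 0 and GPU 1;
   * the transfer size is an arbitrary assumption. */
  int main(void)
  {
      const size_t bytes = 32 << 20;
      int can01 = 0, can10 = 0;
      cudaDeviceCanAccessPeer(&can01, 0, 1);
      cudaDeviceCanAccessPeer(&can10, 1, 0);
      if (!can01 || !can10) {
          fprintf(stderr, "peer access between GPU 0 and GPU 1 not available\n");
          return 1;
      }

      void *buf0, *buf1;
      cudaSetDevice(0);
      cudaMalloc(&buf0, bytes);
      cudaDeviceEnablePeerAccess(1, 0);    /* allow GPU 0 to reach GPU 1 */
      cudaSetDevice(1);
      cudaMalloc(&buf1, bytes);
      cudaDeviceEnablePeerAccess(0, 0);    /* and GPU 1 to reach GPU 0 */

      /* direct device-to-device copy without staging through host memory */
      cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
      cudaDeviceSynchronize();

      printf("copied %zu bytes GPU0 -> GPU1 via peer-to-peer\n", bytes);
      cudaSetDevice(0); cudaFree(buf0);
      cudaSetDevice(1); cudaFree(buf1);
      return 0;
  }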

5. GPU Applications

/!\ Recent applications utilizing the gpu1 system at DESY (Zeuthen) are Chroma-based LQCD numerical simulations and applications from the field of Astroparticle Physics.

5.1. Application-level benchmarks

For ensuring consistency of performance with real-world applications, the Scalable HeterOgeneous Computing (SHOC) benchmark suite was also run on the gpu1 system. The suite provides benchmark results not only for CUDA implementations of key algorithms but also for corresponding implementations in the more general OpenCL parallel programming framework.

5.2. Debugger and Profiler Tools

  • Compute Visual Profiler
  • CUDA Debugger

6. Monitoring

/!\

  • gpu1 in Nagios
    • GPU core temperatures
    • host and device free and/or used memory
    • GPU core frequency
    • GPU core utilization/load
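
Until the Nagios integration is settled, the sketch below indicates, as an assumption rather than the actual check, how the metrics listed above can be read for the first GPU via the NVML library shipped with the driver (link with -lnvidia-ml). Thresholds and Nagios-specific output formatting are omitted.

  #include <stdio.h>
  #include <nvml.h>

  /* Hedged sketch of an NVML query for the metrics listed above; it reads
   * only GPU 0 and performs no Nagios-specific formatting or alerting. */
  int main(void)
  {
      nvmlDevice_t dev;
      unsigned int temp, clock;
      nvmlMemory_t mem;
      nvmlUtilization_t util;

      if (nvmlInit() != NVML_SUCCESS) {
          fprintf(stderr, "NVML initialization failed\n");
          return 2;
      }
      nvmlDeviceGetHandleByIndex(0, &dev);

      nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
      nvmlDeviceGetMemoryInfo(dev, &mem);
      nvmlDeviceGetClockInfo(dev, NVML_CLOCK_SM, &clock);
      nvmlDeviceGetUtilizationRates(dev, &util);

      printf("temperature : %u C\n", temp);
      printf("memory used : %llu / %llu MiB\n",
             (unsigned long long)(mem.used >> 20),
             (unsigned long long)(mem.total >> 20));
      printf("SM clock    : %u MHz\n", clock);
      printf("utilization : %u %%\n", util.gpu);

      nvmlShutdown();
      return 0;
  }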


Sections marked with /!\ need further discussion
