GPU Acceleration Tutorial

John Ashley
Senior IBM Software Developer Relations Manager
NVIDIA Corporation

 

Part 1: Introduction to GPU Acceleration

This session will assume no prior knowledge of GPU-accelerated computing. By the end of the session, attendees will have learned what GPU acceleration is, how and why it works, and when it may (and may not) be useful. The NVIDIA roadmap will be covered, especially as it pertains to IBM Power Systems hosts. A minimal code sketch of the basic offload model follows the lab link below.

[download slides]

Supplemental materials: Companion Self-Paced Lab

  • Introduction to GPU Computing
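
As a taste of what the companion lab covers, here is a minimal sketch of the basic offload model in CUDA C/C++: the host copies inputs to the GPU, launches a kernel that runs one lightweight thread per array element, and copies the result back. The array size, block size, and fill values are arbitrary choices for illustration, not values taken from the session.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each thread computes one element of c = a + b.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                    // 1M elements (illustrative size)
        size_t bytes = n * sizeof(float);

        // Host buffers
        float *h_a = (float *)malloc(bytes), *h_b = (float *)malloc(bytes), *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Device buffers
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);

        // Copy inputs to the GPU, launch enough threads to cover n, copy the result back.
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", h_c[0]);            // expect 3.0
        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

Compiled with nvcc, the loop body that a CPU would execute a million times sequentially is executed here by roughly a million GPU threads in parallel, which is the essence of the acceleration the session describes.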

 

Part 2: GPU Acceleration of Scientific Computing

This session will cover the four main methods of accelerating applications with GPUs: using already-accelerated applications, using GPU-accelerated libraries, using OpenACC directives, and programming directly in CUDA-extended C/C++, Thrust, and other GPU-enabled languages. A brief recap of GPU-accelerated computing will be provided before the programming discussion. We will also discuss the potential impact of NVLink and stacked memory on expected performance for future architectures. A short sketch using Thrust follows the lab list below.

[download slides]

Supplemental materials: Companion Self-Paced Labs

  • Accelerating Applications with GPU-Accelerated Libraries in C/C++ or Fortran or Python
  • OpenACC – 2X in 4 Steps in C/C++ or Fortran
  • Accelerating Applications with CUDA C/C++ or Fortran or Python
  • Using Thrust to Accelerate C/C++
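
To make the programming-based methods concrete, below is a small sketch of SAXPY (y = a*x + y) written with Thrust, one of the approaches named above: a device_vector manages GPU memory and thrust::transform applies a functor across the data without a hand-written kernel. The vector length and the constant a = 2.0f are arbitrary illustration values, not taken from the session.

    #include <cstdio>
    #include <thrust/device_vector.h>
    #include <thrust/transform.h>

    // Functor applied element-wise on the GPU by thrust::transform: returns a*x + y.
    struct saxpy_functor
    {
        const float a;
        saxpy_functor(float a_) : a(a_) {}
        __host__ __device__ float operator()(const float &x, const float &y) const
        {
            return a * x + y;
        }
    };

    int main()
    {
        const int n = 1 << 20;                 // illustrative vector length

        // device_vector allocates GPU memory; element access copies to/from the host as needed.
        thrust::device_vector<float> x(n, 1.0f);
        thrust::device_vector<float> y(n, 2.0f);

        // y = 2*x + y, computed entirely on the GPU without writing a kernel by hand.
        thrust::transform(x.begin(), x.end(), y.begin(), y.begin(), saxpy_functor(2.0f));

        printf("y[0] = %f\n", (float)y[0]);    // expect 4.0
        return 0;
    }

The same operation could equally be written as a cuBLAS saxpy call or as an OpenACC-annotated loop, which correspond to the library and directive methods the session also covers.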

 

Part 3: GPU Accelerated Application Performance Tuning

This session will cover a selection of performance-tuning techniques for GPU programming. While aimed at intermediate to advanced GPU programmers, the material should be accessible to anyone with a basic understanding of programming. A small sketch of one such technique, a memory-access optimization, follows the lab list below.

[download slides]

Supplemental materials: Companion Self-Paced Labs

  • GPU Memory Optimizations in C/C++ or Fortran
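
As an example of the kind of memory optimization the companion lab addresses, the sketch below contrasts a naive matrix transpose, whose global-memory writes are strided, with a shared-memory tiled version whose reads and writes are both coalesced. The matrix dimensions and tile size are illustrative assumptions, not values from the session slides.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    #define TILE_DIM 32

    // Naive transpose: reads from idata are coalesced, but writes to odata are
    // strided by `height`, which wastes global-memory bandwidth.
    __global__ void transposeNaive(float *odata, const float *idata, int width, int height)
    {
        int x = blockIdx.x * TILE_DIM + threadIdx.x;   // column in idata
        int y = blockIdx.y * TILE_DIM + threadIdx.y;   // row in idata
        if (x < width && y < height)
            odata[x * height + y] = idata[y * width + x];
    }

    // Tiled transpose: each block stages a tile in shared memory so that both
    // the reads and the writes to global memory are coalesced. The +1 padding
    // avoids shared-memory bank conflicts when the tile is read column-wise.
    __global__ void transposeTiled(float *odata, const float *idata, int width, int height)
    {
        __shared__ float tile[TILE_DIM][TILE_DIM + 1];

        int x = blockIdx.x * TILE_DIM + threadIdx.x;
        int y = blockIdx.y * TILE_DIM + threadIdx.y;
        if (x < width && y < height)
            tile[threadIdx.y][threadIdx.x] = idata[y * width + x];

        __syncthreads();

        // Swap block indices so consecutive threads write consecutive addresses.
        x = blockIdx.y * TILE_DIM + threadIdx.x;       // column in odata
        y = blockIdx.x * TILE_DIM + threadIdx.y;       // row in odata
        if (x < height && y < width)
            odata[y * height + x] = tile[threadIdx.x][threadIdx.y];
    }

    int main()
    {
        const int width = 1024, height = 1024;         // illustrative matrix size
        size_t bytes = (size_t)width * height * sizeof(float);

        float *h_in = (float *)malloc(bytes), *h_out = (float *)malloc(bytes);
        for (int i = 0; i < width * height; ++i) h_in[i] = (float)i;

        float *d_in, *d_out;
        cudaMalloc(&d_in, bytes);
        cudaMalloc(&d_out, bytes);
        cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

        dim3 block(TILE_DIM, TILE_DIM);
        dim3 grid((width + TILE_DIM - 1) / TILE_DIM, (height + TILE_DIM - 1) / TILE_DIM);

        // Profile both kernels (e.g. with the CUDA profiler) to see the bandwidth difference.
        transposeNaive<<<grid, block>>>(d_out, d_in, width, height);
        transposeTiled<<<grid, block>>>(d_out, d_in, width, height);

        cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
        printf("out[1] = %f (expect %f)\n", h_out[1], h_in[width]);  // element (0,1) of the transpose

        cudaFree(d_in); cudaFree(d_out);
        free(h_in); free(h_out);
        return 0;
    }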