Full-day tutorial at the International Supercomputing Conference 2014 (ISC 14), June 22-26, 2014, Leipzig, Germany:
Node-Level Performance Engineering
Slides: isc14_tutorial_NLPE-slides-final.pdf
Source code for the demos: Code-ISC14-Tutorial01.zip
Authors/Presenters
Georg Hager (1), Jan Treibig (1), and Gerhard Wellein (2)
(1) Erlangen Regional Computing Center
(2) Department of Computer Science
University of Erlangen-Nuremberg, Germany
{georg.hager,jan.treibig,gerhard.wellein}@fau.de
Abstract
This tutorial covers performance engineering approaches on the compute node level. “Performance engineering” is more than employing tools to identify hotspots and blindly applying textbook optimizations. It is about developing a thorough understanding of the interactions between software and hardware. This process starts at the core, socket, and node level, where the code that does the actual “work” is executed. Once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of optimizations can often be predicted.
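To illustrate what “architectural requirements” means in practice, here is a minimal, hypothetical sketch (not taken from the tutorial material) of a streaming kernel, the Schoenauer vector triad, whose data-transfer demand can be written down by inspection: per iteration it performs 2 flops and moves at least 32 bytes (3 loads, 1 store), so for large arrays its performance is limited by memory bandwidth rather than by instruction throughput. Comparing the measured bandwidth against the machine's known memory bandwidth is the kind of model-versus-measurement comparison the tutorial advocates. Array sizes and iteration counts below are arbitrary choices; compile with OpenMP enabled (e.g., -fopenmp) for omp_get_wtime().

/* Schoenauer vector triad a[i] = b[i] + c[i] * d[i]: 2 flops and at
 * least 32 bytes of traffic per iteration (ignoring write-allocate),
 * i.e. a code balance of ~16 bytes/flop -- far more than current CPUs
 * can sustain from main memory, so the loop is bandwidth-bound. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (20L * 1000 * 1000)   /* large enough to exceed all caches */
#define NITER 20

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    double *d = malloc(N * sizeof(double));

    for (long i = 0; i < N; ++i) {
        a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; d[i] = 0.5;
    }

    double t0 = omp_get_wtime();
    for (int it = 0; it < NITER; ++it) {
        for (long i = 0; i < N; ++i)
            a[i] = b[i] + c[i] * d[i];
        if (a[N / 2] < 0.0)          /* never true; keeps the compiler */
            printf("dummy\n");       /* from eliding the loop          */
    }
    double t1 = omp_get_wtime();

    /* 3 loads + 1 store per iteration */
    double bytes = (double)NITER * N * 4 * sizeof(double);
    printf("effective bandwidth: %.2f GB/s\n", bytes / (t1 - t0) / 1e9);

    free(a); free(b); free(c); free(d);
    return 0;
}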
We start by giving an overview of modern processor and node architectures, including accelerators such as GPGPUs and Xeon Phi. Typical bottlenecks such as instruction throughput and data transfers are identified using kernel benchmarks and put into the architectural context. The impact of optimizations like SIMD vectorization, ccNUMA placement, and cache blocking is shown, and different aspects of a “holistic” node-level performance engineering strategy are demonstrated. Using the LIKWID multicore tools, we show the importance of topology awareness, affinity enforcement, and hardware metrics. The latter are used to support the performance engineering process by supplying information that can validate or falsify performance models. Case studies on sparse matrix-vector multiplication and a conjugate gradient solver conclude the tutorial.
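The following sketch (illustrative only, not the tutorial's demo code) combines two of the topics named above: a CRS/CSR sparse matrix-vector multiplication kernel and ccNUMA-aware “first touch” placement, where each memory page ends up in the NUMA domain of the thread that first writes it. The struct and function names (crs_matrix, spmv_crs, alloc_vector) are invented for this example.

/* y = A*x with A stored in CRS/CSR format. The OpenMP-parallel
 * first-touch initialization in alloc_vector() places pages in the
 * ccNUMA domain of the thread that later works on them, provided the
 * same static schedule is used in initialization and compute loops. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    long nrows;
    long *rowptr;   /* nrows+1 entries          */
    long *colidx;   /* one entry per nonzero    */
    double *val;    /* one entry per nonzero    */
} crs_matrix;

void spmv_crs(const crs_matrix *A, const double *x, double *y)
{
    #pragma omp parallel for schedule(static)
    for (long r = 0; r < A->nrows; ++r) {
        double tmp = 0.0;
        for (long j = A->rowptr[r]; j < A->rowptr[r + 1]; ++j)
            tmp += A->val[j] * x[A->colidx[j]];
        y[r] = tmp;
    }
}

/* ccNUMA-aware allocation via parallel first touch. */
double *alloc_vector(long n)
{
    double *v = malloc(n * sizeof(double));
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < n; ++i)
        v[i] = 0.0;
    return v;
}

int main(void)
{
    /* Tiny example: 4x4 tridiagonal matrix (2 on the diagonal, -1 on
     * the off-diagonals) applied to x = (1,1,1,1). */
    long rowptr[] = {0, 2, 5, 8, 10};
    long colidx[] = {0, 1, 0, 1, 2, 1, 2, 3, 2, 3};
    double val[]  = {2, -1, -1, 2, -1, -1, 2, -1, -1, 2};
    crs_matrix A  = {4, rowptr, colidx, val};

    double x[] = {1, 1, 1, 1};
    double *y  = alloc_vector(A.nrows);
    spmv_crs(&A, x, y);

    for (long i = 0; i < A.nrows; ++i)
        printf("y[%ld] = %g\n", i, y[i]);
    free(y);
    return 0;
}

First-touch placement only pays off if threads stay where their data is; in practice the binary would be run under an affinity tool, e.g. likwid-pin from the LIKWID suite mentioned above, to pin threads to cores.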