Georg Hager's Blog

Random thoughts on High Performance Computing

Content

SIAM Parallel Processing 24 (PP24) Minisymposium on “Advancements in Sparse Linear Algebra: Hardware-Aware Algorithms and Optimization Techniques”

Together with Christie Alappat and Gerhard Wellein, I am organizing a two-part minisymposium at SIAM Parallel Processing 2024 in Baltimore, MD, on March 7, 2024 (the full conference program is available online). It is titled “Advancements in Sparse Linear Algebra: Hardware-Aware Algorithms and Optimization Techniques.” This is the abstract:

Over the last decade, the landscape of computer architecture has undergone tremendous transformations. At the same time, the field of sparse linear algebra (LA) has experienced a resurgence in interest, witnessing the emergence of novel algorithms and the revival of traditional ones. The irregular access patterns inherent in sparse LA often pose significant challenges for efficient execution on highly parallel modern computing devices. This minisymposium delves into diverse algorithmic and programming techniques that address these challenges. We will explore various methods for enhancing the computational intensity and node-level performance of sparse algorithms, along with approaches to improve their scalability. The topics covered encompass mixed precision computation, batched solvers, methods for reducing or hiding communication, cache blocking, and the efficient utilization of parallel paradigms. Naturally, the implementation of some of these techniques is not straightforward and may necessitate algorithmic reformulation. The discussions will shed light on this aspect while also examining their applications, benefits, and limitations. The primary objective is to present an overview of cutting-edge sparse LA techniques that effectively leverage hardware capabilities.

Here’s an overview of the agenda:

Part 1 – MS43 (March 7, 11:00 am – 1:00 pm EST)

11:00-11:25 Accelerating Sparse Solvers with Cache-Optimized Matrix Power Kernels
Christie Louis Alappat, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany

11:30-11:55 Efficient Schwarz Preconditioning Techniques on Current Hardware Using FROSch
Alexander Heinlein, Delft University of Technology, Netherlands; Sivasankaran Rajamanickam and Ichitaro Yamazaki, Sandia National Laboratories, U.S.

12:00-12:25 Sparse Algorithms for Large-Scale Bayesian Inference Problems
Lisa Gaedke-Merzhäuser and Olaf Schenk, Università della Svizzera italiana, Switzerland

12:30-12:55 Communication-Reduced Sparse-Dense Matrix Multiplication with Adaptive Parallelization
Hua Huang and Edmond Chow, Georgia Institute of Technology, U.S.

Part 2 – MS54 (March 7, 3:45 pm – 5:45 pm EST)

3:45-4:10 Residual Inverse Formulation of the Feast Eigenvalue Algorithm Using Mixed-Precision and Inexact System Solves
Ivan Williams and Eric Polizzi, University of Massachusetts, Amherst, U.S.

4:15-4:40 How Mixed Precision Can Accelerate Sparse Solvers
Hartwig Anzt, University of Tennessee, U.S.; Terry Cojean, Pratik Nayak, Thomas Gruetzmacher, Yu-Hsiang Mike Tsai, Marcel Koch, Tobias Ribizel, Fritz Goebel, and Gregor Olenik, Karlsruhe Institute of Technology, Germany

4:45-5:10 Algebraic Programming for High Performance Auto-Parallelised Solvers
Albert Jan N. Yzelman, Denis Jelovina, Aristeidis Mastoras, Alberto Scolari, and Daniele Giuseppe Spampinato, Huawei Technologies Switzerland AG, Switzerland

5:15-5:40 Pipelined Sparse Solvers: Can More Reliable Computations Help Us to Converge Faster?
Roman Iakymchuk, Umeå University and Uppsala University, Sweden

 

Node-Level Performance Engineering tutorial to be featured again at SC23

Our popular “Node-Level Performance Engineering” full-day tutorial has been accepted again (now the twelfth time in a row!) for presentation at SC23, the International Conference for High Performance Computing, Networking, Storage and Analysis. Together with Thomas Gruber and Gerhard Wellein, I will teach the basics of node-level computer architecture, the LIKWID performance tools suite, analytic performance modeling (via the Roofline model), and model-guided optimization. Find the details in the official SC23 agenda.
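For readers new to it: in its simplest form, the Roofline model bounds the attainable performance P of a loop by the peak performance and by the product of computational intensity and memory bandwidth:

P=\min\left(P_\mathrm{peak},\,I\cdot b_S\right)

Here, P_peak is the peak performance of the chip, b_S its main memory bandwidth, and I the computational intensity of the code in flop/byte. The tutorial covers how to refine this naive form and use it to guide optimizations.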

Get the gist of it in our flashy promo video:

PERMAVOST Workshop submission deadline approaching

PERMAVOST 2023, the 3rd Workshop on Performance EngineeRing, Modelling, Analysis, and VisualizatiOn STrategy, is calling for papers. The workshop will be held in conjunction with ACM HPDC 2023 in Orlando, FL.

Modern software engineering is getting increasingly complicated. Effective monitoring and analysis of High-Performance Computing (HPC) applications and infrastructure are critical for ongoing improvement, design, and maintenance. The development and maintenance of these applications extend beyond the realm of computer science to encompass a diverse range of experts in mathematics, science, and other engineering disciplines. Many developers from these disciplines rely increasingly on the tools created by computer scientists to analyze and optimize their code. Thus, there is a pressing need for a forum that brings together a diverse group of experts from these different communities to share their experiences, collaborate on solving challenges, and work towards advancing the field of HPC performance analysis and optimization.

  • Submission deadline: April 1, 2023, 11:59 pm Anywhere on Earth (AoE)
  • Notification: April 20, 2023
  • Camera ready deadline: May 12, 2023
  • Workshop: June 20, 2023


New tutorial on “Core-Level Performance Engineering” accepted for ICPE 2023

Our brand-new tutorial “Core-Level Performance Engineering” has been accepted as a full-day tutorial at ICPE 2023, the 14th ACM/SPEC International Conference on Performance Engineering. This tutorial concentrates on the in-core aspects of performance modeling and analysis on CPUs. We use Matt Godbolt’s Compiler Explorer and our Open-Source Architecture Code Analyzer (OSACA), which is now integrated with the Compiler Explorer, to teach the basics of code execution including pipelining, superscalarity, SIMD, intra-iteration and loop-carried dependencies, and more. Intel/AMD x86 and ARM Neon/SVE assembly code is introduced, and participants can get their hands dirty exploring the depths of machine code execution using only a web browser! Lead OSACA developer Jan Laukemann did most of the work for this exciting new event. Find the details at: https://icpe2023.spec.org/tutorials/tutorial3/.
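To give you a taste of the subject matter, here is a toy example of my own (not taken from the tutorial material): the first loop carries a dependency on s across iterations, so each addition must wait for the previous one, while the second loop has independent iterations that the core can pipeline and the compiler can vectorize. A tool like OSACA makes this difference visible in its throughput and critical-path predictions.

double reduce(const double *a, int n) {
  double s = 0.0;
  for (int i = 0; i < n; i++)
    s += a[i];     /* loop-carried dependency: the adds are serialized */
  return s;
}

void scale(double *a, int n) {
  for (int i = 0; i < n; i++)
    a[i] *= 1.5;   /* no dependency across iterations: SIMD-friendly */
}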

All slides and some of the exercises are available at: http://tiny.cc/CLPE.

ISC 2022 has started

ISC High Performance 2022 is finally here! After two years of online ISC, people can finally get together and talk face to face (albeit with one or two layers of mask tissue in between). The first “full conference” day (Sunday is traditionally reserved for tutorials) marks some notable events this year.

After a warm welcome and introduction by ISC22 Program Chair Keren Bergman, Rev Lebaredian (NVIDIA) and Michele Melchiorre (BMW) gave an enthralling keynote about how digital twins are used in industry and how they may become the one thing you never knew about but desperately need. A digital twin is like a computational model of reality – be it a manufacturing plant, a building, or even a whole city. Such twins are used a lot in design phases, but they are rarely kept up to date over the whole life cycle of the structures they describe. With the advent of powerful AI methods, this may change, as AI could close the gap between model and reality. Rev even went so far as to speculate that, given enough computing power, one could use the twin to move back and forth in time for improved insight, forecasting, or decision making.

An emotional moment (well, for a technical event at least) came with the special session for celebrating the life and work of Jack Dongarra, recipient of the 2021 ACM Turing Award. Horst Simon, Tony Hey, David E. Keyes, and Satoshi Matsuoka recalled Jack’s many achievements, the most prominent of which are the BLAS and LAPACK libraries (and their more current descendants), the initial seedling of the MPI standard, sustainable and scalable HPC benchmarks, and numerous contributions to mixed-precision linear algebra methods.

Next came Erich Strohmaier with his long-awaited presentation of the June 2022 Top500 list, in which he analyzed current trends in supercomputing using the historical data available since 1993. One surprising fact about the current list is that turnover has never been so low: only 39 systems fell off the list, and the entry threshold of just above 1.6 Pflop/s hasn’t even changed. On the positive side, Fritz and Alex, the new NHR@FAU systems, are officially on it: Fritz is at #323, despite some network hardware still missing, and Alex is at #184, although the more than 300 NVIDIA A40 GPUs could not even be used for running LINPACK. In addition, Alex hit #16 in the Green500, in which energy efficiency in Gflop/W determines the ranking. With that, Alex is the most energy-efficient system in Germany.

Oh yes, and there’s a new #1: “Frontier” at Oak Ridge National Lab broke the Exaflop barrier. We have now officially entered the age of exascale, at least if we take the debatable LINPACK metric as the yardstick. Erich pointed out that Frontier alone accounts for 25% of the aggregate Top500 performance; grain-of-salty extrapolations indicate that this ratio may go up to 50% in 2030, but who knows which marvels await behind the closed doors of the Nvidia, Intel, or AMD labs.

 

Gprofng is the next-generation GNU profiler

This week, Ruud van der Pas of OpenMP fame gave a talk in our NHR PerfLab seminar on gprofng, the next-generation GNU profiling tool. If you ever felt that gprof was sorely lacking features like threading support, sampling, and drilling down to source, gprofng comes to the rescue. Now you can profile code without even recompiling it, which comes in handy (not only) if you don’t have the source. It has recently been accepted into the GNU binutils package and will inevitably find its way into standard Linux distros. If you don’t want to wait that long, clone the development repo with

git clone git://sourceware.org/git/binutils-gdb.git

and compile it yourself. Here’s the recording of Ruud’s talk, where he explains the basic functions of gprofng and also takes a peek at upcoming features like HTML output and hardware performance counter support:
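If you want to give it a spin right away, the basic workflow is to record an experiment and then display it. The two lines below reflect my understanding of the current command set (treat the exact experiment name and flags as an assumption and consult gprofng --help):

gprofng collect app ./myapp                 # run the program and record an experiment (e.g., test.1.er)
gprofng display text -functions test.1.er   # print a function-level profile of that experiment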

SIAM Parallel Processing 2022 Minisymposium on “Advances in Performance Modeling of Parallel Code”

Together with Alexandru Calotoiu (ETH Zurich), I am organizing a two-part minisymposium at SIAM Parallel Processing 2022. It will take place on February 25, 2022, and is titled “Advances in Performance Modeling of Parallel Code.” This is the abstract:

Performance modeling is an indispensable tool for the assessment, analysis, prediction, and optimization of parallel code in scientific computing and computational science. Modeling approaches can take a variety of forms, from purely analytic, first-principle models to curve fitting, machine learning, and AI-based solutions. The goals of modeling are just as diverse: identification of bottlenecks or scaling problems, extrapolation, architectural exploration, and even the prediction of power dissipation and energy consumption can all be supported by modeling procedures. This minisymposium tries to provide an overview of the current state of the art in performance, or more generally, resource modeling of parallel code. The hardware focus will be very broad, from the node to the massively parallel level, including standard multicore systems, GPUs, and reconfigurable hardware. Contributions will cover fundamental research as well as tools development and case studies. After the minisymposium, the organizers plan to issue an open call for a journal special issue.

An impressive lineup of international speakers has been brought together:

Part 1 (February 25, 11:10 am – 12:50 pm PST)

11:10-11:30 Computational Waves in Parallel Programs and Their Impact on Performance Modeling

Ayesha Afzal, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany; Georg Hager, Erlangen National High Performance Computing Center, Germany

11:35-11:55 The Price Performance of Performance Models

Alexandru Calotoiu, ETH Zurich, Switzerland; Alexander Geiss, Benedikt Naumann, Marcus Ritter, and Felix Wolf, Technische Universität Darmstadt, Germany

12:00-12:20 perf-taint: Extracting Clean Performance Models from Tainted Programs

Marcin Copik, ETH Zurich, Switzerland

12:25-12:45 Extra-P Meets Hatchet: Towards Modeling in Performance Analytics

Sergei Shudler, Lawrence Livermore National Laboratory, U.S.

Part 2 (February 25, 3:35 pm – 5:15 pm PST)

3:35-3:55 Performance Modeling of Graph Processing Workloads

Ana Lucia Varbanescu and Merijn Verstraaten, University of Amsterdam, Netherlands

4:00-4:20 Machine Learning–enabled Scalable Performance Prediction of Scientific Codes

Stephan Eidenbenz and Nandakishore Santhi, Los Alamos National Laboratory, U.S.

4:25-4:45 Automatic Application Performance Data Collection with Caliper and Adiak

David Boehme, Lawrence Livermore National Laboratory, U.S.

 

SIAM PP22 will be a hybrid conference, and many details are still to be sorted out. The full conference program can be viewed at: https://meetings.siam.org/program.cfm?CONFCODE=PP22. Stay tuned for news.

LIKWID 5.2.1 is out!

LIKWID 5.2.1 is out! This bugfix release addresses a lot of small and not-so-small issues:

  • Support for Intel Rocket Lake and AMD Zen3 variant (Family 19, Model 0x50)
  • Fix for perf_event multiplexing (important!)
  • Fix for a potential deadlock in MarkerAPI (thx @jenny-cheung)
  • Build and runtime fixes for Nvidia GPU backend, updates for CUDA test codes
  • likwid-bench “peakflops” kernel for ARMv8
  • Updates for AMD Zen1/2/3 event lists and groups
  • Support spaces in MarkerAPI region tags (thx @jrmadsen); see the sketch below this list
  • Use ‘online’ cpulist instead of ‘present’
  • Check PID if given through --perfpid
  • Intel Icelake: OFFCORE_RESPONSE events
  • likwid-mpirun: Set MPI type for SLURM automatically
  • likwid-mpirun: Fix skip mask for OpenMPI
  • Fix for triad_sve* benchmarks
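
Since two of these fixes touch the MarkerAPI, here is a minimal sketch of how it is used (serial code; compile with -DLIKWID_PERFMON, link with -llikwid, and run under likwid-perfctr -m):

#include <likwid-marker.h>  /* marker macros; in older releases they live in likwid.h */

int main(void) {
  LIKWID_MARKER_INIT;
  LIKWID_MARKER_START("copy loop");  /* tags with spaces work as of 5.2.1 */
  /* ... code to be measured ... */
  LIKWID_MARKER_STOP("copy loop");
  LIKWID_MARKER_CLOSE;
  return 0;
}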

You can download the new version from our FTP server or from GitHub.

A riddle in D/A conversion – and its solution

This post is not about high performance computing, for a change; it does, however, have a distinct performance-related vibe, so I think it qualifies for this blog. Besides, I had so much fun solving this riddle that I just cannot keep it to myself, so here goes.

D/A conversion with Arduino and ZN426

I was always interested in digital signal processing, and recently I decided to look into an old drawer and found a couple of ZN426E8 8-bit D/A converters that I had bought in the 80s. I asked myself whether a standard Arduino Uno with its measly 16 MHz ATmega328 processor would be able to muster enough performance to produce waveforms in the audio range (up to at least a couple of kHz) by copying bytes from a pre-computed array to the D/A converter. The ZN426 itself wouldn’t be a bottleneck here because it’s just an R-2R ladder with a worst-case settling time of 2 µs, which leads to a Nyquist frequency of 250 kHz. Ample headroom for audio. At 16 MHz, 2 µs means 32 clock cycles, so it shouldn’t really be a problem for the Arduino either, even with compiled code. Or so I thought.


Fig. 1: The input lines of the D/A converter are numbered with bit 8 being the LSB (image cut from the Plessey data sheet for the ZN426E8)

So I hooked up the digital output lines of port D on the Uno to the input lines of the D/A converter and added the other necessary components according to the minimal example in the data sheet (basically a resistor and a capacitor needed by the internal 2.55 V reference). Unfortunately, the ZN426 data sheet uses an unusual numbering of the bits: bit 1 is the MSB, and bit 8 is the LSB. I missed that and inadvertently mirrored the bits, which led to an “interesting” waveform the first time I fired up the program. Now instead of fixing the circuit, I thought I could do that in software; just mirror the bits before writing them out to PORTD:

// output bytes in out[]
for(int i=0; i<N; i++) {
  unsigned char d=0;
  // mirror the bits in out[i], result in d
  for(int j=0; j<=7; ++j){
    unsigned char lmask = 1 << j, rmask = 128 >> j;
    if(out[i] & lmask) d |= rmask;  // copy bit j to bit 7-j
  }
  PORTD = d;  // write the mirrored byte to the D/A converter
}

I knew that this was just a quick hack and that one could do it faster, but I just wanted the waveform to look right first and care about performance later. By the way, shift and mask operations should be fast, right? As a first test I wanted to approximate a square wave using Fourier synthesis:

f(t)=\sin t +\frac{1}{3}\sin 3t+\frac{1}{5}\sin 5t+\ldots

Fig. 2: Fourier-synthesized square wave (first 20 harmonics) with wrong duty cycle
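For reference, this is roughly what the synthesis boils down to (a host-side sketch; the function name and the exact normalization are chosen for illustration, and on the Uno the out[] array is filled once at startup):

#include <math.h>

#define N  200   /* data points per period, as in the experiment */
#define NH 20    /* number of harmonics; only the odd ones contribute */

unsigned char out[N];

void synth_square(void) {
  static double f[N];
  double fmax = 0.0;
  for (int i = 0; i < N; i++) {
    double t = 2.0 * M_PI * i / N, s = 0.0;
    for (int k = 1; k <= NH; k += 2)  /* sin(kt)/k for odd k */
      s += sin(k * t) / k;
    f[i] = s;
    if (fabs(s) > fmax) fmax = fabs(s);
  }
  for (int i = 0; i < N; i++)         /* shift and scale to {0,...,255} */
    out[i] = (unsigned char)lround((f[i] / fmax + 1.0) * 127.5);
}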

The problem

Of course, the resulting function was then shifted and scaled appropriately to map to the range {0,…,255} of the D/A input. As a test, I used 200 data points per period and stopped at 20 harmonics (10 of which were nonzero). At first sight, the resulting waveform looked OK on the scope (see Fig. 2): the usual Gibbs phenomenon near the discontinuities, the correct number of oscillations. However, the duty cycle was not the expected 1:1 but more like 1.3:1. I first assumed some odd programming error, so I simplified the problem by using a single Fourier component, a simple sine wave. Figure 3 shows the result: it was obviously not a correct sine function, since the “arc” at the top is clearly blunter than the one at the bottom. So the distortion wasn’t a bug in the Fourier synthesis but was already present in the fundamental harmonic.


Fig. 3: This sine function is not quite a sine function

Bug hunting

In order to track down the bug I made some experiments. First, I compiled an adapted version of the code on my PC so I could easily inspect and plot the data in the program. It turned out that the math was correct – when plotting the out[] array, I got exactly what was expected, without distortions. Although this was reassuring, it left basically two options: either it was an “analog” problem, i.e., something was wrong with the linearity of the D/A converter or with its voltage reference, or the timing was somehow off so that data points with high voltages took longer to output.

To check the voltage reference, I tried to spot fluctuations with the scope hooked up to it, but there was nothing – the voltage was at a solid 2.5 V (ish), and there was no discernible AC component visible within the limitations of my scope (a Hameg HM 208, by the way); a noise component of 20 mVpp would have been easily in range. The next test involved the timing: I inserted a significant delay (about 200 µs) after writing the byte to PORTD. This reduced the output frequency quite a bit since the original code spat out one byte about every 20 µs. Lo and behold – the asymmetries were gone. At this point it should have dawned on me, but I still thought it was an analog problem, somehow connected to the speed at which the D/A converter settles to the correct output voltage. However, the specs say that this should take at most 2 µs, so with one new value every 20 µs I should have been fine.


Fig. 4: Correct duty cycle after rewiring the D/A input lines

The solution

In the end, out of sheer desperation I decided to take out the last bit of complexity and rewired the D/A input bits so that the byte-mirror code became obsolete. Figure 4 shows the resulting output voltage for the square wave experiment. There are two major observations here: (1) The duty cycle was now correct at 1:1, and (2) the performance of the code was greatly improved. Instead of one value every 20 µs it now wrote one value every 550 ns (about 9 clock cycles). Problem solved – but why?

The observation that the nonlinearity disappeared when I inserted a delay was the key. In the byte-mirror code above, the more bits in the input byte out[i] are set to one, the more often the if() condition is true and the or-masking of the output byte d takes place. In other words, the larger the number of one-bits, the longer it takes to put together the output byte. And since voltages near the maximum are generated by numbers close to 255 (i.e., with lots of set bits), a phase in the output waveform with a large average voltage used a longer time step than one with a low average voltage. In other words, the time step had a component that was linear in the Hamming weight (or popcount) of the binary output value.

Of course I did some checks to confirm this hypothesis. Rewriting the byte-mirror code without an if condition (instead of rewiring the data lines) fixed the problem as well:

for(int i=0; i<N; i++) {
  unsigned char d=0, lmask=1, v;
  for(unsigned char j=0; j<=7; ++j){
    v = out[i] & lmask;     // isolate bit j
    d |= (v<<(7-j))>>j;     // move it to bit 7-j without a branch
    lmask <<= 1;            // select the next bit
  }
  PORTD = d;
}

There are many ways to do this, but all resulted in (1) correct output and (2) slow code, more than an order of magnitude slower than the “plain” code with correctly wired data lines. Whatever I tried to mirror the bits, it always took a couple of hundred cycles.
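One data-independent alternative that should also be fast (a sketch, not something I benchmarked on the Uno) is a 256-entry lookup table, filled once at startup:

unsigned char mirror[256];  /* mirror[b] is b with its bits reversed */

void init_mirror(void) {
  for (int b = 0; b < 256; b++) {
    unsigned char d = 0;
    for (int j = 0; j <= 7; ++j)
      if (b & (1 << j)) d |= 128 >> j;
    mirror[b] = d;
  }
}

/* output loop: one lookup per sample, same cost for every byte value */
for (int i = 0; i < N; i++)
  PORTD = mirror[out[i]];

The table costs 256 of the ATmega328’s 2048 bytes of RAM, but the lookup time no longer depends on the Hamming weight of the data.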

In summary, a problem which looked like a nonlinearity in a D/A converter output was actually caused by my code taking longer when more bits were set in the output, resulting in a larger time step for higher voltages.

Odds and ends


Fig. 5: At a time step of roughly half a microsecond, the behavior of the D/A converter and the Arduino becomes visible

Without mirroring, the time step between successive updates is much smaller than the worst-case settling time of the D/A converter. Figure 5 shows a pure sine function again. One can clearly discern spikes at the points where lots of input bits change concurrently, e.g., at half of the maximum voltage. I don’t know if this is caused by the outputs of the Arduino port not being perfectly synchronous, or if it is an inherent property of the converter. In practice it probably should not matter since there will be some kind of buffer/amplifier/offset compensation circuit that acts as a low-pass filter.

Figure 5 shows yet another oddity: from time to time, the Arduino seems to take a break (there is a plateau at about 75% of the width). This first showed up on the scope as a “ghost image,” i.e., there was a second, shifted version of the waveform on the screen. Fortunately, my Hameg scope has a built-in digital storage option, and after a couple of tries I could capture what is seen in Fig. 5. These plateaus are roughly 6-7 µs in length and turn up regularly every 1.1 ms or so. I checked that this is not something that occurs in between calls to the loop() function; it is really regular, and asynchronous to whatever else happens on the Arduino. I don’t know exactly what it is, but the frequency of these plateaus is suspiciously close to the frequency of the PWM outputs on pins 5 and 6 of the Uno (960 Hz), so it might have something to do with that.

You can download the code of the sketch here: dftsynth.c. Remember that this is just a little test code; it only implements a pure sine transform. I am aware that the Arduino Uno with its limited RAM is not the ideal platform for actually producing audio output.

Please drop me a note if you have any comments or questions.