Together with Christie Alappat and Gerhard Wellein, I am organizing a two-part minisymposium at SIAM Parallel Processing 2024 in Baltimore, MD, on March 7, 2024 (the full conference program is available). It is titled “Advancements in Sparse Linear Algebra: Hardware-Aware Algorithms and Optimization Techniques.” This is the abstract:
Over the last decade, the landscape of computer architecture has undergone tremendous transformations. At the same time, the field of sparse linear algebra (LA) has experienced a resurgence of interest, witnessing the emergence of novel algorithms and the revival of traditional ones. The irregular access patterns inherent in sparse LA often pose significant challenges for efficient execution on highly parallel modern computing devices. This minisymposium delves into diverse algorithmic and programming techniques that address these challenges. We will explore various methods for enhancing the computational intensity and node-level performance of sparse algorithms, along with approaches to improve their scalability. The topics covered encompass mixed-precision computation, batched solvers, methods for reducing or hiding communication, cache blocking, and the efficient utilization of parallel paradigms. Naturally, the implementation of some of these techniques is not straightforward and may necessitate algorithmic reformulation. The discussions will shed light on this aspect while also examining the applications, benefits, and limitations of these techniques. The primary objective is to present an overview of cutting-edge sparse LA techniques that effectively leverage hardware capabilities.
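Since the abstract centers on the irregular access patterns of sparse LA, here is a minimal illustrative sketch (my own addition, not part of the program): a sparse matrix-vector product in the common CSR format. The indirect, data-dependent load through the column index array is exactly the access pattern that frustrates caches and vector units on modern hardware.

```python
def csr_spmv(row_ptr, col_idx, vals, x):
    """Compute y = A @ x for a sparse matrix A stored in CSR format.

    row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i;
    col_idx and vals hold their column indices and values.
    """
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            # Indirect access into x: the memory location depends on the
            # sparsity pattern, so it is irregular and hard to prefetch.
            y[i] += vals[j] * x[col_idx[j]]
    return y

# 3x3 example: A = [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals    = [2.0, 1.0, 3.0, 4.0, 5.0]
print(csr_spmv(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # → [3.0, 3.0, 9.0]
```

Several of the talks below (matrix power kernels, cache blocking, communication reduction) can be read as strategies for restructuring or amortizing exactly this kind of kernel.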
Here’s an overview of the agenda:
11:00-11:25 Accelerating Sparse Solvers with Cache-Optimized Matrix Power Kernels
Christie Louis Alappat, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
11:30-11:55 Efficient Schwarz Preconditioning Techniques on Current Hardware Using FROSch
Alexander Heinlein, Delft University of Technology, Netherlands; Sivasankaran Rajamanickam and Ichitaro Yamazaki, Sandia National Laboratories, U.S.
12:00-12:25 Sparse Algorithms for Large-Scale Bayesian Inference Problems
Lisa Gaedke-Merzhäuser and Olaf Schenk, Università della Svizzera italiana, Switzerland
12:30-12:55 Communication-Reduced Sparse-Dense Matrix Multiplication with Adaptive Parallelization
Hua Huang and Edmond Chow, Georgia Institute of Technology, U.S.
3:45-4:10 Residual Inverse Formulation of the FEAST Eigenvalue Algorithm Using Mixed-Precision and Inexact System Solves
Eric Polizzi, University of Massachusetts, Amherst, U.S.
4:15-4:40 How Mixed Precision Can Accelerate Sparse Solvers
Hartwig Anzt, University of Tennessee, U.S.; Terry Cojean, Pratik Nayak, Thomas Gruetzmacher, Yu-Hsiang Mike Tsai, Marcel Koch, Tobias Ribizel, Fritz Goebel, and Gregor Olenik, Karlsruhe Institute of Technology, Germany
4:45-5:10 Algebraic Programming for High Performance Auto-Parallelised Solvers
Albert Jan N. Yzelman, Denis Jelovina, Aristeidis Mastoras, Alberto Scolari, and Daniele Giuseppe Spampinato, Huawei Technologies Switzerland AG, Switzerland
5:15-5:40 Pipelined Sparse Solvers: Can More Reliable Computations Help Us to Converge Faster?
Roman Iakymchuk, Umeå University and Uppsala University, Sweden