KONWIHR

Kompetenznetzwerk für wissenschaftliches Höchstleistungsrechnen in Bayern (Competence Network for Scientific High Performance Computing in Bavaria)

Contents

Seminar talk: Algorithms and data structures for matrix-free finite element multigrid operators with MPI-parallel sparse multi-vectors

Course: C++ for Beginners

Date:

February 11-15, 2019, 9:00 – 17:00

Location:

FAU Erlangen-Nürnberg,
Computer Science Building
Martensstr. 3, Room 02.135-113 (2nd floor)
91058 Erlangen, Germany

Summary:

This C++ training is an introductory course on the C++ programming language. It focuses on the essential language features and the syntax of C++, and it additionally introduces many C++ software development principles, concepts, idioms, and best practices that enable programmers to create professional, high-quality code from the very beginning. The course aims at an understanding of the core of the C++ programming language, teaches guidelines for developing mature, robust, maintainable, and efficient C++ software, and helps to avoid the most common pitfalls. Attendees should have a grasp of general programming (in any language).


Course: Advanced C++ with Focus on Software Engineering

This advanced C++ training is a course on object-oriented (OO) software design with the C++ programming language. The focus of the training is on the essential OO and C++ software development principles, concepts, idioms, and best practices, which enable programmers to create professional, high-quality code. The course will not address special areas and applications of C++, such as template metaprogramming (TMP), or the quirks and curiosities of the C++ language. It rather teaches guidelines to develop mature, robust, and maintainable C++ code.

  • Date: February 28 – March 2, 2018, 9:00 – 17:00
  • Location: FAU Erlangen-Nürnberg, Computer Science Building
    Martensstr. 3
    Room 02.135-113 (2nd floor)
    91058 Erlangen, Germany

This course is made possible by generous support from KONWIHR. There is no course fee for participants from academia and public research facilities. Details (including the agenda and prerequisites for participants) can be found at https://www.lrz.de/services/compute/courses/2018-02-28_hcpa2w17/.

Registration: https://www.lrz.de/services/schulung/course-registration/course-registration.php

Please use the Course ID HCPA2W17.


Course: Introduction to hybrid programming in HPC @ LRZ

Most HPC systems are clusters of shared memory nodes. Such SMP nodes range from small multi-core CPUs to large many-core CPUs. Parallel programming may combine distributed memory parallelization across the node interconnect (e.g., with MPI) with shared memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyses the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket multi-core systems in highly parallel environments are given special consideration. MPI-3.0 has introduced a new shared memory programming interface, which can be combined with inter-node MPI communication. It can be used for direct neighbour accesses similar to OpenMP or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming.

Tools for hybrid programming, such as thread/process placement support and performance analysis, are presented in a "how-to" section. This course provides scientific training in Computational Science and also fosters scientific exchange among the participants.

Similar "MPI+X" tutorials about hybrid programming have been successfully presented by the lecturers Dr. Rolf Rabenseifner (HLRS, member of the steering committee of the MPI-3 Forum) and Dr. Georg Hager (RRZE, winner of the "Informatics Europe Curriculum Best Practices Award: Parallelism and Concurrency") during various supercomputing conferences in the past.

Teachers: Dr. Georg Hager (RRZE/HPC Uni. Erlangen), Dr. Rolf Rabenseifner (HLRS)
Course date: Jan 18, 2018, 10:00-17:00
Registration deadline: Jan 07, 2018

More information about the course and the registration form can be found online on the LRZ and PATC pages: https://www.lrz.de/services/compute/courses/2018-01-18_hhyp1w17/ and http://events.prace-ri.eu/e/LRZ-2018-HHYP1W17


Course: Node-Level Performance Engineering @ LRZ

It is our pleasure to announce a repeat of our successful two-day short course on "Node-Level Performance Engineering" on Thursday, Nov 30 – Friday, Dec 1, 2017, 9:00 – 17:00 at the Leibniz Supercomputing Centre in Garching. The course is free of charge, as it is sponsored through the PATC program of the Partnership for Advanced Computing in Europe (PRACE). It is an extended version of the very popular regular SC/ISC tutorials by RRZE.

This course teaches performance engineering approaches on the compute node level. "Performance engineering" as we define it is more than employing tools to identify hotspots and bottlenecks. It is about developing a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code that does the actual computational work is executed. Once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of optimizations can often be predicted. We introduce a "holistic" node-level performance engineering strategy, apply it to different algorithms from computational science, and also show how an awareness of the performance features of an application may lead to notable reductions in power consumption.

For more information and registration, please see: https://events.prace-ri.eu/event/632/

8th Erlangen International High-End-Computing Symposium (EIHECS) on June 22, 2012

Cutting-edge research depends more than ever on the capabilities of high-end computing. Simulations increasingly replace elaborate experiments; complex theoretical models are often only useful in combination with computer-based calculations. Computer-aided optimization of processes and technical systems is the key to developing competitive products for the world market. But high-end computing is also increasingly recognized as a powerful tool in medicine, economics, and the humanities.

This year's EIHECS will again take stock of high-end computing from an international perspective and highlight current and future developments. The symposium takes place

on Friday, June 22, 2012, from 13:30 to 17:30
in lecture hall 12 (Cauerstr. 11, 91058 Erlangen)
at Friedrich-Alexander-Universität Erlangen-Nürnberg.

Five internationally renowned speakers have been secured for this year's event:

  • Prof. Richard Vuduc (Georgia Tech)
  • Prof. Omar Ghattas (University of Texas)
  • Prof. Franz Durst (FMP Technology GmbH)
  • Prof. Jack Dongarra (University of Tennessee)
  • Prof. Günter Leugering (Universität Erlangen)

Further information is available at http://www.zisc.uni-erlangen.de/veranstaltungen/eihecs-8.shtml

Participation is free of charge. To facilitate planning, please register via the website above.

Multicore Technology Briefing for Decision Makers from Academia and Industry

On Thursday, October 13, 2011, the Zentralinstitut für Scientific Computing of Friedrich-Alexander-Universität Erlangen-Nürnberg will hold a Multicore Technology Briefing aimed at decision makers from academia and industry. It gives an overview of hardware and software aspects in the multicore/GPGPU era as well as facts and myths of performance optimization.

Registration is required for this event.

Further information: http://www.zisc.uni-erlangen.de/veranstaltungen/multicore-briefing.shtml

Workshops and Tutorials for High Performance Computing at LRZ

The following LRZ workshops and tutorials for High Performance Computing have been scheduled for autumn 2010 and winter 2010/11:

* Eclipse for C/C++ programming (with a slight Fortran touch), Oct 1, 2010
* Compact course: Iterative linear solvers and parallelization, Oct 4 – Oct 8, 2010
* Advanced Fortran Topics, Oct 11 – Oct 15, 2010
* Introduction to C++ for Programmers (Einführung in C++ für Programmierer), Oct 11 – Oct 15, 2010
* Parallel Performance Analysis with VAMPIR, Oct 18, 2010
* Introduction to the Usage of High Performance Systems, Remote Visualization and Grid Facilities at LRZ, Oct 20, 2010
* Intel Ct Training, Nov 30 – Dec 1, 2010
* GPGPU Programming, Dec 7 – Dec 9, 2010
* Scientific 3D-Animation with Blender, Jan 13 – Jan 14, 2011
* Introduction to the PGAS languages UPC and CAF, Jan 19, 2011
* Introduction to Molecular Modeling on Supercomputers, Jan 25 – Jan 27, 2011
* Programming with Fortran, Feb 7 – Feb 11, 2011
* Parallel programming with R, Feb 15, 2011
* Visualisation of Large Data Sets on Supercomputers, Feb 23, 2011
* Parallel Programming of High Performance Systems, Mar 7 – Mar 11, 2011
* Advanced Topics in High Performance Computing, Mar 21 – Mar 23, 2011

Please consult http://www.lrz.de/services/compute/courses for details.

6th Erlangen International High-End-Computing Symposium

The Erlangen International High-End-Computing Symposium takes stock of high-end computing from an international perspective and highlights future developments. Once again, four internationally renowned speakers have been secured for this year's event.

Cutting-edge research depends more than ever on the capabilities of high-end computing. Simulations increasingly replace elaborate experiments; complex theoretical models are often only useful in combination with computer-based calculations. Computer-aided optimization of processes and technical systems is the key to developing competitive products for the world market. But high-end computing is also increasingly recognized as a powerful tool in medicine, economics, and the humanities. This year, the 6th Erlangen International High-End-Computing Symposium (EIHECS) will again take stock of high-end computing from an international perspective and highlight current and future developments.

The symposium takes place
on Friday, June 4, 2010, from 10:00 to 14:00
in lecture hall 4 (Martensstr. 1, Erlangen)
at the Regionales Rechenzentrum Erlangen
of Friedrich-Alexander-Universität Erlangen-Nürnberg.

Further information is available at http://www10.informatik.uni-erlangen.de/de/Misc/EIHECS6/

Participation is free of charge. Nevertheless, to facilitate planning, please register via the website above.

Intel Ct tutorial at RRZE

Intel has kindly agreed to give a tutorial about their new parallel programming model "Ct". The tutorial will be conducted on Friday, April 16th, 2010, 9:15-12:00 at RRZE (the room will be announced shortly beforehand). If you want to attend, please send an email to hpc@rrze.uni-erlangen.de.

Note that this is not a finished product, and there is not even a public beta release yet. Hence you will be most interested in this presentation if you work in the field of programming languages or parallel programming models.

Abstract

Intel Ct Technology is a high-level descriptive programming model for data-parallel programming. It strives to simplify efficient parallelization of computations over large data sets. Programmers no longer focus on the implementation details of their data-parallel program, but instead express a program’s algorithms in terms of operations on data. Ct’s deterministic semantics avoid race conditions and deadlocks and enable use for both rapid prototyping and production-stable codes.

Ct hides the complexity of mapping the high-level description of the program's operations by employing JIT compilation techniques. Its internal JIT compiler dynamically optimizes a program for whatever hardware is used for execution, automatically emitting vectorized and multi-threaded code. With Ct's JIT compiler it becomes possible to execute the program on multiple computing platforms (e.g., Intel® SSE, Intel® AVX) without recompiling the application. Ct's JIT is key to supporting upcoming execution environments without the need to recompile a program: replacing the Ct library suffices to enable future platforms.

In this tutorial, we introduce participants to the programming model and the execution environment of Intel Ct Technology. We provide an in-depth guide to the basic building blocks of the Ct language: scalar types, dense and sparse vector data types, and vector operations. We present Ct's facilities for controlling an application's control flow and for utilizing different levels of abstraction. Based on real-world scientific codes and other examples, we then show how to construct data-parallel algorithms from these basic building blocks. We demonstrate how to smoothly move an existing sequential code base to a parallel code base. In addition, we illustrate how to utilize external libraries such as the Intel® Math Kernel Library. We close the tutorial with a live demonstration of performance and scalability analysis as well as performance optimization of Ct applications.

Presenter: Michael Klemm, Senior Application Engineer, Intel, Software and Solutions Group

Biographical Information

Since 2008, Michael Klemm has been part of Intel's Software and Services Group, Developer Relations Division. His focus is on High Performance & Throughput Computing Enabling. Michael obtained an M.Sc. with honors in Computer Science in 2003. He received a Doctor of Engineering degree (Dr.-Ing.) in Computer Science from the Friedrich-Alexander-University Erlangen-Nuremberg, Germany. Michael's research focus was on compilers and runtime optimizations for distributed systems. His areas of interest include compiler construction, design of programming languages, parallel programming, and performance analysis and tuning. Michael is a member of ACM and IEEE, and is an active member of the OpenMP Language Committee.