Georg Hager's Blog

ISC15 Workshop on “Performance Modeling: Methods and Applications”

Posted by Georg Hager

This year’s International Supercomputing Conference (ISC15) in Frankfurt is going to host workshops for the first time. Luckily, our proposal was accepted, so there will be a workshop on “Performance Modeling: Methods and Applications,” featuring many renowned experts on performance modeling techniques. Bill Gropp has agreed to present the keynote talk, and we aim for a broad overview of the field that should be suitable for an audience with a wide range of prior knowledge. So if you plan to visit ISC, July 16 is the date! See the ISC15 Agenda Planner – Performance Modeling Workshop for more information.

Fooling the masses – Stunt 15: Play mysterious!

Posted by Georg Hager

(See the prelude for some general information on what this is all about)

Do you know that feeling when you read a paper and think to yourself, “What on earth did they do there?” After some depressing hours of trying to figure out how those guys got their results, you just give up, shrug, and cite them in your related work section because everybody else does. “My goodness,” you think, “I wish I knew how to do this myself.” Here are some ideas.

Make it as hard as possible for others to figure out what you actually did, so that they can’t compare your results to theirs right away. A popular strategy is to present problem sizes, i.e., the “amount of work,” separately (on a different page) from performance results, which in turn should be given in seconds (time to solution). This will force your audience to figure out the conventional “work/time” performance metric by themselves; many will give up and just believe your statement that you are “faster than X.” It will also be harder to compare with performance models, which is a good thing if you’re off the model by some large factor. Runtime is not the only effective metric; anything unusual will do, such as “microflops per picosecond” or “cache misses per core hour.” Combine with stunt 12 as needed.

Test case | Problem size [grid points] | # iterations | Runtime [s]
----------|----------------------------|--------------|------------
car       | see page 4                 | 500          | 2.34521
plane     | see page 6                 | sufficient   | 3.14159
train     | see page 2                 | roughly 112  | 0.11991
chicken   | see page 12                | whatever     | 42.0
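For the rare reader willing to do the detective work, recovering the conventional metric is just arithmetic: multiply the problem size by the number of iterations and divide by the runtime. A minimal C++ sketch for the “car” case (the grid size is hypothetical here, since the real one is, true to form, on another page):

```cpp
#include <cstdio>

int main() {
    // Hypothetical values: the grid size is allegedly on "page 4"
    const double grid_points = 1.0e6;
    const double iterations  = 500;
    const double runtime_s   = 2.34521;

    // The conventional "work/time" metric the authors made us compute,
    // in million grid-point updates per second
    const double mlups = grid_points * iterations / runtime_s / 1.0e6;
    std::printf("Performance: %.1f MLUP/s\n", mlups);
    return 0;
}
```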

It’s really useful to be secretive about the exact hardware you were using. “eight-core AMD Opteron” or “six-core Intel Xeon” will usually suffice as a description. You may throw in the clock speed for good measure, but don’t mention whether you have activated Turbo Mode or not, or whether you used one socket or two, or how you enforced affinity. On the other hand, never forget to specify the exact Linux distribution and kernel version! No performance paper will be complete without it. We don’t want to hide anything after all, right?

Talking about openness: leave a good impression by making your software framework available for download. But when you do, take care to make it impossible to actually run it. Forge a totally convoluted, mind-twisting build system. Put in dependencies on several huge Java frameworks with version numbers far into the past or the future (requiring gcc 2.95.2 will work fine, too!). Never write plain C code; always generate it, using popular languages such as Logo, Java2k, Whitespace, or Brainfuck (pick any two). Because: what they can’t run, they can’t criticize! (Think “Stupida Mouse!”) Give your software framework a fancy, foreign-sounding name, preferably from a Romance language. Acronyms are so 1980s¹, but “Inocentada,” “Vanitas,” “Nebulosità,” or “Ordure” will really cut the mustard.

If your strategy works you can even afford to be bold in your abstract: “Our framework achieves up to 34% more performance than previous work.” Since nobody can refute that claim due to your bleeding-edge camouflage tactics, you have laid the foundation of what will be known in a few years as “seminal work.”


¹ So I was told by Julian.

Attendee survey results from the SC14 “Code Optimization War Stories” BoF

Posted by Georg Hager

At SC14 in New Orleans we (Jan Treibig and I, with help from Thomas William from TU Dresden) conducted, for the first time, a BoF with the title “Code Optimization War Stories.” Its purpose was to share and discuss several interesting and (hopefully) surprising ventures in code optimization. Despite the fact that we were accused by an anonymous reviewer of using “sexist language” that was “off-putting and not welcoming” (the offending terms in our proposal being “man-hours” and “war stories”), the BoF sparked significant interest, with some 60 attendees. In parallel to the talks and discussions, participants were asked to fill out a survey about several aspects of their daily work: which programming languages and programming models they use, how they approach program optimization, whether they use tools, etc. The results are now available, after I finally got around to typing in the answers given on the paper version; fortunately, more than half of the people went for the online version. Overall we had 29 responses. Download the condensed results here: CodeOptimizationWarStories_SurveyResults.pdf

The results are, in my opinion, partly surprising and partly expected. Note that there is an inherent bias here, since the audience was composed of people specifically interested in code optimization (according to the first survey question, they were mostly developers and domain scientists); believe it or not, such a crowd is by no means a representative sample of all attendees of SC14, let alone the whole HPC community. The following observations should thus be taken with a grain of salt. Or two.


Management summary: In short, people mostly write OpenMP and MPI-parallel programs in C, C++, or Fortran. The awareness of SIMD is surprisingly strong, which is probably why single-core optimization is almost as popular as socket, node, or massively parallel tuning.  x86 is the dominant architecture, time to solution is the most important target metric, and hardly anyone cares about power dissipation. The most popular tools are the simple ones: gprof and friends mostly, supported by some hardware metrics. Among those, cache misses are still everybody’s darling. Among the few who use performance models at all, it’s almost exclusively Roofline.

[Figure: What is the programming language that you mostly deal with when optimizing code?]

Here are the details:

  • Which area of science does the code typically come from that you work on? Physics leads the pack here (59%), followed by maths, CFD, and chemistry.
  • Programming languages. Fortran is far from dead: 52% answered “Fortran 90/95” when asked for the programming languages they typically deal with. Even Fortran 77 is still in the game with 24%. However, C++98 (69%) and C (66%) are clearly in the lead. For me it was a little surprising that C++11 already almost matches Fortran at 45%. However, this popularity is not caused by the new threading features (see next item).
  • Typical parallel programming models. Most people unsurprisingly answered “OpenMP” or “MPI” (both 76%). The unexpected (at least in my view) runner-up with 69% is SIMD, which is good since it finally seems to get the attention it deserves! CUDA landed at a respectable 31%, and C++11 threads trail behind at 14%, beaten even by the good old POSIX threads (17%). The PGAS pack (GASPI, Global Arrays, Co-Array Fortran, UPC) hits rock bottom, being less popular than Java threads and Cilk (7% and 10%, respectively). OpenCL struggles at a mere 14%. If it weren’t for this “grain of salt” thing (see above), one might be tempted to spot a few sinking ships here…
  • Do you optimize for single core, socket/node, or highly parallel? Code optimization efforts were almost unanimously reported to revolve around the highly parallel (69%) or socket/node level (76%), but 52% also care about single-core issues. The latter does not quite match the 69% who say they write SIMD-parallel code, which leaves some room for speculation. Anyway it is good to see that some people still don’t lose sight of the single core even in a highly parallel world.

    [Figure: What is the parallel programming model that you mostly deal with when optimizing code?]

  • Computer architectures. x86-based systems are clearly in the lead (90%) in terms of target architectures, because there’s hardly anything left. IBM Blue Gene (7%) and Cray MPPs (3%) came out very low, which is entirely expected: although big machines do get a lot of media attention, most people (have to) deal with the commodity stuff under their desks and in local computing centers. Xeon Phi (34%) and Nvidia GPUs (31%) go almost head to head; a clear success for Intel’s aggressive marketing strategy.
  • Typical optimization target metrics. This was a tricky one, since many metrics overlap significantly, and it is not possible to use one and not touch the other. Still, it is satisfying that low time to solution comes out first (72%), followed by speedup (59%) and high work/time ratio (52%). It would be interesting to analyze whether there is a correlation or an anti-correlation between these answers across the attendees; maybe I should have added “-O0” as another optimization option in the next question 😉 Low power and/or low energy, although supposedly “pivotal today” (if you believe the umpteenth research paper on the subject), are just barely above radar level. Probably there’s just not enough pressure from computing centers, or probably there weren’t enough attendees from countries where electric energy isn’t almost free, who knows?
  • Optimization strategies. No surprises here: 90% try to find a good choice of compiler options, and standard strategies like fixing load imbalance, doing loop transformations, making the compiler understand the code better, and optimizing communication patterns and volume, etc., are all quite popular. Autotuning methods mark the low end of the spectrum. It seems that the state-of-the-art autotuning frameworks haven’t made it into the mainstream yet.
  • Approaches for saving energy (or related). The very few people who actually did some energy-related optimization have understood that good performance is the lowest-order effect in getting good energy efficiency. Hardly anyone tried voltage and/or frequency scaling for saving energy. This may have to do with the fact that few centers allow users to play with these parameters, or provide well-defined user interfaces. Btw, since release 3.1.2 of the LIKWID tools we support likwid-setFrequencies, a simple program which allows you to set the CPU clock speed from the shell.
    [Figure: If you have used performance tools, which ones?]

  • Popular performance tools. 76% answered that they had used some tool for optimizing application code at least once. However, although optimizing highly parallel applications is quite popular (see above), the typical parallel tools such as Vampir or Scalasca are less fashionable than expected (both 10%, even behind Allinea MAP), and TAU is only used by 28%. On the other hand, good old gprof or similar compiler-based facilities still seem to be very important (48%). There were some popular tools I hadn’t thought of (unforgivably) when putting together the survey, and they ended up in the “other” category: Linux perf, valgrind, HPC Toolkit. The whole picture shows that people do deal with single-thread/node issues a lot. LIKWID deserves a little more attention, though :-)

  • What are the performance tools used for? I put in “producing colorful graphs” as a fun option, but 21% actually selected it! Amazing, they must have read my “Show data! Plenty.” blog post. Joke aside, 76% and 66%, respectively, reported that they use tools for identifying bottlenecks and initial code assessment. Only very few employ tools for validating performance models, which is sad but will hopefully change in the future.
  • What are the most popular hardware performance events? Ahead even of clock cycles (31%) and Flops (41%), cache misses are the all-time winner with 55%. This is not unexpected, cache misses being the black plague of HPC and all. Again, somewhat surprisingly, SIMD-related metrics are strong with 28%. Hardly anyone looks at cross-NUMA domain traffic, and literally nobody uses hardware performance counters to study code balance (inverse computational intensity).
  • Do you use performance models in your optimization efforts? The previous two questions already gave a hint towards the outcome of this one:  41% answered “No,” 21% answered “Yes,” and 24% chose “What is this performance modeling stuff anyway?” Way to go.
  • Which performance model(s) do you use? Among those who do use performance models, Roofline is (expectedly) the most popular by far, although with only six voters the statistics are thin. (The model is sketched right after this list for reference.)
  • The remaining questions have dealt with how useful it would be to have a dedicated “code optimization community” (whatever that means in practice), which received almost unanimous approval, and how attendees rated the BoF as a whole. 62% want to see more of this stuff in the future, which we interpret as encouraging. Two people even considered it the “best BoF ever.” So there.
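For readers who haven’t come across it: the basic Roofline model (in its standard textbook form; the notation below is mine, not from the survey) bounds the performance P of a loop by

```latex
P = \min\left( P_{\mathrm{max}},\; I \cdot b_S \right)
```

where \(P_{\mathrm{max}}\) is the applicable peak performance, \(I\) is the computational intensity in flops per byte (its inverse is the code balance \(B = 1/I\) mentioned above), and \(b_S\) is the achievable memory bandwidth.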

As I wrote above, there is a strong bias in these answers, and the results are clearly not representative of the HPC community. But if we accept that those who attended the BoF are really interested in optimizing application code, the conclusion must be that recent developments in tools and models trickle into this community at a very slow rate.

 

SC14 Intel Parallel Universe Computing Challenge: The Squad fails in the semi-final

Posted by Georg Hager

All good things must end. In a heroic battle the Gaussian Elimination Squad was beaten by the “Brilliant Dummies” from Seoul University. We were quite a bit ahead after the trivia round, but the coding challenge was a real neck-breaker. We couldn’t even find the spot where one should start leveraging parallelism, and we ended up with a disappointing speedup of just above 1.

The Squad congratulates the Korean team; it was a great fight! We hope to return next year to SC15 in Austin, Texas.

 

SC14 Parallel Universe Computing Challenge Match #1: The Squad smashes the “Invincible Buckeyes”

Posted by Georg Hager

At 8pm Central Time today, the Gaussian Elimination Squad fought their first round of the Intel Parallel Universe Computing Challenge against the “Invincible Buckeyes,” representing the Ohio Supercomputing Center and Ohio State University, at the SC14 conference. I have to admit that I needed to look up “buckeye” in the dictionary, where I learned that “buckeye” is a tree (going by the scientific name “Aesculus,” also known as the “chestnut”) as well as a nickname for residents of the U.S. state of Ohio. Nice pun, but no match for the kick-ass, brilliancy-radiating nom de guerre of the German team!

Just as last year, this is a single-elimination tournament, with eight teams fighting one-on-one in seven matches. Each match consists of two parts. The first is a trivia round, with questions about computing, parallelism, the history of supercomputing, and the SC conference series. The faster you select the correct answer (out of four), the more points you collect. After the trivia we had a slight advantage, but there was no reason to be over-confident. In the second part, the coding challenge, we had ten minutes to parallelize and optimize a piece of code (that we hadn’t seen before, of course). In a team effort without equal we managed to get a mind-boggling 240x performance speedup compared to the original version on the Intel Xeon Phi! The Buckeyes were stuck at something like 6x. Hence, this round goes to The Squad, and we are eagerly looking forward to the semi-finals on Wednesday. 

Team members for “Gaussian Elimination Squad” now official

Posted by Georg Hager

In case you haven’t heard it yet: The “Gaussian Elimination Squad” will defend their title in the “Intel Parallel Universe Computing Challenge” at SC14 in New Orleans. We have finally put together the team! Here it is:

  • Christian Terboven (University of Aachen IT Center)
  • Michael Ott (Leibniz Supercomputing Center)
  • Guido Juckeland (Technical University of Dresden)
  • Michael Kluge (Technical University of Dresden)
  • Christian Iwainsky (Technical University of Darmstadt)
  • Jan Treibig (Erlangen Regional Computing Center)
  • Georg Hager (Erlangen Regional Computing Center)

The teams all get their share of attention now at Intel’s website. AFAIK they are still looking for submissions, so take your chance to get thrashed! Figuratively speaking, of course.

“Gaussian Elimination Squad” to fight again at SC14

Posted by Georg Hager

Intel has decided to do it again: There will be another “Parallel Universe Computing Challenge” at the SC14 conference in New Orleans, Louisiana (Nov 16-21, 2014). The German team, the “Gaussian Elimination Squad,” glorious winner of last year’s challenge, will fight to defend their title! Read about it in my interview.

Blaze 2.1 has been released

Posted by Georg Hager

Blaze is an open-source, high-performance C++ math library for dense and sparse arithmetic based on Smart Expression Templates. Just in time for ISC’14, Blaze 2.1 has been released. The focus of this latest release is the improvement and extension of the shared memory parallelization capabilities: In addition to OpenMP parallelization, Blaze 2.1 now also enables shared memory parallelization based on C++11 and Boost threads.  Additionally, the underlying architecture for the parallel execution and automatic, context-sensitive optimization has been significantly improved. This not only improves the performance but also allows for an efficient parallel handling of computations involving block-structured vectors and matrices.
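To illustrate what this looks like from the user’s perspective, here is a minimal sketch (my own example, not from the release notes). The point of Smart Expression Templates is that the assignment below is evaluated in a single pass, and with a shared memory backend enabled, Blaze decides internally whether and how to parallelize it:

```cpp
#include <blaze/Math.h>

int main() {
    // Dense 2000x2000 matrix and vectors, initialized to constant values
    blaze::DynamicMatrix<double> A( 2000UL, 2000UL, 1.0 );
    blaze::DynamicVector<double> x( 2000UL, 2.0 ), y;

    // The whole right-hand side is one expression template; with the
    // OpenMP, C++11, or Boost thread backend enabled, Blaze evaluates
    // it in parallel, without any change to this user code.
    y = A * x + x;

    return 0;
}
```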

All of the Blaze documentation and code can be downloaded at https://code.google.com/p/blaze-lib/.

Bill Gropp on “Engineering for Performance in High Performance Computing”

Posted by Georg Hager

At the PASC14 conference in Zurich, Bill Gropp gave a great talk on Performance Engineering, the way it should be done. 100% along the lines of what we try to teach people in our tutorials! https://www.youtube.com/watch?v=sadfSARXSC0

Gaussian Elimination Squad wins Intel Parallel Universe Computing Challenge at SC13!

Posted by Georg Hager

[Figure: Final team bracket for the Intel Parallel Universe Computing Challenge]

We’ve made it! The final round against the “Coding Illini” team, comprising members from the University of Illinois and the National Center for Supercomputing Applications (NCSA), took place on Thursday, November 21, at 2pm Mountain Time. We were almost 30% ahead of the other team after the trivia, and managed to extend that head start in the coding challenge, which required the parallelization of a histogram computation. This involved quite a lot of typing, since OpenMP does not allow private clauses and reductions for arrays in C++. My request for a standard-sized keyboard was denied, so we had to cope with a compact one, which made fast typing a little hard. As you can see in the video, a large crowd had gathered around the Intel booth to cheer for their teams.
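For the curious, here is a minimal sketch of the kind of workaround that causes all that typing (my own illustration, not the actual challenge code; names and types are made up). Each thread fills a private copy of the histogram, and the copies are merged explicitly at the end:

```cpp
#include <cstddef>
#include <vector>

// Parallel histogram without an OpenMP array reduction: thread-private
// copies plus an explicit merge. Assumes hist is zero-initialized and
// every value in data lies in [0, hist.size()).
void histogram(const std::vector<int>& data, std::vector<long>& hist) {
    const std::size_t nbins = hist.size();
    #pragma omp parallel
    {
        std::vector<long> local(nbins, 0); // private histogram per thread
        #pragma omp for nowait
        for (std::size_t i = 0; i < data.size(); ++i)
            ++local[data[i]];
        #pragma omp critical  // serialize only the short merge phase
        for (std::size_t b = 0; b < nbins; ++b)
            hist[b] += local[b];
    }
}
```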

Remember, each match consisted of two parts: The first was a trivia challenge with 15 questions about computing, computer history, programming languages, and the SC conference series. We were given 30 seconds per question, and the faster we logged in the correct answer (out of four), the more points we got. The second part was a coding challenge, which gave each team ten minutes to speed up a piece of code they had never seen before as much as possible. On top of it all, the audience could watch what we were doing on two giant screens.

[Photo: The Gaussian Elimination Squad, left to right: Michael Kluge, Michael Ott, Joachim Protze, Christian Iwainsky, Christian Terboven, Guido Juckeland, Damian Alvarez, Gerhard Wellein, and Georg Hager. Image: S. Wienke, RWTH Aachen]

In the first match against the Chinese “Milky Way” team, the code was a 2D nine-point Jacobi solver; an easy task by any measure. We did well on that, with the exception that we didn’t think of using non-temporal stores for the target array, something that Gerhard and I had described in detail the day before in our tutorial! Fortunately, the Chinese had the same problem and we came out first. In the semi-final against Korea, the code was a multi-phase lattice-Boltzmann solver written in Fortran 90. After a first surge of optimism (after all, LBM has been a research topic in Erlangen for more than a decade) we saw that the code was quite complex. We won, but we’re still not quite sure whether it was sheer luck (and the Korean team not knowing Fortran). Compared to that, the histogram code in the final match was almost trivial, apart from the typing problem.
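For illustration, here is roughly what we should have done (a sketch under assumptions, not the competition code; the equal-weight stencil is made up). With the Intel compiler, the nontemporal pragma requests streaming stores for the target array, which avoids the write-allocate transfer of dst cache lines and saves roughly a third of the memory traffic:

```cpp
// 2D nine-point Jacobi sweep with streaming (non-temporal) stores.
// #pragma vector nontemporal is an Intel compiler hint; other compilers
// would need intrinsics such as _mm_stream_pd() instead.
void sweep(int nx, int ny, const double* src, double* dst) {
    #pragma omp parallel for
    for (int j = 1; j < ny - 1; ++j) {
        #pragma vector nontemporal (dst)
        for (int i = 1; i < nx - 1; ++i) {
            const int c = j * nx + i; // linearized index of grid point (i,j)
            dst[c] = ( src[c - nx - 1] + src[c - nx] + src[c - nx + 1]
                     + src[c - 1]      + src[c]      + src[c + 1]
                     + src[c + nx - 1] + src[c + nx] + src[c + nx + 1] ) / 9.0;
        }
    }
}
```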

[Photo: the back of our team shirts]

According to an enthusiastic spectator, we had “the nerdiest team name ever!”

After winning the challenge, the “Gaussian Elimination Squad” proudly presented a check for a $25,000 donation to benefit the Philippine Red Cross.

I think that all teams did a great job, especially when taking into account the extreme stress levels caused by the insane time limit, and everyone watching the (sometimes not-so-brilliant) code that we wrote! It’s striking that none of the US HPC leadership facilities made it to the final match – they all dropped out in the first round (see the team bracket above).

The Gaussian Elimination Squad represented the German HPC community, with members from RWTH Aachen (Christian Terboven and Joachim Protze), Jülich Supercomputing Center (Damian Alvarez), ZIH Dresden (Michael Kluge and Guido Juckeland), TU Darmstadt (Christian Iwainsky), Leibniz Supercomputing Center (Michael Ott), and Erlangen Regional Computing Center (Gerhard Wellein and Georg Hager). Only four team members were allowed per match, but the others could help by shouting advice and answers they thought were correct.

 
