Georg Hager's Blog

Random thoughts on High Performance Computing

Fooling the masses – Stunt 16: Worship the God of Automation!

(See the prelude for some general information on what this is all about)

Developing a true understanding of a problem by thorough analysis is so 19th century. When there were no computers, people had to actually sit down with paper and pencil (quill?) and use their brains to figure out stuff. How old-fashioned and boring.

Figure 1: In automation, plugging one framework into the next is pivotal. Why not reuse what you already have?

With the advent of computer science we have added substantial momentum to automation. And it’s not just about automating tedious computations, but also about day-to-day tasks such as filtering information, visualizing data, and (now imagine a heroic fanfare in the background) the process of thinking itself!  With automation we don’t have to care about the validity, usefulness, or tenability of our automatically generated results, because understanding or insight was not asked for. The point is that “stuff” drops out of some program. What else could we wish for?

In high performance computing we find a particularly strong case for automation. Parallel computers are so complicated. Code is complicated. Compilers are complicated. The way performance is composed of all those little bottlenecks in the machine is complicated. So let’s design a machine (i.e., a program) that takes all this intricacy and digests it. This usually means we have to produce and interpret lots of data, but computers are good at crunching data, too! So why not use a fancy data analysis framework to dig out the information? It’s all in the data, so the machine should be able to see through it.

This is all easier than one might think. Do what tool developers do all the time: Build a new meta-tool from all the analysis tools you can lay your hands on (see figure). Then take a bunch of source codes, feed them into your new tool, and draw a diagram that shows in fancy colors that L3 misses and long network latencies are correlated with low performance and high energy consumption. Et voilà: real science!

Of course there are those reactionary die-hards who are still fostering the brain-quill-and-paper approach. But this is the age of cybernation[1]! Steal their thunder by repeating the computer science mantra: “Can this be automated? This can be automated. Automate this!” (a little rocking back and forth with eyes closed will emphasize the religious dimensions of your message).

If they are still obstinate, throw random pieces of Haskell code at them. That should teach them a lesson.

[1] I love the similarity between cybernation and hibernation. It’s cyberlarious.

Preview for SC15 tutorial on “Node-Level Performance Engineering” now available

SC15 solicits video previews of accepted tutorials for the first time this year. So watch the commercial for our SC15 full-day tutorial “Node-Level Performance Engineering”!

Kudos to Jörn Rüggeberg from the RRZE multimedia center for putting together this great piece of art.

Fooling the masses – Stunt 15: Play mysterious!

(See the prelude for some general information on what this is all about)

Do you know that feeling when you think to yourself “What on earth did they do there?” when reading a paper? After some depressing hours of trying to figure out how those guys got their results, you just give up, shrug, and cite them in your related work section because everybody else does. “My goodness” you think, “I wish I knew how to do this myself.” Here are some ideas.

Make it as hard as possible for others to figure out what you actually did, so that they can’t compare your results to theirs right away. A popular strategy is to present problem sizes, i.e., the “amount of work,” separately (on a different page) from performance results, which in turn should be given in seconds (time to solution). This will force your audience to figure out the conventional “work/time” performance metric by themselves; many will give up and just believe your statement that you are “faster than X.” It will also be harder to compare with performance models, which is a good thing if you’re off the model by some large factor. Runtime is not the only effective metric; anything unusual will do, such as “microflops per picosecond” or “cache misses per core hour.” Combine with stunt 12 as needed.

Test case    Problem size   # Iterations   Runtime [s]
car          see page 4     500            2.34521
plane        see page 6     sufficient     3.14159
train        see page 2     roughly 112    0.11991
chicken      see page 12    whatever       42.0
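Just to spell out what this stunt is actually hiding (with numbers made up purely for illustration): if the amount of work W and the wall-clock time T are reported together, anyone can compute the conventional performance metric

\[
P = \frac{W}{T}, \qquad \text{e.g.}\quad P = \frac{10^{9}\ \text{lattice site updates}}{2.5\ \text{s}} = 400\ \text{MLUP/s},
\]

which is exactly why you should keep W and T on different pages.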

It’s really useful to be secretive about the exact hardware you were using. “eight-core AMD Opteron” or “six-core Intel Xeon” will usually suffice as a description. You may throw in the clock speed for good measure, but don’t mention whether you have activated Turbo Mode or not, or whether you used one socket or two, or how you enforced affinity. On the other hand, never forget to specify the exact Linux distribution and kernel version! No performance paper will be complete without it. We don’t want to hide anything after all, right?

Talking about openness: leave a good impression by making your software framework available for download. But when you do, take care to make it impossible to actually run it. Forge a totally convoluted, mind-twisting build system. Put in dependencies on several huge Java frameworks with version numbers far into the past or the future (requiring gcc 2.95.2 will work fine, too!). Never write plain C code, always generate it, using popular languages such as Logo, Java2k, Whitespace, or Brainfuck (pick any two). Because: what they can’t run, they can’t criticize! (Think “Stupida Mouse!”) Give your software framework a fancy, foreign-sounding name, preferably from a romance language. Acronyms are so 1980s[1], but “Inocentada,” “Vanitas,” “Nebulosità,” or “Ordure” will really cut the mustard.

If your strategy works you can even afford to be bold in your abstract: “Our framework achieves up to 34% more performance than previous work.” Since nobody can refute that claim due to your bleeding-edge camouflage tactics, you have laid the foundation of what will be known in a few years as “seminal work.”


[1] So I was told by Julian.

Attendee survey results from the SC14 “Code Optimization War Stories” BoF

At SC14 in New Orleans we (Jan Treibig and I, with help from Thomas William of TU Dresden) conducted, for the first time, a BoF with the title “Code Optimization War Stories.” Its purpose was to share and discuss several interesting and (hopefully) surprising ventures in code optimization. Despite the fact that we were accused by an anonymous reviewer of using “sexist language” that was “off-putting and not welcoming” (the offending terms in our proposal being “man-hours” and “war stories”), the BoF sparked significant interest with some 60 attendees. In parallel to the talks and discussions, participants were asked to fill out a survey about several aspects of their daily work: which programming languages and programming models they use, how they approach program optimization, whether they use tools, etc. The results are now available after I have finally gotten around to typing in the answers given on the paper version; fortunately, more than half of the people went for the online version. Overall we had 29 responses. Download the condensed results here: CodeOptimizationWarStories_SurveyResults.pdf

The results are, in my opinion, partly surprising and partly expected. Note that there is an inherent bias here since the audience was composed of people specifically interested in code optimization (according to the first survey question, they were mostly developers and domain scientists); believe it or not, such a crowd is by no means a representative sample of all attendees of SC14, let alone the whole HPC community. The following observations should thus be taken with a grain of salt. Or two.


Management summary: In short, people mostly write OpenMP and MPI-parallel programs in C, C++, or Fortran. The awareness of SIMD is surprisingly strong, which is probably why single-core optimization is almost as popular as socket, node, or massively parallel tuning.  x86 is the dominant architecture, time to solution is the most important target metric, and hardly anyone cares about power dissipation. The most popular tools are the simple ones: gprof and friends mostly, supported by some hardware metrics. Among those, cache misses are still everybody’s darling. Among the few who use performance models at all, it’s almost exclusively Roofline.

Figure: What is the programming language that you mostly deal with when optimizing code?

Here are the details:

  • Which area of science does the code typically come from that you work on? Physics leads the pack here (59%), followed by maths, CFD, and chemistry.
  • Programming languages. Fortran is far from dead: 52% answered “Fortran 90/95” when asked for the programming languages they typically deal with. Even Fortran 77 is still in the game with 24%. However, C++98 (69%) and C (66%) are clearly in the lead. For me it was a little surprising that C++11 almost matches Fortran at 45% already. However, this popularity is not caused by the new threading features (see next item).
  • Typical parallel programming models. Most people unsurprisingly answered “OpenMP” or “MPI” (both 76%). The unexpected (at least in my view) runner-up with 69% is SIMD, which is good since it finally seems to get the attention it deserves! CUDA landed at a respectable 31%, and C++11 threads trail behind at 14%, beaten even by the good old POSIX threads (17%). The PGAS pack (GASPI, Global Arrays, Co-Array Fortran, UPC) hits rock bottom, being less popular than Java threads and Cilk (7% and 10%, respectively). OpenCL struggles at a mere 14%. If it weren’t for this “grain of salt” thing (see above), one might be tempted to spot a few sinking ships here…
  • Do you optimize for single core, socket/node, or highly parallel? Code optimization efforts were almost unanimously reported to revolve around the highly parallel (69%) or socket/node level (76%), but 52% also care about single-core issues. The latter does not quite match the 69% who say they write SIMD-parallel code, which leaves some room for speculation. Anyway it is good to see that some people still don’t lose sight of the single core even in a highly parallel world.

    Figure: What is the parallel programming model that you mostly deal with when optimizing code?

  • Computer architectures. x86-based systems are clearly in the lead (90%) in terms of target architectures, because there’s hardly anything else left. IBM Blue Gene (7%) and Cray MPPs (3%) came out very low, which is entirely expected: although big machines do get a lot of media attention, most people (have to) deal with the commodity stuff under their desks and in local computing centers. Xeon Phi (34%) and Nvidia GPUs (31%) go almost head to head; a clear success for Intel’s aggressive marketing strategy.
  • Typical optimization target metrics. This was a tricky one since many metrics overlap significantly, and it is not possible to use one and not touch the other. Still, it is satisfying that low time to solution comes out first (72%), followed by speedup (59%) and high work/time ratio (52%). It would be interesting to analyze whether there is a correlation or an anti-correlation between these answers across the attendees; maybe I should have added “-O0” as another optimization option in the next question 😉 Low power and/or low energy, although supposedly “pivotal today” (if you believe the umpteenth research paper on the subject), are just slightly above radar level. Probably there’s just not enough pressure from computing centers, or perhaps there weren’t enough attendees from countries where electric energy isn’t almost free, who knows?
  • Optimization strategies. No surprises here: 90% try to find a good choice of compiler options, and standard strategies like fixing load imbalance, doing loop transformations, making the compiler understand the code better, and optimizing communication patterns and volume, etc., are all quite popular. Autotuning methods mark the low end of the spectrum. It seems that the state-of-the-art autotuning frameworks haven’t made it into the mainstream yet.
  • Approaches for saving energy (or related). The very few people who actually did some energy-related optimization have understood that good performance is the lowest-order effect in getting good energy efficiency. Hardly anyone tried voltage and/or frequency scaling for saving energy. This may have to do with the fact that few centers allow users to play with these parameters, or provide well-defined user interfaces. Btw, since release 3.1.2 of the LIKWID tools we support likwid-setFrequencies, a simple program which allows you to set the CPU clock speed from the shell.
    Figure: If you have used performance tools, which ones?

  • Popular performance tools. 76% answered that they had used some tool for optimizing application code at least once. However, although optimizing highly parallel applications is quite popular (see above), the typical parallel tools such as Vampir or Scalasca are less fashionable than expected (both 10%, even behind Allinea MAP), and TAU is only used by 28%. On the other hand, good old gprof or similar compiler-based facilities still seem to be very important (48%). There were some popular tools I hadn’t thought of (unforgivably) when putting together the survey, and they ended up in the “other” category: Linux perf, valgrind, HPC Toolkit. The whole picture shows that people do deal with single-thread/node issues a lot. LIKWID deserves a little more attention, though 🙂

  • What are the performance tools used for? I put in “producing colorful graphs” as a fun option, but 21% actually selected it! Amazing, they must have read my “Show data! Plenty.” blog post. Joke aside, 76% and 66%, respectively, reported that they use tools for identifying bottlenecks and initial code assessment. Only very few employ tools for validating performance models, which is sad but will hopefully change in the future.
  • What are the most popular hardware performance events? Ahead even of clock cycles (31%) and Flops (41%), cache misses are the all-time winner with 55%. This is not unexpected, cache misses being the black plague of HPC and all. Again, somewhat surprisingly, SIMD-related metrics are strong with 28%. Hardly anyone looks at cross-NUMA domain traffic, and literally nobody uses hardware performance counters to study code balance (inverse computational intensity).
  • Do you use performance models in your optimization efforts? The previous two questions already gave a hint towards the outcome of this one:  41% answered “No,” 21% answered “Yes,” and 24% chose “What is this performance modeling stuff anyway?” Way to go.
  • Which performance model(s) do you use? Among those who do use performance models, Roofline is (expectedly) the most popular by far, although the statistics are thin with only six voters; a one-line sketch of the model follows after this list.
  • The remaining questions have dealt with how useful it would be to have a dedicated “code optimization community” (whatever that means in practice), which received almost unanimous approval, and how attendees rated the BoF as a whole. 62% want to see more of this stuff in the future, which we interpret as encouraging. Two people even considered it the “best BoF ever.” So there.
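For the record, and tying in with the code balance mentioned in the hardware events item above, here is the naive Roofline model in one line. This is my sketch of the textbook formulation, not something that was part of the survey:

\[
P = \min\!\left(P_{\mathrm{peak}},\; I \cdot b_S\right), \qquad I = \frac{\text{flops}}{\text{bytes transferred}}, \qquad B_c = \frac{1}{I},
\]

where P_peak is the applicable peak performance, b_S the achievable memory bandwidth, I the computational intensity, and B_c the code balance. Measuring data traffic with hardware counters is exactly the kind of input needed to validate such a model against reality.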

As I wrote above, there is a strong bias in these answers, and the results are clearly not representative of the HPC community. But if we accept that those who attended the BoF are really interested in optimizing application code, the conclusion must be that recent developments in tools and models trickle into this community at a very slow rate.


SC14 Intel Parallel Universe Computing Challenge: The Squad fails in the semi-final

All good things must end. In a heroic battle the Gaussian Elimination Squad was beaten by the “Brilliant Dummies” from Seoul University. We were quite a bit ahead after the trivia round, but the coding challenge was a real neck-breaker. We couldn’t even find the spot where one should start leveraging parallelism, and we ended up with a disappointing speedup of just above 1.

The Squad congratulates the Korean team; it was a great fight! We hope to return next year for SC15 in Austin, Texas.


SC14 Parallel Universe Computing Challenge Match #1: The Squad smashes the “Invincible Buckeyes”

At 8pm Central Time today, the Gaussian Elimination Squad fought their first round of the Intel Parallel Universe Computing Challenge against the “Invincible Buckeyes,” representing the Ohio Supercomputing Center and Ohio State University, at the SC14 conference. I have to admit that I needed to look up “buckeye” in the dictionary, where I learned that “buckeye” is a tree (going by the scientific name “Aesculus“, also known as the “chestnut”) as well as a nickname for residents of the U.S. state of Ohio. Nice pun, but no match for the kick-ass, brilliancy-radiating nom de guerre of the German team!

Just as last year, this is a single-elimination tournament, with eight teams fighting one-on-one in seven matches. Each match consists of two parts. The first is a trivia round, with questions about computing, parallelism, the history of supercomputing, and the SC conference series. The faster you select the correct answer (out of four), the more points you collect. After the trivia we had a slight advantage, but there was no reason to be over-confident. In the second part, the coding challenge, we had ten minutes to parallelize and optimize a piece of code (that we hadn’t seen before, of course). In a team effort without equal we managed to get a mind-boggling 240x performance speedup compared to the original version on the Intel Xeon Phi! The Buckeyes were stuck at something like 6x. Hence, this round goes to The Squad, and we are eagerly looking forward to the semi-finals on Wednesday. 

Team members for “Gaussian Elimination Squad” now official

In case you haven’t heard it yet: The “Gaussian Elimination Squad” will defend their title in the “Intel Parallel Universe Computing Challenge” at SC14 in New Orleans. We have finally put together the team! Here it is:

  • Christian Terboven (University of Aachen IT Center)
  • Michael Ott (Leibniz Supercomputing Center)
  • Guido Juckeland (Technical University of Dresden)
  • Michael Kluge (Technical University of Dresden)
  • Christian Iwainsky (Technical University of Darmstadt)
  • Jan Treibig (Erlangen Regional Computing Center)
  • Georg Hager (Erlangen Regional Computing Center)

The teams all get their share of attention now at Intel’s website. AFAIK they are still looking for submissions, so take your chance to get thrashed! Figuratively speaking, of course.

Blaze 2.1 has been released

Blaze is an open-source, high-performance C++ math library for dense and sparse arithmetic based on Smart Expression Templates. Just in time for ISC’14, Blaze 2.1 has been released. The focus of this latest release is the improvement and extension of the shared memory parallelization capabilities: In addition to OpenMP parallelization, Blaze 2.1 now also enables shared memory parallelization based on C++11 and Boost threads.  Additionally, the underlying architecture for the parallel execution and automatic, context-sensitive optimization has been significantly improved. This not only improves the performance but also allows for an efficient parallel handling of computations involving block-structured vectors and matrices.
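For a flavor of what user code looks like, here is a minimal sketch of a dense matrix-vector product with Blaze. This is my own toy example, not taken from the release notes; in particular, the header layout and the setNumThreads() call are assumptions based on the Blaze 2.x documentation:

  #include <iostream>
  #include <blaze/Math.h>

  int main()
  {
     // Ask the shared memory backend (OpenMP, C++11, or Boost threads,
     // depending on how Blaze was configured) for four worker threads.
     blaze::setNumThreads( 4UL );

     blaze::DynamicMatrix<double> A( 1000UL, 1000UL, 1.0 );  // 1000x1000 matrix of ones
     blaze::DynamicVector<double> x( 1000UL, 2.0 );          // vector of twos
     blaze::DynamicVector<double> y;

     // The smart expression templates fuse this into a single (possibly parallel) kernel.
     y = A * x;

     std::cout << "y[0] = " << y[0] << std::endl;            // expect 2000
     return 0;
  }

Whether the assignment actually executes in parallel is decided by the context-sensitive machinery mentioned above, so small operands may still be handled serially.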

All of the Blaze documentation and code can be downloaded at https://code.google.com/p/blaze-lib/.

Bill Gropp on “Engineering for Performance in High Performance Computing”

At the PASC14 conference in Zurich, Bill Gropp gave a great talk on Performance Engineering, the way it should be done. 100% along the lines of what we try to teach people in our tutorials! https://www.youtube.com/watch?v=sadfSARXSC0