Georg Hager's Blog

Random thoughts on High Performance Computing


Stepanov test faster than light?

If you program in C++ and care about performance, you have probably heard about the Stepanov abstraction benchmark [1]. It is a simple sum reduction that adds 2000 double-precision floating-point numbers using 13 code variants. The variants are successively harder for the compiler to see through because they add layers upon layers of abstraction. The first variant (i.e., the baseline) is plain C code and looks like this:

// original source of baseline sum reduction
void test0(double* first, double* last) {
  start_timer();
  for(int i = 0; i < iterations; ++i) {
    double result = 0;
    for (int n = 0; n < last - first; ++n) result += first[n];
    check(result);
  }
  result_times[current_test++] = timer();
}

It is quite easy to figure out how fast this code can possibly run on a modern CPU core. There is one LOAD and one ADD in the inner loop, and there is a loop-carried dependency due to the single accumulation variable result. If the compiler adheres to the language standard it cannot reorder the operations, i.e., modulo variable expansion to overlap the stalls in the ADD pipeline is ruled out. Thus, on a moderately modern Intel design each iteration will take as many cycles as there are stages in the ADD pipeline. All current Intel CPUs have an ADD pipeline of depth three, so the expected performance will be one third of the clock speed in GFlop/s.
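In formula form, with clock speed $f$ and ADD pipeline depth $d_\mathrm{ADD}$, and anticipating the 2.2 GHz clock speed of the test machine used below:

$$P_\mathrm{std} = \frac{f}{d_\mathrm{ADD}} = \frac{2.2\ \mathrm{GHz}}{3\ \mathrm{cy}} \approx 733\ \mathrm{MFlop/s},$$

since exactly one ADD (i.e., one flop) can retire every $d_\mathrm{ADD}$ cycles on a single dependency chain.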

If we allow some deviation from the language standard, especially unsafe math optimizations, then the performance may be much higher, though. Modulo variable expansion (unrolling the loop by at least three and using three different result variables) can overlap several dependency chains and fill the bubbles in the ADD pipelines if no other bottlenecks show up. Since modern Intel CPUs can do at least one LOAD per cycle, this will boost the performance to one ADD per cycle. On top of that, the compiler can do another modulo variable expansion for SIMD vectorization. E.g., with AVX four partial results can be computed in parallel in a 32-byte register. This gives us another factor of four.
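To make the mechanism concrete, here is a minimal sketch of three-way modulo variable expansion in source form (my own illustration, not part of the benchmark; the function name sum3 is made up, and a remainder loop is omitted by assuming the length is a multiple of three):

// sketch: three-way modulo variable expansion of the sum reduction
double sum3(const double* a, long n) {
  double r0 = 0.0, r1 = 0.0, r2 = 0.0; // three independent dependency chains
  for (long i = 0; i < n; i += 3) {    // keeps the 3-stage ADD pipeline full
    r0 += a[i];
    r1 += a[i+1];
    r2 += a[i+2];
  }
  return r0 + r1 + r2; // changed summation order: requires unsafe math
}

SIMD vectorization applies the same idea once more, with each partial sum becoming a register that holds four partial results.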

Original baseline assembly code

-O3 -march=native:

  vxorpd %xmm0, %xmm0, %xmm0
.L17:
  vaddsd (%rax), %xmm0, %xmm0
  addq $8, %rax
  cmpq %rbx, %rax
  jne .L17

-O3 -ffast-math -march=native:

  vxorpd %xmm1, %xmm1, %xmm1
.L26:
  addq $1, %rcx
  vaddpd (%rsi), %ymm1, %ymm1
  addq $32, %rsi
  cmpq %rcx, %r13
  ja .L26
  vhaddpd %ymm1, %ymm1, %ymm1
  vperm2f128 $1, %ymm1, %ymm1, %ymm3
  vaddpd %ymm3, %ymm1, %ymm1
  vaddsd %xmm1, %xmm0, %xmm0

Now let us put these models to the test. We use an Intel Xeon E5-2660v2 “Ivy Bridge” running at a fixed clock speed of 2.2 GHz (later Intel models can execute more than four flops per cycle due to their two FMA units). On this CPU the Stepanov peak performance is 8.8 GFlop/s for the optimal code, 2.93 GFlop/s with vectorization but no further unrolling, 2.2 GFlop/s with (at least three-way) modulo unrolling but no SIMD, and 733 MFlop/s for standard-compliant code. GCC 6.1.0 was used for all tests, and only the baseline (i.e., C) version was run.
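All four limits follow from one formula, $P = f \cdot W \cdot F$, with SIMD width $W$ (4 with AVX, 1 scalar) and pipeline fill factor $F$ (1 with modulo unrolling, $1/3$ for a single dependency chain):

\begin{align*}
P_\mathrm{SIMD+MVE} &= 2.2 \times 4 \times 1 = 8.8\ \mathrm{GFlop/s}\\
P_\mathrm{SIMD} &= 2.2 \times 4 \times \tfrac{1}{3} \approx 2.93\ \mathrm{GFlop/s}\\
P_\mathrm{MVE} &= 2.2 \times 1 \times 1 = 2.2\ \mathrm{GFlop/s}\\
P_\mathrm{std} &= 2.2 \times 1 \times \tfrac{1}{3} \approx 0.733\ \mathrm{GFlop/s}
\end{align*}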
Manual assembly code inspection shows that the GCC compiler does not vectorize or unroll the loop unless -ffast-math allows reordering of arithmetic expressions. Even in this case only SIMD vectorization is done but no further modulo unrolling, which means that the 3-stage ADD pipeline is the principal bottleneck in both cases. The (somewhat cleaned) assembly code of the inner loop for both versions is shown above. No surprises here; the vectorized version needs a horizontal reduction across the ymm1 register after the main loop, of course (last four instructions).

Original baseline code performance

g++ options                    | Measured [MFlop/s] | Expected [MFlop/s]
-O3 -march=native              |             737.46 |             733.33
-O3 -ffast-math -march=native  |             2975.2 |             2933.3

In defiance of my usual rant I give the performance measurements with five significant digits; you will see why in a moment. I also selected the fastest among several measurements, because we want to compare the highest measured performance with the theoretically achievable performance; statistical variations do not matter here. The performance values are quite close to the theoretical values, but they exceed them slightly: by 0.5% for the standard-compliant version and by 1.3% for the vectorized one. In performance modeling at large, such good agreement between measurement and model would be considered a success. However, the circumstances are different here: the theoretical performance numbers are absolute upper limits (think “Roofline”)! The ADD pipeline depth is 3 cycles, not 2.96 cycles, after all. So why is the baseline version of the Stepanov test faster than light? Can the Intel CPU by some secret magic defy the laws of physics? Is the compiler smarter than we think?

A first guess in such cases is usually “measuring error,” but this was ruled out: the clock speed was measured with likwid-perfctr to be 2.2 GHz to very high precision, and longer measurement times (obtained by increasing the number of outer iterations) do not change anything. Since the assembly code looks reasonable, the only conclusion left is that the dependency chain on the target register, while completely intact within the inner loop, gets interrupted between iterations of the outer loop because the register is assigned a new value there. The next run of the inner loop can thus start before the previous run has ended, leading to overlap. A simple test supports this assumption: if we cut the array size in half (from 2000 to 1000), the relative deviation doubles; if we reduce it further to 500, the deviation doubles again. This strongly indicates an overlap effect, i.e., an absolute runtime reduction per outer iteration that is independent of the loop length.
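A one-line model captures this: if every outer iteration is shortened by a constant overlap of $c$ cycles (a hypothetical parameter), while one run of the inner loop over $N$ elements takes about $d_\mathrm{ADD} \cdot N$ cycles, then

$$\frac{\Delta P}{P} \approx \frac{c}{d_\mathrm{ADD} \cdot N} \propto \frac{1}{N},$$

so halving $N$ doubles the relative deviation, exactly as observed.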

In order to get a benchmark that stays within the light speed limit, we modify the code so that the result is only initialized once, before the outer loop:

// modified code with intact (?) dependency chain
void test0(double* first, double* last) {
  start_timer();
  double result = 0; // moved outside
  for(int i = 0; i < iterations; ++i) {
    for (int n = 0; n < last - first; ++n) result += first[n];
    if(result<0.) check(result);
  }
  result_times[current_test++] = timer();
}

The result check is masked out since it would fail now, and the branch due to the if condition can be predicted with 100% accuracy. As expected, the performance of the non-SIMD code now falls below the theoretical maximum. However, the SIMD code is still faster than light.

Modified baseline code performance

g++ options                    | Measured [MFlop/s] | Expected [MFlop/s]
-O3 -march=native              |             733.14 |             733.33
-O3 -ffast-math -march=native  |             2985.1 |             2933.3

How is this possible? Well, the dependency chain is doomed as soon as SIMD vectorization is done, and the assembly code of the modified version is very similar to the original one. The horizontal sum over the ymm1 register puts the final result into the lowest slot of ymm0, so that ymm1 can be initialized with zero for another run of the inner loop. From a dependency point of view there is no difference between the two versions. Accumulation into another register is ruled out for the standard-conforming code because it would alter the order of operations. Once this requirement has been dropped, the compiler is free to choose any order. This is why the -ffast-math option makes such a difference: only the standard-conforming code is required to maintain an unbroken dependency chain.

Of course, if the g++ compiler had the guts to add another layer of modulo unrolling on top of SIMD (this is what the Intel V16 compiler does here), the theoretical performance limit would be ADD peak (four additions per cycle, or 8.8 GFlop/s). Such a code must of course stay within the limit, and indeed the best Intel-compiled code ends up at 93% of peak.
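For illustration, this is a hand-written sketch of what SIMD vectorization plus four-way modulo unrolling could look like with AVX intrinsics (my own code, not the Intel compiler's actual output; sum_avx4 is a made-up name, and remainder handling is omitted by assuming the length is a multiple of 16):

#include <immintrin.h>

// sketch: SIMD (AVX) plus 4-way modulo variable expansion
double sum_avx4(const double* a, long n) {
  __m256d s0 = _mm256_setzero_pd(), s1 = _mm256_setzero_pd();
  __m256d s2 = _mm256_setzero_pd(), s3 = _mm256_setzero_pd();
  for (long i = 0; i < n; i += 16) { // four independent vaddpd chains
    s0 = _mm256_add_pd(s0, _mm256_loadu_pd(a + i));
    s1 = _mm256_add_pd(s1, _mm256_loadu_pd(a + i + 4));
    s2 = _mm256_add_pd(s2, _mm256_loadu_pd(a + i + 8));
    s3 = _mm256_add_pd(s3, _mm256_loadu_pd(a + i + 12));
  }
  // horizontal reduction, needed only once after the loop
  __m256d s = _mm256_add_pd(_mm256_add_pd(s0, s1), _mm256_add_pd(s2, s3));
  double t[4];
  _mm256_storeu_pd(t, s);
  return (t[0] + t[1]) + (t[2] + t[3]);
}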

Note that this is all not meant to be a criticism of the abstraction benchmark; I refuse to embark on a discussion of religious dimensions. It just happened to be the version of the sum reduction that I investigated closely enough to notice a performance level 1.3% faster than “the speed of light.”

[1] http://www.open-std.org/jtc1/sc22/wg21/docs/D_3.cpp

 

Blaze 2.1 has been released

Blaze is an open-source, high-performance C++ math library for dense and sparse arithmetic based on Smart Expression Templates. Just in time for ISC’14, Blaze 2.1 has been released. The focus of this latest release is the improvement and extension of the shared memory parallelization capabilities: In addition to OpenMP parallelization, Blaze 2.1 now also enables shared memory parallelization based on C++11 and Boost threads. Additionally, the underlying architecture for the parallel execution and automatic, context-sensitive optimization has been significantly improved. This not only improves the performance but also allows for an efficient parallel handling of computations involving block-structured vectors and matrices.

All of the Blaze documentation and code can be downloaded at https://code.google.com/p/blaze-lib/.

Restricting member function calls by numeric template parameters

Thanks to Johannes for this interesting problem.

Let’s say you have a class template B where T is an arbitrary type and C is an integer template argument:

template <class T, int C> class B;

B has two member functions with the same name:

template <class T, int C> void B<T,C>::member(T _t);
template <class T, int C> void B<T,C>::member(T _t, int _i);

How do you make sure that the first member (the one with just a single argument) can only be called for instances of the class template with C==1? This is supposed to happen at compile time (runtime would be easy, of course).

One could (partially) specialize the whole class for C==1, but that generates a whole lot of code bloat. Another solution would be to have a base class with only the two-argument member and a derived class (inheriting from it) that implements the single-argument member. This is also unsatisfactory because the derived class must have a name different from the original one.

A more elegant solution is a private class template, nested inside B, that gets instantiated only when the single-argument member is called:

template <class T, int C> class B {
  template <int U> class BX {
  public:
    BX(B<T,U>* p) {} // constructor accepts a B<T,U>*, nothing else
  };
public:
  B();
  // no problem here
  void member(T _x, int _i);
  // only valid if C==1
  void member(T _x) {
    BX<2-C> bx(this); // compiles only if 2-C == C, i.e., C == 1
    ...
  }
  ...
};

Only if C==1 will a call to the single-argument member compile: BX<2-C> gets instantiated with this, a pointer to B<T,C>, but its constructor only accepts a B<T,2-C>*, and the two types match exactly when 2-C == C, i.e., C==1. If C!=1, the compiler spits out an error message saying that it can’t find a constructor with the appropriate argument. In this special case we could just have used BX<1>, but I wanted to show that any simple integer expression will do (see below).
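A quick usage example (hypothetical instantiations, assuming the elided parts of B are filled in):

B<double,1> b1;
b1.member(3.14);    // OK: BX<2-1> == BX<1> accepts a B<double,1>*
b1.member(3.14, 0); // OK: the two-argument member always works

B<double,3> b3;
b3.member(3.14, 0); // OK
b3.member(3.14);    // compile error: BX<-1> wants a B<double,-1>*,
                    //                not a B<double,3>*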

Admittedly, the error message is a little clumsy, but it does the job. This example can easily be generalized – remember that the ternary operator will also be evaluated at compile time. All you have to do is provide a function of C that is equal to C for all permitted values of C, and different from C otherwise.
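For example, to permit the single-argument member for C==1 or C==2 only, a guard expression that is the identity on the permitted values does the job (a sketch; the class name A is made up to keep it self-contained):

// sketch: single-argument member allowed only for C==1 or C==2
template <class T, int C> class A {
  template <int U> class AX {
  public:
    AX(A<T,U>* p) {}
  };
public:
  void member(T _x, int _i) {}
  void member(T _x) {
    // equals C exactly for the permitted values, C+1 (!= C) otherwise
    AX<((C==1 || C==2) ? C : C+1)> ax(this);
  }
};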

I’ve checked that this trick works using the Intel 10.1 and GNU 4.1.3 compilers.