If you program in C++ and care about performance, you have probably heard about the Stepanov abstraction benchmark [1]. It is a simple sum reduction that adds 2000 double-precision floating-point numbers using 13 code variants. The variants are successively harder for the compiler to see through because they add layer upon layer of abstraction. The first variant (i.e., the baseline) is plain C code and looks like this:
// original source of baseline sum reduction
void test0(double* first, double* last) {
  start_timer();
  for(int i = 0; i < iterations; ++i) {
    double result = 0;
    for (int n = 0; n < last - first; ++n) result += first[n];
    check(result);
  }
  result_times[current_test++] = timer();
}
It is quite easy to figure out how fast this code can possibly run on a modern CPU core. There is one LOAD and one ADD in the inner loop, and there is a loop-carried dependency due to the single accumulation variable result. If the compiler adheres to the language standard, it cannot reorder the operations, i.e., modulo variable expansion to overlap the stalls in the ADD pipeline is ruled out. Thus, on a decent processor, e.g., a moderately modern Intel design, each iteration will take as many cycles as there are stages in the ADD pipeline. All current Intel CPUs have an ADD pipeline of depth three, so the expected performance will be one third of the clock speed in GFlop/s.
If we allow some deviation from the language standard, especially unsafe math optimizations, the performance may be much higher, though. Modulo variable expansion (unrolling the loop by at least three and using three different result variables) can overlap several dependency chains and fill the bubbles in the ADD pipeline if no other bottlenecks show up. Since modern Intel CPUs can do at least one LOAD per cycle, this boosts the performance to one ADD per cycle. On top of that, the compiler can do another modulo variable expansion for SIMD vectorization: with AVX, e.g., four partial results can be computed in parallel in a 32-byte register. This gives us another factor of four.
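To make the idea concrete, here is a sketch of a four-way modulo variable expansion of the inner loop (my own illustration, not part of the benchmark sources; it assumes the array length is a multiple of four and, just like the compiler's SIMD code, it reassociates the sum):

// sketch: four-way modulo variable expansion (illustration only;
// assumes the length is a multiple of four)
double sum_unrolled(const double* first, const double* last) {
  double s0 = 0, s1 = 0, s2 = 0, s3 = 0;   // four independent dependency chains
  for (const double* p = first; p < last; p += 4) {
    s0 += p[0];                            // each chain still sees the full ADD
    s1 += p[1];                            // latency, but the four chains can
    s2 += p[2];                            // overlap in the pipeline
    s3 += p[3];
  }
  return (s0 + s1) + (s2 + s3);            // reassociated final reduction
}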
Original baseline assembly code

-O3 -march=native:

  vxorpd %xmm0, %xmm0, %xmm0
.L17:
  vaddsd (%rax), %xmm0, %xmm0
  addq $8, %rax
  cmpq %rbx, %rax
  jne .L17

-O3 -ffast-math -march=native:

  vxorpd %xmm1, %xmm1, %xmm1
.L26:
  addq $1, %rcx
  vaddpd (%rsi), %ymm1, %ymm1
  addq $32, %rsi
  cmpq %rcx, %r13
  ja .L26
  vhaddpd %ymm1, %ymm1, %ymm1
  vperm2f128 $1, %ymm1, %ymm1, %ymm3
  vaddpd %ymm3, %ymm1, %ymm1
  vaddsd %xmm1, %xmm0, %xmm0
Now let us put these models to the test. We use an Intel Xeon E5-2660v2 “Ivy Bridge” running at a fixed clock speed of 2.2 GHz (later models can execute more than four flops per cycle due to their two FMA units). On this CPU the Stepanov peak performance is 8.8 GFlop/s for the optimal code, 2.93 GFlop/s with vectorization but no further unrolling, 2.2 GFlop/s with (at least three-way) modulo unrolling but no SIMD, and 733 MFlop/s for standard-compliant code. GCC 6.1.0 was used for all tests, and only the baseline (i.e., C) version was run.
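For reference, these four limits follow directly from the 2.2 GHz clock, the four-wide AVX registers, and the three-cycle ADD pipeline; written out, the back-of-the-envelope arithmetic is

\begin{align*}
P_{\mathrm{std}}         &= 2.2\,\mathrm{GHz} \times \tfrac{1\,\mathrm{add}}{3\,\mathrm{cy}} \approx 0.733\,\mathrm{GFlop/s},\\
P_{\mathrm{unroll}}      &= 2.2\,\mathrm{GHz} \times 1\,\mathrm{add/cy} = 2.2\,\mathrm{GFlop/s},\\
P_{\mathrm{SIMD}}        &= 2.2\,\mathrm{GHz} \times \tfrac{4\,\mathrm{adds}}{3\,\mathrm{cy}} \approx 2.93\,\mathrm{GFlop/s},\\
P_{\mathrm{SIMD+unroll}} &= 2.2\,\mathrm{GHz} \times 4\,\mathrm{adds/cy} = 8.8\,\mathrm{GFlop/s}.
\end{align*}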
Manual inspection of the assembly code shows that the GCC compiler neither vectorizes nor unrolls the loop unless -ffast-math allows reordering of arithmetic expressions. Even in this case only SIMD vectorization is done but no further modulo unrolling, which means that the 3-stage ADD pipeline is the principal bottleneck in both cases. The (somewhat cleaned) assembly code of the inner loop for both versions is shown above. No surprises here; the vectorized version needs a horizontal reduction across the ymm1 register after the main loop, of course (last four instructions).
Original baseline code performance

g++ options                    | Measured [MFlop/s] | Expected [MFlop/s]
-O3 -march=native              | 737.46             | 733.33
-O3 -ffast-math -march=native  | 2975.2             | 2933.3
In defiance of my usual rant I give the performance measurements with five significant digits; you will see why in a moment. I also selected the fastest among several measurements, because we want to compare the highest measured performance with the theoretically achievable performance. Statistical variations do not matter here. The performance values are quite close to the theoretical values, but they exceed them by about 0.6% and 1.4%, respectively. In performance modeling at large, such close agreement between measurement and model would be considered a success. However, the circumstances are different here. The theoretical performance numbers are absolute upper limits (think “Roofline”)! The ADD pipeline depth is not 2.96 cycles but 3 cycles after all. So why is the baseline version of the Stepanov test faster than light? Can the Intel CPU by some secret magic defy the laws of physics? Is the compiler smarter than we think?
A first guess in such cases is usually “measurement error,” but this was ruled out: The clock speed was verified with likwid-perfctr to be 2.2 GHz to very high precision, and longer measurement times (a larger number of outer iterations) do not change anything. Since the assembly code looks reasonable, the only conclusion left is that the dependency chain on the target register, which is completely intact within the inner loop, gets interrupted between iterations of the outer loop because the register is assigned a new value. The next run of the inner loop can thus start before the previous one has ended, leading to overlap. A simple test supports this assumption: If we cut the array size in half, the relative deviation doubles. If we halve it again to 500 elements, the deviation doubles once more. This strongly indicates an overlap effect (a constant absolute runtime reduction) that is independent of the actual loop length.
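This scaling is exactly what a constant overlap would produce: if the model predicts 3N cycles for one run of the inner loop over N elements, and each outer iteration saves a fixed number of cycles c by starting early (c is an assumed quantity here, not something that was measured), then the relative deviation is roughly

\[
\frac{T_{\mathrm{model}} - T_{\mathrm{meas}}}{T_{\mathrm{meas}}} \approx \frac{c}{3N - c} \approx \frac{c}{3N},
\]

which doubles whenever N is halved.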
In order to get a benchmark that stays within the light speed limit, we modify the code so that the result is only initialized once, before the outer loop:
// modified code with intact (?) dependency chain
void test0(double* first, double* last) {
  start_timer();
  double result = 0; // moved outside
  for(int i = 0; i < iterations; ++i) {
    for (int n = 0; n < last - first; ++n) result += first[n];
    if(result<0.) check(result);
  }
  result_times[current_test++] = timer();
}
The result check is masked out since it would fail now, and the branch due to the if condition can be predicted with 100% accuracy. As expected, the performance of the non-SIMD code now falls below the theoretical maximum. However, the SIMD code is still faster than light.
Modified baseline code performance

g++ options                    | Measured [MFlop/s] | Expected [MFlop/s]
-O3 -march=native              | 733.14             | 733.33
-O3 -ffast-math -march=native  | 2985.1             | 2933.3
How is this possible? Well, the dependency chain is doomed anyway once SIMD vectorization is done, and the assembly code of the modified version is very similar to the original one. The horizontal sum over the ymm1 register puts the final result into the lowest slot of ymm0, so that ymm1 can be initialized with zero for another run of the inner loop. From a dependency point of view there is no difference between the two versions. Accumulation into another register is ruled out for the standard-conforming code because it would alter the order of operations. Once this requirement has been dropped, the compiler is free to choose any order. This is why the -ffast-math option makes such a difference: Only the standard-conforming code is required to maintain an unbroken dependency chain.
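To see why the written order matters at all, consider this little toy program (my own example, not from the benchmark): floating-point addition is not associative, so reassociating a sum can change its value.

// toy example: reassociation changes the result of a floating-point sum
#include <iostream>

int main() {
  double a = 1.0e16, b = -1.0e16, c = 1.0;
  std::cout << (a + b) + c << std::endl;  // prints 1: the large terms cancel first
  std::cout << a + (b + c) << std::endl;  // prints 0: c is lost in rounding against b
  return 0;
}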
Of course, if the g++ compiler had the guts to add another layer of modulo unrolling on top of SIMD (this is what the Intel V16 compiler does here), the theoretical performance limit would be ADD peak (four additions per cycle, or 8.8 GFlop/s). Such code must then stay within the limit, and indeed the best Intel-compiled version ends up at 93% of peak.
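What that extra layer looks like in source code is sketched below with AVX intrinsics (again my own illustration under simplifying assumptions, not the Intel compiler's actual output; it assumes the length is a multiple of 16 doubles):

// sketch: modulo variable expansion on top of AVX SIMD (illustration only;
// assumes the length is a multiple of 16 doubles)
#include <immintrin.h>

double sum_avx_unrolled(const double* first, long n) {
  __m256d s0 = _mm256_setzero_pd(), s1 = _mm256_setzero_pd(),
          s2 = _mm256_setzero_pd(), s3 = _mm256_setzero_pd();
  for (long i = 0; i < n; i += 16) {       // four 4-wide partial sums per pass
    s0 = _mm256_add_pd(s0, _mm256_loadu_pd(first + i));
    s1 = _mm256_add_pd(s1, _mm256_loadu_pd(first + i + 4));
    s2 = _mm256_add_pd(s2, _mm256_loadu_pd(first + i + 8));
    s3 = _mm256_add_pd(s3, _mm256_loadu_pd(first + i + 12));
  }
  __m256d s = _mm256_add_pd(_mm256_add_pd(s0, s1), _mm256_add_pd(s2, s3));
  double tmp[4];                           // horizontal reduction at the end
  _mm256_storeu_pd(tmp, s);
  return (tmp[0] + tmp[1]) + (tmp[2] + tmp[3]);
}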
Note that none of this is meant as a criticism of the abstraction benchmark; I refuse to embark on a discussion of religious dimensions. It just happened to be the version of the sum reduction I have investigated closely enough to notice a performance level that is 1.4% faster than “the speed of light.”
[1] http://www.open-std.org/jtc1/sc22/wg21/docs/D_3.cpp