For anyone who has ever wondered why floating-point results for the same numerical problem vary across architectures, compilers, compiler options, and parallel program runs, this paper may be of interest:
David Monniaux, The pitfalls of verifying floating-point computations. arXiv:cs/0701192v5
The first sections in particular form a nice introduction to the things you always wanted to know about floating point but never dared to ask…
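As a small taste of one root cause the paper discusses: floating-point addition is not associative, so summing the same values in a different order (as a parallel reduction might) can change the result. A minimal C sketch, using nothing beyond standard IEEE 754 doubles:

```c
#include <stdio.h>

int main(void) {
    /* Classic non-associativity example: 1e16 swallows the 1.0
       in one grouping but not in the other. */
    double a = 1e16, b = -1e16, c = 1.0;
    printf("(a + b) + c = %.17g\n", (a + b) + c); /* prints 1 */
    printf("a + (b + c) = %.17g\n", a + (b + c)); /* prints 0: b + c
        rounds back to -1e16 in double precision */
    return 0;
}
```

On top of the ordering issue, a compiler that keeps intermediates in x87 80-bit registers may evaluate the second expression to 1 instead of 0, since -1e16 + 1 is exactly representable in extended precision — which is exactly the kind of pitfall Monniaux dissects.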