I am debugging code that implements an algorithm whose main loop terminates when a statement à la `s >= u || s <= l` is true, where `s`, `u` and `l` are `double`s that are updated in the main loop. In this example, all three variables are always between `0.5` and `1.5`. I am not including the code here, as it is not written by me and extracting a MWE is hard. I am puzzled by the code behaving differently on different architectures, and I'm hoping the clues below can help me narrow down the error in the algorithm.
Some floating-point rounding behavior seems to be the root cause of the bug. Here is what I have ascertained so far:
- The algorithm terminates correctly on all optimization levels on x86-64.
- The algorithm terminates correctly with `-O3` (other opt levels were not tried) on arm64, mips64 and ppc64.
- The algorithm terminates correctly with `-O0` on i686.
- The algorithm loops indefinitely with `-O1`, `-O2` and `-O3` on i686.
- Main point of question: In the cases when the algorithm loops indefinitely, it can be made to terminate correctly if `s` is printed (`std::cout << s << std::endl`) before it is compared to `l` and `u` (see the sketch after this list).
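Concretely, the only change in the variant that terminates is a print of `s` immediately before the comparison. Reusing the dummy `update` and variables from the sketch above, it looks something like this:

```cpp
// Same loop as in the sketch above, rewritten with an explicit break so the
// print can sit right before the comparison. Adding the std::cout line makes
// the i686 -O1/-O2/-O3 builds terminate; without it they loop forever.
while (true) {
    std::cout << s << std::endl;     // the only change
    if (s >= u || s <= l) break;     // the termination check from the question
    update(s, u, l);
}
```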
What kind of compiler optimizations could be relevant here?
All behaviors above were observed on a GNU/Linux system and reproduced with GCC 6.4, 7.3 and 8.1.