In this answer, somebody writes:
[..] most compilers won't optimize a + b + c + d to (a + b) + (c + d) (this is an optimization since the second expression can be pipelined better)
The original question was about how certain expressions involving float
values can or cannot be re-ordered due to the imprecision of floating-point arithmetic.
I'm more interested in the quoted part, though: why, say with unsigned int
values, would it be easier to generate code that exploits CPU pipelines when a+b+c+d
is rewritten as (a+b)+(c+d)?
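
For concreteness, this is how I picture the two evaluation orders (a minimal sketch in C with my own function and variable names, assuming unsigned int operands):

```c
// Left-to-right association: ((a + b) + c) + d.
// Each addition depends on the result of the previous one,
// so the three adds form a single serial dependency chain.
unsigned int sum_left_to_right(unsigned int a, unsigned int b,
                               unsigned int c, unsigned int d)
{
    return a + b + c + d;
}

// Pairwise association: (a + b) + (c + d).
// a+b and c+d do not depend on each other, so presumably
// they could be computed in parallel before the final add.
unsigned int sum_pairwise(unsigned int a, unsigned int b,
                          unsigned int c, unsigned int d)
{
    return (a + b) + (c + d);
}
```

Is the difference between these two dependency chains what the quoted answer is referring to?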