In IEEE 754 floating point, it is possible that
a*(b-c) != a*b-a*c // a, b, c double
So expanding a product is not guaranteed to give the same result as the unexpanded form.
But what about this:
a*(b1+b2+...+bn) == a*b1+a*b2+...+a*bn // b1==b2==...==bn
When all the b are equal, is equivalence guaranteed (assuming no under-/overflow)? Does it make a difference whether the equality of the b values is known at compile time?
Edit:
It is not - see Eric Postpischil and Pascal Cuoq.
But perhaps this weaker assertion holds?:
(1.0/n)*(b1+b2+...+bn) <= 1.0
&& (1.0/n)*b1+(1.0/n)*b2+...+(1.0/n)*bn <= 1.0
// when all b <= 1.0 and n is an integral double but not a power of 2,
// so that 1.0/n is not exactly representable in base-2 floating point
Put simply: I wonder whether you can guarantee that the computed average of a data set does not exceed some bound that no single data value exceeds, regardless of how the average is computed (summing everything first and dividing once, or summing the values each already divided by n).
Edit2:
OK, the && doesn't hold. See Eric Postpischil and David Hammen:
average of nine values of 1.0 (bound 1.0) -> only the first condition holds; the second computed average exceeds the bound.
average of ten values of 1.0/3 (bound 1.0/3) -> only the second condition holds; the first computed average exceeds the bound.
Does the optimal method for computing an average then depend on the expected upper bound of the data set? Or also on its size n? Or does no universally optimal method exist?