
Python:

import numpy as np

machine_epsilon = np.finfo(float).eps
first_variant = 1 + machine_epsilon + machine_epsilon/2
second_variant = 1 + machine_epsilon/2 + machine_epsilon
print('%.20f' % first_variant)
print('%.20f' % second_variant)

C:

#include <float.h>
#include <stdio.h>

double eps2 = DBL_EPSILON;
printf("1 + eps/2 + eps %.20f\n", 1. + eps2 / 2. + eps2);
printf("1 + eps + eps/2 %.20f\n", 1. + eps2 + eps2 / 2.);

This printed 1.00000000000000044409 for first_variant and 1.00000000000000022204 for second_variant, i.e. the fractional part of the first is twice that of the second.

Who can explain this?

mkrieger1

4 Answers


Why is
1 + machine_epsilon + machine_epsilon/2
not equal to
1 + machine_epsilon/2 + machine_epsilon?

Rounding occurs at different intermediate values in the two expressions, leading to different sums.


Let us look at this in smaller steps and using "%a".

double eps2 = DBL_EPSILON;
printf("%-17s:%21a +  %11a = %21a\n", "1 + eps/2", //
    1.0, eps2 / 2.0, 1. + eps2 / 2.0);
printf("%-17s:%21a +  %11a = %21a\n", "(1 + eps/2) + eps", //
    1.0 + eps2 / 2, eps2, 1.0 + eps2 / 2.0 + eps2);

printf("%-17s:%21a +  %11a = %21a\n", "1 + eps", //
    1.0, eps2, 1. + eps2);
printf("%-17s:%21a +  %11a = %21a\n", "(1 + eps) + eps/2", //
    1. + eps2, eps2, 1.0 + eps2 + eps2 / 2.0);

Output

1 + eps/2        :               0x1p+0 +      0x1p-53 =                0x1p+0
(1 + eps/2) + eps:               0x1p+0 +      0x1p-52 =  0x1.0000000000001p+0
1 + eps          :               0x1p+0 +      0x1p-52 =  0x1.0000000000001p+0
(1 + eps) + eps/2: 0x1.0000000000001p+0 +      0x1p-52 =  0x1.0000000000002p+0

Consider 3 consecutive double values: A = 1.0, B = the next larger double (1.0 + eps), and the next one after that, C = 1.0 + 2*eps.

The exact 1 + eps/2 sum is halfway between A and B. With typical rounding (round to nearest, ties to even), the sum rounds down to A: A is even, since its least significant bit is 0, while B is odd.

The (1 + eps/2) + eps sum is then the same as 1 + eps, i.e. B.

The 1 + eps sum is B.

The exact (1 + eps) + eps/2 sum is halfway between B and C. With typical rounding (round to nearest, ties to even), the sum rounds up to C: C is even, since its least significant bit is 0, while B is odd.
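
The tie-to-even behaviour around these three consecutive doubles can be checked directly in Python as well (a quick sketch; `math.nextafter` needs Python 3.9+):

```python
import math

# Three consecutive doubles: A = 1.0, B = 1 + eps, C = 1 + 2*eps
A = 1.0
B = math.nextafter(A, 2.0)   # next representable double above 1.0
C = math.nextafter(B, 2.0)
eps = B - A                  # machine epsilon, 2**-52 (this subtraction is exact)

# A + eps/2 lies exactly halfway between A and B: ties-to-even picks A (even)
print(A + eps / 2 == A)      # True
# B + eps/2 lies exactly halfway between B and C: ties-to-even picks C (even)
print(B + eps / 2 == C)      # True
```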


Other possibilities exist if printf("%d\n", FLT_EVAL_METHOD); prints 2. In that case, all the C floating-point math here would use long double for intermediate results, and the final double results of the 1st and 2nd variants would then be expected to be the same.

chux - Reinstate Monica

Floating-point types, like double or float in C, have limited precision (e.g. 53 significand bits for an IEEE 754 binary64 double), so they cannot represent all real numbers, and the mathematical operations in which they are involved don't have all the properties expected of exact computation; in particular, addition is not associative.

The OP is trying to evaluate

1 + machine_epsilon + machine_epsilon/2

which is challenging, because

  • That number is one of those that cannot be exactly represented by a double (or a float), while both 1 + machine_epsilon and 1 + 2 * machine_epsilon can.

  • When an expression is evaluated, even if the hardware can perform the calculation and evaluate intermediate values at higher precision, the stored result has to be rounded to one of the representable values. There are many types of rounding to nearest, like ties away from zero or ties to even.

In the posted examples, 1 + machine_epsilon/2 is rounded to 1, and then adding machine_epsilon gives 1 + machine_epsilon. In contrast, 1 + machine_epsilon + machine_epsilon/2 evaluates 1 + machine_epsilon first (which is exact), and the final addition of machine_epsilon/2 lands exactly halfway between two representable values, so ties-to-even rounds it up to 1 + 2 * machine_epsilon.
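
The same stepwise rounding can be observed from Python with `float.hex()`, which shows the hexadecimal significand much like C's `%a` (a small sketch along the lines of the steps above):

```python
import sys

eps = sys.float_info.epsilon   # 2**-52 for IEEE 754 binary64

a = 1 + eps / 2                # exact sum is halfway between 1 and 1+eps; rounds to 1
b = a + eps
print(a.hex(), '->', b.hex())  # 0x1.0000000000000p+0 -> 0x1.0000000000001p+0

c = 1 + eps                    # exactly representable
d = c + eps / 2                # exact sum is halfway between 1+eps and 1+2*eps; rounds up
print(c.hex(), '->', d.hex())  # 0x1.0000000000001p+0 -> 0x1.0000000000002p+0
```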

Bob__
  • This doesn't explain the difference between the results. According to your answer, the two should be the same. – Olaf Dietsche Sep 02 '21 at 08:49
  • @OlafDietsche Yes, and that's exactly what I get in C: https://godbolt.org/z/xva9s464b – Bob__ Sep 02 '21 at 08:58
  • but if you try double eps2 = DBL_EPSILON; you will get results like mine – Katya Kuznetsova Sep 02 '21 at 09:03
  • @KatyaKuznetsova Try to do the same calculation in multiple steps: https://godbolt.org/z/vTdaK1TYP – Bob__ Sep 02 '21 at 09:15
  • @Bob_ your example is wrong because the variables are float, but the constants are promoted to double, so all calculations are done in double and then rounded to float. In double, FLT_EPSILON is nothing special. – n. m. could be an AI Sep 06 '21 at 20:16
  • @n.1.8e9-where's-my-sharem. Good point. I felt the need to completely rephrase the answer. – Bob__ Sep 06 '21 at 21:25

There are limitations on floating-point arithmetic, which is based on the IEEE 754 standard. Since numbers have to be rounded to be represented, for example 100/3 prints as 33.333333333333336, which is not the exact real value. This is not specific to machine epsilon; consider this example:

print(1 + 1.05/2 + 1.05, 1 + 1.05 + 1.05/2)
>>> 2.575 2.5749999999999997
print(1 + 1.05/2 + 1.05, 1 + (1.05 + 1.05/2))
>>> 2.575 2.575

The second example gives the same output because (1.05 + 1.05/2) is evaluated first and then added to 1.

You are getting different output because of the order of evaluation, as the following shows:

a = 1 + machine_epsilon
print(a, a + machine_epsilon/2)
>>> 1.0000000000000002 1.0000000000000004
a = 1 + machine_epsilon/2
print(a, a + machine_epsilon)
>>> 1.0 1.0000000000000002

So, you should consider adding parentheses:

import numpy as np
machine_epsilon = np.finfo(float).eps
first_variant = 1 + machine_epsilon + machine_epsilon/2
second_variant = 1 + (machine_epsilon/2 + machine_epsilon)
print(first_variant, second_variant)
>>> 1.0000000000000004 1.0000000000000004

For more details, please refer, Is floating point math broken?

Rinshan Kolayil

One of the things you may have learned in school is that addition is commutative and associative: the order doesn't matter. In pure mathematics, a + b == b + a, and (a + b) + c == a + (b + c).

But one of the more surprising things about the finite-precision floating-point arithmetic we use on computers every day is that addition is not associative. Many times, the order does matter, and this is part of the explanation for the surprising result you saw.

For example, what sometimes happens is that when b is small, a + b ends up being equal to a — adding in b ends up making no significant difference at all. But then for some other slightly larger value c, a + c can end up being different from a, and then the value a + c is such that a + c + b is actually different from a + c.

And since in your case you have chosen both b and c near the machine's "epsilon", you've pretty much guaranteed that you'll be operating down in the region where surprising roundoff results like this are going to happen.
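
A minimal Python illustration of this absorption effect, with `a`, `b`, and `c` chosen to match the paragraph above:

```python
a = 1.0
b = 2**-53   # eps/2: too small to change a on its own
c = 2**-52   # eps: just big enough to matter

print(a + b == a)            # True: b is absorbed, a + b rounds back to a
print(a + c == a)            # False: a + c is the next double above a
print((a + c) + b == a + c)  # False: from 1+eps, adding eps/2 rounds up again
```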

I can't explain why you saw different results in C versus Python, though.

Steve Summit