
I have this code:

#include <stdio.h>

int main(void)
{
    float c = 100.001000;
    printf("%f\n",c);
    return 0;
}

But the result is:

100.000999

Why does this happen? Where is the rule for this result in the C standard?

    In binary floating point, there's no such number as exactly 100.001. (Just as in decimal floating point, there's no such number as exactly 1/3.) – Steve Summit May 25 '18 at 10:52
  • As the constant `100.001000` is not exactly representable as a `double`, the closest alternative is used: `100.00100000000000477...`. Yet this `double` constant fails to store exactly as a `float`. The closest is `100.00099945068359375`, which when printed to 6 places after the `.` rounds, as text, to `100.000999`. – chux - Reinstate Monica May 25 '18 at 11:07
