
I would like to better understand floating-point values and the imprecision associated with them.

The following are two snippets with a slight modification between them.

Snippet 1

#include <stdio.h>

int main(void)
{
   float a = 12.59;
   printf("%.100f\n", a);
}

Output:

12.5900001525878906250000000000000000000000000000000000000000000000000000000000000000000000000000000000

Snippet 2:

#include <stdio.h>

int main(void)
{
   printf("%.100f\n", 12.59);
}

Output:

12.589999999999999857891452847979962825775146484375000000000000000000000000000000000000000000000000000

Why is there a difference between the two outputs? I'm unable to understand the catch.

Hells Guardian

3 Answers


In the first case you defined the variable as a float, and in the second case you passed the number directly.

The compiler treats an unsuffixed constant such as 12.59 as a double, not a float.

So I think the difference comes from the different precision of float and double.
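
As a small sketch of that point (not part of the original answer), storing the same constant in a float and in a double and printing both shows the two values from the question:

#include <stdio.h>

int main(void)
{
    float  f = 12.59;   /* the double constant 12.59 is rounded to float here */
    double d = 12.59;   /* kept at full double precision */

    printf("%.30f\n", f);   /* prints the float value, promoted to double for printf */
    printf("%.30f\n", d);   /* prints the double value */
    return 0;
}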

Nutan

To get consistent behaviour you can explicitly use a float literal (the f suffix makes the constant a float instead of a double):

printf("%.100f\n", 12.59f);
kergma

In Snippet 1, the double constant 12.59 is converted to a float when it is stored in a, and that conversion rounds the value to the nearest representable float. When a is then passed to printf it is promoted back to a double, but the lost precision is not recovered.

In Snippet 2, no such conversion happens; the constant is a double and is printed directly as a double.

To understand, try running the snippets below:

#include <stdio.h>

int main(void) {
    double a = 12.59;
    printf("%.100f\n", a);  
    return 0;
}

and

#include <stdio.h>

int main(void) {
    float a = 12.59;
    printf("%.100f\n", (double)a);  /* converting back to double does not restore the lost digits */
    return 0;
}

Refer to this for more information: How does printf and co differentiate between float and double
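
Another way to see where the rounding happens (a small sketch, not from the linked question): rounding the double constant to float and widening it back to double reproduces Snippet 1's output, while the plain double constant reproduces Snippet 2's:

#include <stdio.h>

int main(void)
{
    printf("%.100f\n", (double)(float)12.59);  /* round to float, widen back: Snippet 1's value */
    printf("%.100f\n", 12.59);                 /* plain double constant: Snippet 2's value */
    return 0;
}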

Fadhil Abubaker