I would like to better understand floating-point values and the imprecision associated with them. Below are two snippets that differ only slightly.
Snippet 1:
#include <stdio.h>

int main(void)
{
    float a = 12.59;
    printf("%.100f\n", a);
}
Output:
12.5900001525878906250000000000000000000000000000000000000000000000000000000000000000000000000000000000
Snippet 2:
#include <stdio.h>

int main(void)
{
    printf("%.100f\n", 12.59);
}
Output:
12.589999999999999857891452847979962825775146484375000000000000000000000000000000000000000000000000000
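To compare the two cases side by side, I also tried a combined version. This sketch assumes my understanding is right that an unsuffixed literal such as 12.59 is a double in C, and that printf promotes a float argument to double:

#include <stdio.h>

int main(void)
{
    float  a = 12.59;   /* the double literal is converted (rounded) to float */
    double b = 12.59;   /* the double literal is stored as a double */

    /* %f expects a double, so a is promoted back to double here */
    printf("%.60f\n", a);
    printf("%.60f\n", b);
    return 0;
}

On my machine the two printed lines match the outputs of Snippet 1 and Snippet 2 above, respectively.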
Why is there a difference between the two outputs? I'm unable to understand the catch.