I have two code snippets that produce different results. I am using the TDM-GCC 4.9.2 compiler, 32-bit version
(size of int is 4 bytes and the minimum value of float is -3.4e38).
Code 1:
#include <stdio.h>
int main(void) {
    int x = 2.999999999999999;   // 15 '9's after the decimal point
    printf("%d\n", x);
}
Output:
2
Code 2:
#include <stdio.h>
int main(void) {
    int x = 2.9999999999999999;  // 16 '9's after the decimal point
    printf("%d\n", x);
}
Output:
3
Why is the implicit conversion different in these two cases?
Is it due to some overflow in the real constant I specified, and if so, how does it happen?
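To show what I am observing, here is a minimal check (assuming the literals are stored as IEEE-754 doubles, which is what my compiler appears to use) that prints the two constants with enough digits to reveal their stored values:

#include <stdio.h>

int main(void) {
    // Print the two constants with 17 significant digits, enough to
    // distinguish any two distinct double values, so we can see whether
    // the 16-'9' constant has already been rounded to exactly 3.0
    // before the conversion to int takes place.
    printf("%.17g\n", 2.999999999999999);   // 15 '9's
    printf("%.17g\n", 2.9999999999999999);  // 16 '9's
}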