Try the following code:

#include <stdio.h>

unsigned char TEST_COMPILER_AS13 = 1.18 * 100;

int main(void) {
    printf("%d", TEST_COMPILER_AS13);
    return 0;
}
Compiled as C on https://www.codechef.com/ide, the result is 117.
If I replace 1.18 with 1.17, the result is 116; yet with 1.19, 1.20, 1.15, etc. the results are correct: 119, 120, 115.
Using a different online compiler, say http://codepad.org/, the results for 1.18 and 1.17 are fine, yet 1.13, 1.14, 1.15 give 112, 113, 114 respectively.
I'm scratching my head and I can't understand why this happens. Note: I have tried different compilers (DIAB, COSMIC, MinGW, etc.) and all show a similar issue. So what am I missing here, or how are these floating-point operations actually performed?
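To see where the truncation comes from, printing the intermediate products with extra digits might help (a minimal check, assuming IEEE-754 doubles; the exact digits shown may differ slightly by platform):

#include <stdio.h>

int main(void) {
    /* Print the double products with extra digits to see whether each
       lands just below or at/above the whole number before the
       narrowing conversion to unsigned char truncates it. */
    printf("1.17 * 100 = %.17f\n", 1.17 * 100);
    printf("1.18 * 100 = %.17f\n", 1.18 * 100);
    printf("1.19 * 100 = %.17f\n", 1.19 * 100);
    printf("1.20 * 100 = %.17f\n", 1.20 * 100);
    return 0;
}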
Note: to work around this, you can cast the expression to float, so the declaration becomes:

unsigned char TEST_COMPILER_AS13 = (float)(1.18 * 100);
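For comparison, an explicit round() at runtime would presumably do the same job more visibly (a sketch only; the file-scope initialiser keeps the cast form because it must be a constant expression, and linking may need -lm):

#include <math.h>
#include <stdio.h>

unsigned char TEST_COMPILER_AS13 = (float)(1.18 * 100);  /* constant expression required here */

int main(void) {
    /* Runtime alternative: round to nearest before the narrowing conversion. */
    unsigned char rounded = (unsigned char)round(1.18 * 100);
    printf("%d %d\n", TEST_COMPILER_AS13, rounded);
    return 0;
}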
I'm open to your answers; I really want to understand how this works. Why does it work for some numbers and not for others? Why do compilers differ in the way they handle it? Are there compiler options that would affect this behaviour?