I am going through the code of a C application and find that dividing a float by an integer produces a result with unexpected extra digits, and I don't understand why. For example, dividing the floating-point number 12926.0 by the integer 100 should give 129.26, but instead I get 129.259995.
Below is a simplified version of the actual code.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i = 12926;
    float k = 0.0f;
    k = (float) i / 100;
    printf("k <%f>\n", k);   /* prints k <129.259995> */
    return 0;
}
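Out of curiosity I also put together a small comparison (not part of the real application, just a sketch with made-up variable names) that stores the same division in a float and in a double. If I understand the default %f formatting correctly, the double version should print 129.260000 while the float version prints 129.259995:

#include <stdio.h>

int main(void)
{
    int i = 12926;
    float  kf = (float) i / 100;    /* I get 129.259995 here */
    double kd = (double) i / 100;   /* I expect 129.260000 here, since double carries more precision */

    printf("float : <%f>\n", kf);
    printf("double: <%f>\n", kd);
    return 0;
}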
I have searched for explanations and tried evaluating the values and expressions while debugging in GDB:
(gdb) print 12926.0 / 100.0
$39 = 129.25999999999999
(gdb) print (float) i / 100
$40 = 129.259995
(gdb) print i / 100
$41 = 129
I am flummoxed as to why C shows the result as 129.259995 instead of 129.26. Is there a way to control the precision to get 129.26 as the result?
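For what it's worth, I can limit the number of digits printf displays, which does show 129.26, but I am not sure whether that is the right fix or just hides whatever is going on underneath. A minimal sketch of what I mean (again, not the real code):

#include <stdio.h>

int main(void)
{
    int i = 12926;
    float k = (float) i / 100;

    printf("k <%.2f>\n", k);   /* "%.2f" rounds the output to two decimals: k <129.26> */
    printf("k <%.9f>\n", k);   /* extra digits show more of the value the float actually stores */
    return 0;
}

Mainly I want to understand why the stored value is not exactly 129.26 in the first place.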