The following code is terrible, but it was encountered in a production situation. It was solved by doing something less insane, but I can't work out why the value remains constant. FWIW, this arbitrarily large value was taken from a timestamp.
#include <stdio.h>

int main(void)
{
    float wtf = 466056.468750;
    while (wtf > .01)
    {
        wtf -= .01;
        /* other operations here */
        printf("wtf = %f\n", wtf);
    }
    return 0;
}
When the program is run, the output produced is:
wtf = 466056.468750
wtf = 466056.468750
wtf = 466056.468750
wtf = 466056.468750
wtf = 466056.468750
wtf = 466056.468750
wtf = 466056.468750
wtf = 466056.468750
wtf = 466056.468750
wtf = 466056.468750
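To take the loop out of the picture, a single decrement can be checked on its own. This is just a small standalone sketch (not the production code), starting from the same value:

#include <stdio.h>

int main(void)
{
    float wtf = 466056.468750f;
    float after = wtf - .01;    /* subtraction happens in double, then rounds back to float on assignment */
    printf("before: %f\n", wtf);
    printf("after : %f\n", after);
    printf("equal : %d\n", after == wtf);    /* prints 1: the stored float is unchanged */
    return 0;
}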
When debugging with gdb, I can see that an appropriate value is computed for the expression wtf - .01, but it just doesn't seem to persist.
My question is: why isn't the decremented value stored in the variable?
In gdb, the value of the operation is printed as follows:
10 printf("wtf = %f\n", wtf);
(gdb) p wtf
$1 = 466056.469
(gdb) p wtf - .01
$2 = 466056.45874999999
(gdb) n
Whilst there is a clear change in precision, the value 466056.45874999999 matches neither 466056.469 nor 466056.468750 (the value printed to the console).
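For reference, the gap between wtf and the next representable float at this magnitude can be inspected with nextafterf from <math.h>. This is just an illustrative sketch (it may need -lm when linking on some systems):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float wtf = 466056.468750f;
    /* One ULP at this magnitude: the distance to the next representable float. */
    float gap = nextafterf(wtf, INFINITY) - wtf;
    printf("wtf = %f\n", wtf);
    printf("gap = %f\n", gap);    /* 0.031250 with IEEE-754 single precision */
    return 0;
}

Assuming IEEE-754 single precision, that gap comes out as 0.031250, and 0.01 is less than half of it, so the subtraction rounds straight back to the same float when the result is stored.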