I've got two doubles which I can guarantee are exactly equal to 150 decimal places, i.e. the following code:
printf("***current line time is %5.150lf\n", current_line->time);
printf("***time for comparison is %5.150lf\n", (last_stage_four_print_time + FIVE_MINUTES_IN_DAYS));
...prints:
***current line time is 39346.526736111096397507935762405395507812500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
***time for comparison is 39346.526736111096397507935762405395507812500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
FIVE_MINUTES_IN_DAYS is #defined, and current_line->time and last_stage_four_print_time are both doubles.
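For context, the relevant declarations look roughly like this (the names are real, but the macro value and struct layout are illustrative, not copied from my actual code):

#include <stdio.h>

/* Illustrative value: five minutes expressed as a fraction of a day. */
#define FIVE_MINUTES_IN_DAYS (5.0 / (24.0 * 60.0))

struct line {
    double time;   /* timestamp, in days */
};

struct line *current_line;
double last_stage_four_print_time;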
My problem is that the next line of my debugging code:
printf("if condition is %d\n", (current_line->time >= (last_stage_four_print_time + FIVE_MINUTES_IN_DAYS)));
returns the following:
if condition is 0
Can anyone tell me what's going on here? I am aware of the non-decimal/inexact nature of floats and doubles, but these are not subject to any error at all (the original figures have all been read with sscanf or #defined, and are all specified to 10 decimal places).
EDIT: My mistake was assuming that printf-ing the doubles accurately represented them in memory. That was wrong, because one value is being calculated on-the-fly, so it can be held in a wider intermediate register (e.g. an 80-bit x87 register) rather than rounded to a 64-bit double. Declaring (last_stage_four_print_time + FIVE_MINUTES_IN_DAYS) as a double threshold_time and using that instead fixed the problem. I will make sure to use an epsilon for my comparisons - I knew that was the way to go, I was just confused as to why these values, which I (incorrectly) thought looked the same, were apparently unequal.