Why does
printf("is 0.5 <= 0.49999999999999999 ? %s", 0.5 <= 0.49999999999999999 ? "true" : "false");
print
is 0.5 <= 0.49999999999999999 ? true
*It's not homework, I'm just curious.
The best way to answer your question is by changing the line of code slightly:
printf("is %.16e <= %.16e ? %s", 0.5, 0.49999999999999999, 0.5 <= 0.49999999999999999 ? "true":"false" );
Compile and run this line, and you will see that 0.49999999999999999 is rounded to the nearest representable double, which is the same double as the one chosen for 0.5 (and indeed represents the exact value 1/2). This explains why one is less than or equal to the other: they are two different notations for the same thing.
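For reference, here is a complete, compilable version of that diagnostic. The output in the comment is what you can expect on any platform using IEEE 754 double precision (which is virtually all of them):

#include <stdio.h>

int main(void)
{
    /* Print the values the compiler actually uses, not the source text. */
    printf("is %.16e <= %.16e ? %s\n",
           0.5, 0.49999999999999999,
           0.5 <= 0.49999999999999999 ? "true" : "false");
    /* Prints: is 5.0000000000000000e-01 <= 5.0000000000000000e-01 ? true */
    return 0;
}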
Note: I chose the format %.16e because it has the property of never printing the same thing for two distinct double-precision floating-point numbers. Experts prefer to use the hexadecimal format %a. Not only is hexadecimal more compact, but using it makes it immediately obvious that 0.5 is represented exactly (it is 0x0.8p0 or 0x1.0p-1), and that some other decimal numbers, such as 0.1, aren't (in hexadecimal the digits repeat and use up all of the significand). In short, using hexadecimal for inputting and printing floating-point numbers will instantly make you a pro.