Although the failure of ANSI C to define a form of variable-argument declaration that can cleanly handle an extended-precision long double type whose format differs from double has led to the type's being effectively deprecated on many platforms (unfortunate, IMHO, since it was a good type for use not only on systems with x87 coprocessors but also on systems with no FPU), the only sane way for a system with proper extended-precision types to handle a statement like:
long double a = 0.1;
is to have the 0.1 numeric literal start life as a long double equal to 14,757,395,258,967,641,293/147,573,952,589,676,412,928; it would be absurd to have that statement set a to 7,205,759,403,792,794/72,057,594,037,927,936 (roughly 0.10000000000000000555, the value of (double)0.1).
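To make the two candidate values concrete, here is a minimal sketch, assuming an x87-style 80-bit long double and an IEEE-754 double (on platforms where long double is just double, both lines print the same value); since standard C treats an unsuffixed 0.1 as a double, the L suffix is used here to obtain the extended-precision rounding described above:

    #include <stdio.h>

    int main(void)
    {
        long double as_extended = 0.1L;        /* rounded once, directly to long double */
        long double via_double  = (double)0.1; /* rounded to double first, then widened */
        printf("0.1L        = %.25Lf\n", as_extended); /* the 14,757,...,293 / 2^67 value above  */
        printf("(double)0.1 = %.25Lf\n", via_double);  /* roughly 0.10000000000000000555, as above */
        return 0;
    }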
There might arguably be a few cases where having a numeric literal start life as a long double prior to getting down-converted would cause it to yield a different value from what it would have if it had started life as a double or float (e.g. the closest float to 9007199791611905.0 is 9007200328482816, which is 536870911 above the requested value, but (float)(double)9007199791611905.0 yields 9007199254740992, which is 536870913 below it). Of course, if one wants the float value 9007200328482816.0f, one should probably use a decimal representation that is closer to what one really wants.
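That double-rounding effect can be observed by comparing a literal rounded straight to float with one rounded through double first; a minimal sketch, assuming IEEE-754 binary32/binary64 with round-to-nearest-even:

    #include <stdio.h>

    int main(void)
    {
        float direct     = 9007199791611905.0f;       /* one rounding: decimal -> float            */
        float via_double = (float)9007199791611905.0; /* two roundings: decimal -> double -> float */
        printf("direct:     %.1f\n", direct);         /* expect 9007200328482816.0 */
        printf("via double: %.1f\n", via_double);     /* expect 9007199254740992.0 */
        return 0;
    }

The difference arises because 9007199791611905 (i.e. 2^53 + 2^29 + 1) lies just above the midpoint between the two nearest floats, so it rounds up; rounding it to a double first lands it exactly on that midpoint (2^53 + 2^29), which ties-to-even then resolves downward to 2^53.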