I'm playing around with a PPC64 virtual machine emulated in QEMU, set up to mimic a POWER8 CPU. Here, the long double type differs from the 80-bit extended format that x86 uses for long double, and from what I can see it does not conform to IEEE754's float128 either, seeing as the C macro LDBL_MANT_DIG reports a 106-bit mantissa (vs. the 113-bit significand, 112 bits explicitly stored, that IEEE754 dictates for float128).
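(For reference, this is all I'm doing to query the format; nothing exotic, just the standard <cfloat> macro and its numeric_limits counterpart:)

#include <cfloat>
#include <iostream>
#include <limits>

int main()
{
    // Bits of precision in the long double significand; prints 106 on this VM
    std::cout << "LDBL_MANT_DIG = " << LDBL_MANT_DIG << '\n';
    std::cout << "digits        = " << std::numeric_limits<long double>::digits << '\n';
    return 0;
}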
Wikipedia says that an IEEE754 float128 should have a machine epsilon of around 1.93e-34, which is much smaller (i.e. better) than the 1.08e-19 of an 80-bit x86 long double.
When I try to measure the machine epsilon in this virtual machine, however, I get a rather surprising answer:
#include <iostream>

int main()
{
    // Halve eps until adding half of it to 1.0 no longer changes the value;
    // the final eps is then the machine epsilon of long double.
    long double eps = 1.0l;
    while (1.0l + 0.5l * eps != 1.0l)
        eps = 0.5l * eps;
    std::cout << eps << std::endl;
    return 0;
}
It outputs the following:
4.94066e-324
And I get the same result from LDBL_EPSILON and from std::numeric_limits<long double>::epsilon().
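Incidentally, that number looks suspiciously like the smallest subnormal double. A quick comparison (again just printing the standard values side by side) shows they come out the same:

#include <cfloat>
#include <iostream>
#include <limits>

int main()
{
    // Both of these print 4.94066e-324 on this VM...
    std::cout << LDBL_EPSILON << '\n';
    std::cout << std::numeric_limits<long double>::epsilon() << '\n';
    // ...and so does the smallest subnormal *double* (denorm_min)
    std::cout << std::numeric_limits<double>::denorm_min() << '\n';
    return 0;
}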
That would make it vastly more precise than it's supposed to be (an epsilon near 1e-324 rather than the expected ~1.93e-34), which logic tells me should be impossible. Seeing as the mantissa is exactly 2×53 bits (twice that of IEEE754's float64), I assume it might be using a double-double representation, which Wikipedia also says should have less precision around small numbers than an IEEE754 float128.
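To sanity-check the double-double guess, I tried something along these lines: if the format really is a pair of doubles (a high part and a low part), then 1.0 plus the smallest subnormal double should be exactly representable, with 1.0 in the high double and the tiny value in the low one:

#include <iostream>
#include <limits>

int main()
{
    // Smallest subnormal double, 4.94066e-324
    long double tiny = std::numeric_limits<double>::denorm_min();

    // Under a double-double representation this sum needs no rounding:
    // the high double holds 1.0 and the low double holds tiny.
    long double sum = 1.0l + tiny;

    std::cout << (sum != 1.0l ? "distinct from 1.0" : "rounded to 1.0") << '\n';
    return 0;
}

And it does print "distinct from 1.0" here, consistent with what the epsilon loop above measured.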
What is happening here?