I have a simple program:
#include <stdio.h>
int main()
{
//char d[10] = {0x13, 0x43, 0x9b, 0x64, 0x28, 0xf8, 0xff, 0x7f, 0x00, 0x00};
//long double rd = *(long double*)&d;
long double rd = 3.3621e-4932L;
printf("%Le\n", rd);
return 0;
}
On my Ubuntu x64 box it prints 3.362100e-4932, as expected. On my NetBSD box it prints 1.681050e-4932 (exactly half the expected value).
Why does this happen, and how can I fix it? I tried both clang and gcc, with the same result.
My system (VM inside VirtualBox 5.0):
uname -a
NetBSD netbsd.home 7.0 NetBSD 7.0 (GENERIC.201509250726Z) amd64
gcc --version
gcc (nb2 20150115) 4.8.4
clang --version
clang version 3.6.2 (tags/RELEASE_362/final)
Target: x86_64--netbsd
Thread model: posix
FYI, /usr/include/x86/float.h defines LDBL_MIN as
3.3621031431120935063E-4932L
which is greater than the value printf prints, so the constant being printed is below the smallest normal long double.