$ gcc --version
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 12.0.5 (clang-1205.0.22.9)
Target: x86_64-apple-darwin20.3.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
$ uname -a
Darwin MacBook-Air.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64 x86_64
Code:
#include <stdio.h>
int main(void) {
    int i = 2;
    printf("int \"2\" as %%.128f: %.128f\n", i);
    printf("int \"2\" as %%.128lf: %.128lf\n", i);
    printf("int \"2\" as %%.128LF: %.128Lf\n", i);
    return 0;
}
Compile:
$ gcc floatingpointtypes.c
Execute:
$ ./a.out
int "2" as %.128f: 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
int "2" as %.128lf: 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
int "2" as %.128LF: 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
When the bit pattern of the integer 2 is reinterpreted in IEEE-754 single-precision (32-bit) or double-precision (64-bit) floating-point format, it is a denormalized (subnormal) number, because the exponent bits are all 0s. The resulting value is 2 x 2^-149 = 2^-148 (about 2.8e-45) in single precision, or 2 x 2^-1074 = 2^-1073 (about 9.9e-324) in double precision.
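For reference, here is a minimal sketch of the reinterpretation I have in mind, copying the raw bytes with memcpy. It assumes sizeof(int) == sizeof(float) and sizeof(long long) == sizeof(double), which holds on this x86_64 target; the variable names are just for illustration:

#include <stdio.h>
#include <string.h>

int main(void) {
    int i = 2;
    long long ll = 2;
    float f;
    double d;

    /* copy the 4 raw bytes of the int into a float */
    memcpy(&f, &i, sizeof f);
    /* copy the 8 raw bytes of the long long into a double */
    memcpy(&d, &ll, sizeof d);

    printf("int bits as float:        %g\n", f); /* 2^-148, about 2.8e-45 */
    printf("long long bits as double: %g\n", d); /* 2^-1073, about 9.9e-324 */
    return 0;
}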
Question:
Why does my code print all 0s? Is it because C can't interpret denormalized floating-point numbers to their correct value?
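For what it's worth, printf does print a genuine subnormal double correctly when one is actually passed to it; a minimal check (0x1p-1073 is a C99 hexadecimal floating constant equal to 2^-1073):

#include <stdio.h>

int main(void) {
    double tiny = 0x1p-1073; /* a subnormal double, 2^-1073 */
    printf("%.10g\n", tiny); /* prints about 9.881312917e-324 */
    return 0;
}

So printing subnormal values as such does not appear to be the limitation.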