Welcome to Stack Overflow. It's not very clear what you're trying to do with this code, but the first thing I'll say is that it does exactly what it says it does. It tries to format data with the wrong format string. The result is garbage, but that doesn't necessarily mean it will look like garbage.
If part of the idea is to print out the internal bit pattern of a `double` in hexadecimal, you can do that, but the code will be implementation-dependent. The following should work on just about any modern 32- or 64-bit desktop implementation that uses 64 bits for both the `double` and `long long int` types:
    double d = 3.141592653589793238;
    printf("d = %g = 0x%016llX\n", d, *(long long *)&d);
The `%g` specification is a quick way to print a `double` in (usually) easily readable form. The `%llX` format prints an `unsigned long long int` in hexadecimal. The byte order is implementation-dependent, even if you know that both `double` and `long long int` have the same number of bits. On a Mac, PC, or other Intel/AMD architecture machine, you'll get the display in most-significant-digit-first order.
The `*(long long *)&d` expression (reading from right to left) takes the address of `d`, converts that `double *` pointer to a `long long *` pointer, then dereferences that pointer to get a `long long` value to format.
Almost every implementation uses IEEE 754 format for hardware floating point this century, with the 64-bit IEEE format serving as `double`.
You can find out more about `printf` formatting at: http://www.cplusplus.com/reference/cstdio/printf/