
I was bored and wanted to see what the binary representation of doubles looked like. However, I noticed something weird on Windows. The following lines of code demonstrate it:

#include <iostream>
int main() {
    double number = 1;
    unsigned long num = *(unsigned long *) &number; // type-punning pointer cast
    std::cout << num << std::endl;
}

On my MacBook, this gives me a nonzero number. On my Windows machine it gives me 0.

I was expecting it to give me a nonzero number, since the binary representation of 1.0 as a double should not be all zeros. However, I am not really sure whether what I am trying to do is well-defined behavior.

My question is: is the code above just stupid and wrong? And is there a way I can print out the binary representation of a double?

Thanks.

zrbecker
  • Just to be sure, was it an Intel MacBook? Anyway, verify by printing the value of sizeof(unsigned long) on both machines/compilers. For a reliable result, cast to a char pointer and do a hex dump to see the bytes (sketched after these comments). – hyde Feb 25 '13 at 22:13
  • Check out http://babbage.cs.qc.cuny.edu/IEEE-754/ – Mark Ransom Feb 25 '13 at 22:14
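
A minimal sketch of the byte-wise hex dump hyde suggests; inspecting an object's representation through an unsigned char pointer is always permitted by the aliasing rules. On little-endian x86 hardware this prints the bytes low-order first (00 00 00 00 00 00 f0 3f for 1.0):

#include <cstddef>
#include <iomanip>
#include <iostream>

int main() {
    double number = 1.0;
    // unsigned char* may alias any object, so this is well-defined.
    const unsigned char *bytes = reinterpret_cast<const unsigned char *>(&number);
    for (std::size_t i = 0; i < sizeof number; ++i)
        std::cout << std::hex << std::setw(2) << std::setfill('0')
                  << static_cast<int>(bytes[i]) << ' ';
    std::cout << '\n';
}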

2 Answers


1.0 as a double is 3ff0 0000 0000 0000. On Windows, long is a 4-byte integer, so the cast reads only half of the double. On little-endian hardware you're reading the low-order 0000 0000 part, hence the 0.
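
For illustration, a sketch (not from the original answer) that copies all 64 bits into a fixed-width integer with std::memcpy, which avoids both the size mismatch and the aliasing problem:

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    double number = 1.0;
    std::uint64_t bits;                         // 64 bits, same size as a double here
    std::memcpy(&bits, &number, sizeof bits);   // well-defined type punning
    std::cout << std::hex << bits << std::endl; // prints 3ff0000000000000
}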

Remus Rusanu
  • `long` in Microsoft's compiler is 32-bit, but `long` is a 64-bit value on Linux/MacOS if the OS is 64-bit. Which could also explain the difference... – Mats Petersson Feb 25 '13 at 23:24

If your compiler supports it (GCC does), then use a union. Note that reading a union member other than the one last written is undefined behavior according to the C++ standard, so this relies on a compiler extension:

#include <iostream>

int main() {
    union {
        unsigned long long num; // 64-bit integer view of the same bytes
        double fp;              // floating-point view
    } pun;

    pun.fp = 1.0;                                  // write the double member...
    std::cout << std::hex << pun.num << std::endl; // ...read it back as an integer
}

The output is

3ff0000000000000
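
If you want the bits themselves rather than hex digits, one option (an addition, not from the original answer) is std::bitset, combined with the well-defined std::memcpy approach:

#include <bitset>
#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    double fp = 1.0;
    std::uint64_t num;
    std::memcpy(&num, &fp, sizeof num);
    // Prints all 64 bits, sign bit first:
    // 0011111111110000000000000000000000000000000000000000000000000000
    std::cout << std::bitset<64>(num) << std::endl;
}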
amdn