
Is there a difference in the size of double when I run my app in a 32-bit versus a 64-bit environment?

If I am not mistaken, a double in a 32-bit environment will take up 16 digits after the decimal point, whereas a double in a 64-bit environment will take up 32 digits. Am I right?

Graviton

3 Answers


No, an IEEE 754 double-precision floating-point number is always 64 bits. Similarly, a single-precision float is always 32 bits.

If your question is about C# and/or .NET specifically (as your tag would indicate), all of the data type sizes are fixed, independent of your system architecture. This is the same as Java, but different from C and C++ where type sizes do vary from platform to platform.
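For instance, here is a minimal sketch of a console check (the class name SizeCheck is just illustrative) showing that the sizes print the same no matter which architecture you compile for:

using System;

class SizeCheck
{
    static void Main()
    {
        // sizeof on the built-in numeric types is a compile-time constant in C#;
        // it does not depend on whether the process runs as 32-bit or 64-bit.
        Console.WriteLine(sizeof(float));   // 4 bytes (32 bits)
        Console.WriteLine(sizeof(double));  // 8 bytes (64 bits)

        // What does change with the process architecture is the pointer size:
        Console.WriteLine(IntPtr.Size);     // 4 on x86, 8 on x64
    }
}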

It is common for the integral types to have different sizes on different architectures in C and C++. For instance, int was 16 bits wide in 16-bit DOS and 32 bits wide in Win32. However, the IEEE 754 standard is so ubiquitous for floating-point computation that the sizes of float and double do not vary on any system you will find in the real world: double was 64 bits 20 years ago, and it is 64 bits today.
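To see those 64 bits directly, a rough sketch: BitConverter.DoubleToInt64Bits maps a double's IEEE 754 bit pattern onto a 64-bit integer, which you can then print in binary:

using System;

class BitLayout
{
    static void Main()
    {
        // The IEEE 754 layout of 1.0: 1 sign bit, 11 exponent bits,
        // and 52 significand bits, for 64 bits in total.
        long bits = BitConverter.DoubleToInt64Bits(1.0);
        Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0'));
    }
}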

John Kugelman
  • It's worth noting that the CLR (C#) and the JVM are bytecode VMs, which must be portable across architectures (by design; there are counterexamples), which explains why the types are the same regardless of host. C/C++ are oriented toward machine-code compilation, and therefore typically have ABI differences to make optimal use of their target architectures. This explains the difference. – SingleNegationElimination Jul 09 '09 at 02:19
  • @IfLoop: Great addition here. Very helpful in explaining the "why" as it relates to bytecode and VMs. – Scott Saad Oct 02 '14 at 18:22

In C#, double is always 8 bytes (64 bits).
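As a quick runtime confirmation (a sketch; the class name is hypothetical), BitConverter.GetBytes returns the value's raw representation, one byte per octet:

using System;

class DoubleSize
{
    static void Main()
    {
        // A double always serializes to 8 bytes, on 32-bit and 64-bit alike.
        byte[] bytes = BitConverter.GetBytes(0.0);
        Console.WriteLine(bytes.Length); // 8
    }
}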

Learner

It doesn't change.

A simple way to check this is to write a small console app with

Console.WriteLine(Double.MaxValue);

and compiling it for both x86 and x64.
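As a slightly expanded sketch of that test, the app can also print the pointer size so you can see which architecture it is actually running under, while the double's range stays the same:

using System;

class Program
{
    static void Main()
    {
        // IntPtr.Size reflects the process architecture: 4 on x86, 8 on x64.
        Console.WriteLine("IntPtr.Size: " + IntPtr.Size);
        // Double.MaxValue prints the same value under both.
        Console.WriteLine("Double.MaxValue: " + Double.MaxValue);
    }
}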

statenjason