
So I (think I) understand the difference between Float, Double, and Decimal, but one thing that I've wondered about is why there are two sizes of floating binary point, but only one size of floating decimal point?

If I understand the general principle correctly, you'd want to use a float (32-bit) for performance over a double (64-bit) on a 32-bit processor, if you don't need the extra size of the double. On a 64-bit processor, the double should be more performant, so this rationale isn't as necessary. But the Decimal type is 128 bits. So why not offer a 64-bit decimal, or even a 32-bit one?

Is it just a matter of use cases: no one really needed it? Or is there a technical reason, like not being able to accurately represent useful decimal ranges with fewer than 128 bits?
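(For concreteness, a minimal C# sketch, added here as an aside, showing the storage sizes in question; sizeof on these built-in types is a compile-time constant and needs no unsafe context:)

```csharp
using System;

class TypeSizes
{
    static void Main()
    {
        // Storage sizes of the three floating types discussed above.
        Console.WriteLine(sizeof(float));   // 4 bytes  -> 32-bit binary floating point
        Console.WriteLine(sizeof(double));  // 8 bytes  -> 64-bit binary floating point
        Console.WriteLine(sizeof(decimal)); // 16 bytes -> 128-bit decimal floating point
    }
}
```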

CodexArcanum
  • Why would single-precision floats be faster than double-precision? The memory bus is 64 bits wide in any case, many CPUs these days have separate FPUs with sufficiently large registers (the x86 has 80-bit floats!), and any speed difference in internal wiring should be tiny (probably less than a machine cycle). A much better reason would be memory size, and even then you'd need millions of numbers for this to make a difference. –  Mar 22 '12 at 16:30
  • @delnan the float/double divide was created decades ago when computing hardware was very different. Also consider that SIMD ops like SSE can push twice as many floats through as doubles (see the sketch after these comments). – David Heffernan Mar 22 '12 at 17:56
  • @DavidHeffernan Indeed. I'm perfectly aware of that, but OP apparently thinks similar differences exist today. Hence my comment. Good point about SIMD, though that's beyond the level of optimization most people care to do. –  Mar 22 '12 at 17:58
  • @delnan with SIMD, differences do still exist – David Heffernan Mar 22 '12 at 17:59
  • I was not aware of what distinctions might have existed or still exist in the hardware, so thank you for the enlightening discussion. – CodexArcanum Mar 22 '12 at 19:23
  • As an interesting epilogue to this question, I recently discovered that the common IEEE-754 standard (floating points) was revised in 2008, and that the new standard defines both binary and decimal floating points. The decimal standard specifies 32, 64, and 128 bit representations, though it does note that 32 is only for certain specialized uses. – CodexArcanum Apr 10 '12 at 21:30
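To make the SIMD point above concrete, here is a minimal sketch (an editorial addition, not part of the original discussion) using the System.Numerics.Vector<T> API, which post-dates this thread; the lane counts in the comments are only examples and depend on the hardware:

```csharp
using System;
using System.Numerics;

class SimdLanes
{
    static void Main()
    {
        // A SIMD register of a given width holds twice as many 32-bit floats
        // as 64-bit doubles, so each vector operation can process twice as
        // many float elements.
        Console.WriteLine(Vector<float>.Count);   // e.g. 8 on 256-bit (AVX) hardware
        Console.WriteLine(Vector<double>.Count);  // e.g. 4 on 256-bit (AVX) hardware
    }
}
```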

3 Answers


Mandatory link to Eric Lippert's "features start off as unimplemented and only become implemented when people spend effort implementing them."

Why it would not be a good idea:

  • Any additional numeric type requires a lot of conversions to and from the other numeric types, along with rules for which conversions are allowed and which are not.
  • Smaller types (especially 32-bit) would simply not have a useful range - e.g. a reasonable price for a single item already needs around 8 digits, such as 12345.670 (with the trailing digit holding a tenth of the smallest money unit); see the sketch after this list.
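A minimal sketch of the range problem: C# has no 32-bit decimal, so ToSevenDigits below is a hypothetical helper that rounds a real decimal to 7 significant digits to imitate what such a type could hold:

```csharp
using System;

class SevenDigitDecimal
{
    // Hypothetical helper (not a real BCL API): round to 7 significant
    // decimal digits to imitate a 32-bit, 7-digit decimal type.
    static decimal ToSevenDigits(decimal value)
    {
        if (value == 0m) return 0m;
        int digits = (int)Math.Floor(Math.Log10((double)Math.Abs(value))) + 1;
        decimal scale = (decimal)Math.Pow(10, 7 - digits);
        return Math.Round(value * scale) / scale;
    }

    static void Main()
    {
        Console.WriteLine(ToSevenDigits(12345.678m)); // 12345.68 - the sub-cent guard digit is gone
        Console.WriteLine(ToSevenDigits(123456.78m)); // 123456.8 - even whole cents no longer fit
    }
}
```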
Alexei Levenkov
  • While I fully appreciate the sentiment behind Lippert's article, I don't really think "because no one felt like doing it" is really what anyone is looking for as an answer to these kinds of questions. I think he knows it too, because he goes on to explain the reason why they didn't implement that feature. Likewise, thank you for confirming my thought that a 32 or 64 bit decimal type would likely be too small to be of real use, and thus probably isn't worth implementing. – CodexArcanum Mar 22 '12 at 19:25

A smaller decimal type would be less capable. Since you don't tend to perform the same kind of high-volume calculations with decimals that are common with floating-point data, there seems to be no need for a smaller decimal on performance grounds.

David Heffernan

why there are two sizes of floating binary point

Because many programming languages already had separate single- and double-precision floating-point types, including C(++), Java, and Visual Basic. So naturally, C++/CLI, J#, and VB.NET would have these types too.

OTOH, there were no backwards-compatibility reasons to have multiple decimal floating-point types.

If I understand the general principle correctly, you'd want to use a float (32-bit) for performance over a double (64-bit) on a 32-bit processor, if you don't need the extra size of the double.

No. Despite its name, double has long been the "main" floating-point type, and float a "half-precision" type used for memory optimization. For example, the .NET System.Math class almost always uses double rather than float, and in all C-derived languages, literals like 1.23 have type double (you need the extra f suffix to get a float).
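A small illustration of that convention (just a sketch):

```csharp
using System;

class DoubleIsTheDefault
{
    static void Main()
    {
        double a = 1.23;        // a floating-point literal is double by default
        float  b = 1.23f;       // the 'f' suffix is needed to get a float
        // float bad = 1.23;    // does not compile: no implicit double -> float conversion

        double r = Math.Sqrt(2.0);        // System.Math works in double throughout
        float  s = (float)Math.Sqrt(2.0); // going back to float needs an explicit cast
        Console.WriteLine($"{a} {b} {r} {s}");
    }
}
```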

So why not offer a 64-bit decimal, or even a 32-bit?

With a 32-bit (probably 7-digit) decimal, you'd run into rounding errors all the time. There may be situations where you deliberately want to sacrifice precision for the sake of efficiency, but in that case you might as well use float and get the benefit of hardware support.
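To see what roughly 7 significant digits means for money, here is a sketch (an addition for illustration) that accumulates cents in a float, which also carries only about 7 decimal digits, versus a decimal; the exact float total varies, but it typically drifts away from 100:

```csharp
using System;

class SevenDigitsAndMoney
{
    static void Main()
    {
        float f = 0f;
        decimal d = 0m;
        for (int i = 0; i < 10000; i++)
        {
            f += 0.01f; // 0.01 has no exact binary representation; errors accumulate
            d += 0.01m; // 0.01 is exact in decimal
        }
        Console.WriteLine(f); // typically close to, but not exactly, 100
        Console.WriteLine(d); // exactly 100.00
    }
}
```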

dan04