
A while ago I wrote a program that used some factorial functions. I used the `long double` data type to support relatively big numbers.
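Roughly, the relevant code looked something like this (a simplified sketch, not my exact program):

```cpp
#include <iostream>

// Simplified sketch: iterative factorial in long double, so that values
// beyond the 64-bit integer range can still be held approximately
// (assuming long double is wider than double, as it was under GCC).
long double factorial(unsigned n)
{
    long double result = 1.0L;
    for (unsigned i = 2; i <= n; ++i)
        result *= i;
    return result;
}

int main()
{
    std::cout << factorial(25) << '\n';  // ~1.551e25, too big for a 64-bit integer
}
```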

Now that I have changed from Code::Blocks to Visual Studio 2010, I was wondering why my program didn't work any more, until I realized after some research that MS has abandoned the long double data type. Is there any special reason for this? To me it looks very much like a step backwards in terms of technology.

Is there any alternative to use? (I would also be happy with an alternative from the Boost library.)

Amro
Stephan Dollberg
  • The 64-bit compiler exclusively uses SSE2; it supports only 64-bit floating-point values. Which ends this bit of embarrassment: http://stackoverflow.com/questions/686483/c-vs-c-big-performance-difference/687741#687741 – Hans Passant Aug 19 '11 at 12:43
  • Technically, the amd64 ABI allows use of the x87 FPU, and so does Windows (and probably any other amd64 platform). Therefore, you could use assembly and a class full of operator overloads to implement at least some of the functionality easily. It depends on how many different functions you used. It would create a big portability issue, though. – doug65536 Jan 27 '13 at 05:10

1 Answer


I'm not sure why you think that long double was "abandoned", as it is part of the C++ Standard and therefore a compliant implementation must, well, implement it.

What they did "abandon" is the long double overloads of the mathematical functions, and they did this because:

In Win32 programming, however, the long double data type maps to the double, 64-bit precision data type.

which, in turn (long double having been an 80-bit type in older VS versions), is because:

FP code generation has been switching to the use of SSE/SSE2/SSE3 instruction sets instead of the x87 FP stack since that is what both the AMD and Intel recent and future chip generations are focusing their performance efforts on. These instruction sets only support 32 and 64 bit FP formats.
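You can see this mapping for yourself with a quick check; a minimal sketch (on MSVC both lines print the same thing, while GCC targeting x86 typically reports an 80-bit long double):

```cpp
#include <iostream>
#include <limits>

int main()
{
    // On MSVC, long double maps onto the 64-bit double format, so both
    // lines print "8 53". On GCC/x86, long double is typically the
    // 80-bit x87 format instead (often padded to 12 or 16 bytes).
    std::cout << sizeof(double)      << ' '
              << std::numeric_limits<double>::digits      << '\n';
    std::cout << sizeof(long double) << ' '
              << std::numeric_limits<long double>::digits << '\n';
}
```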

Still, it's a shame that they chose not to support these overloads, even with same-sized double and long double types (both could have been made 64-bit), because the overloads are also part of the C++ Standard. But, well, that's Microsoft for you. Intently stubborn.

[n3290: 26.8]: In addition to the double versions of the math functions in <cmath>, C++ adds float and long double overloaded versions of these functions, with the same semantics.

However, although these overloads are essentially deprecated in Visual Studio, they are still available, so you should still be able to use them:

The Microsoft run-time library provides long double versions of the math functions only for backward compatibility.
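So code like the following should still compile and run; a small sketch (note that on MSVC the long double overload buys you no extra precision over the double one):

```cpp
#include <cmath>
#include <iostream>

int main()
{
    long double x = 2.0L;

    // Resolves to the long double overload of std::sqrt. On MSVC this
    // is plain 64-bit double precision under the hood; on GCC/x86 it
    // can use the 80-bit x87 format.
    long double r = std::sqrt(x);

    std::cout << r << '\n';
}
```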


Is there any alternative to use? (I would also be happy with an alternative from the Boost library.)

It sounds to me like you have been relying on long double to support a specific range of numeric values, and have consequently run into regression issues when that has changed in a different toolchain.

If you have a specific numeric range requirement, use fixed-width integral types. Here you have a few options (a small sketch follows the list):

  • stdint.h - a C99 feature that some C++ toolchains support as an extension;
  • stdint.h - a C99 feature that Boost re-implements as a library;
  • cstdint - a C++0x feature that may be of use if you are writing C++0x code.
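For example, if your values fit in 64 bits, something along these lines would do; a sketch only (exact up to 20! - anything larger overflows and needs an arbitrary-precision library instead):

```cpp
#include <cstdint>
#include <iostream>

// Sketch: exact factorials with a fixed-width 64-bit integer.
// std::uint64_t holds n! exactly up to n = 20; beyond that it wraps
// around, so larger inputs need an arbitrary-precision library.
std::uint64_t factorial(unsigned n)
{
    std::uint64_t result = 1;
    for (unsigned i = 2; i <= n; ++i)
        result *= i;
    return result;
}

int main()
{
    std::cout << factorial(20) << '\n';  // 2432902008176640000
}
```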
Lightness Races in Orbit
  • Why not? The x87 FPU supports 80-bit floating point. They could define `long double` to be an 80-bit float. – Yakov Galka Aug 19 '11 at 11:15
  • @ybungalobill: Except the x86 system won't allow you to keep it that way. You could only ever store `long double` in registers if they made it an x87 80-bit representation. – Puppy Aug 19 '11 at 11:20
  • It's basically what ybungalobill says: from what I read, MS did support that 80-bit float - but then abandoned it - even though everybody else still supports it, and I used it in my function. And I think that should work on 32-bit systems, too. (32-bit systems are sooooo yesterday anyway) – Stephan Dollberg Aug 19 '11 at 11:23
  • @DeadMG Huh? See Intel instruction set manual FLD variant with opcode DB /5. – Yakov Galka Aug 19 '11 at 11:23
  • @ybungalobill: as far as I know, the x87 instructions are almost never used these days; SSE extensions are used instead. – Joachim Sauer Aug 19 '11 at 11:31
  • @Joachim: it's not true. Even the Microsoft compiler uses the FPU by default. Other compilers (GCC, Intel?) support 80-bit long doubles. Moreover, you could implement a compiler that uses SSE for doubles and floats and falls back to the FPU when long double is used. – Yakov Galka Aug 19 '11 at 11:41
  • Microsoft dropped x87 `long double` in Win32, which makes sense: Win32 is the API introduced with Windows NT, and that ran on MIPS and Alpha too. Of course the Intel compiler can support x87 doubles; portability to non-Intel archs is an antifeature for them. – MSalters Aug 19 '11 at 13:14
  • You can't just drop `long double`. It's an explicit part of the standard. What they did is to effectively drop support for the `long double` overloads, and they did this because they happen to have decided to map `long double` to `double` (_this_ is the bit that makes sense). Unfortunately for them, the overloads are also an explicit part of the language, so they're playing with fire really. – Lightness Races in Orbit Aug 19 '11 at 13:30
  • Thanks tomalak, that last post really made it clear - that is what I meant. I think you guys noticed that I am somewhat new to programming. I will check out those links you posted and look for an alternative. Thanks – Stephan Dollberg Aug 19 '11 at 13:41
  • In many compilers for the x86 platform, `long double` was an 80-bit floating-point type which was distinct from a 64-bit `double`. Among its features was the ability to uniquely represent 64-bit integers. I think the question is asking why there no longer seems to be any support for 80-bit types, since even if such a type were padded out to 16 bytes it would still be much more performant than `Decimal`. – supercat Aug 16 '13 at 21:20
  • @LightnessRacesinOrbit long double size isn't explicitly defined in the C++ standard (not even C++14), and Microsoft focuses on C++ standard compliance even at the expense of C standard compliance – Panagiotis Kanavos Dec 12 '14 at 09:51
  • A more reasonable explanation [can be found here](http://forums.codeguru.com/showthread.php?390950-RESOLVED-When-is-80bit-long-double-coming-back) - 64-bit doubles can use the SSE2 registers to execute multiple operations in parallel, while 80-bit doubles have to use the FPU, which only performs one calculation at a time – Panagiotis Kanavos Mar 30 '15 at 11:22
  • @PanagiotisKanavos: The size doesn't have anything to do with compliance. If Microsoft "focused on C++ standard compliance" then they would not have removed support for standard C++ library overloads. That being said, you have found an authoritative declaration of the reason for this choice — it would make a good answer! – Lightness Races in Orbit Mar 30 '15 at 12:14
  • @LightnessRacesinOrbit but they *are* compliant - the C++ standard doesn't specify a particular size, nor even that it should be larger than double – Panagiotis Kanavos Mar 30 '15 at 12:32
  • @PanagiotisKanavos: The size doesn't have anything to do with compliance. `double` and `long double` are distinct types (regardless of their size), and the standard specifies library overloads that take `long double`. _Not_ compliant. Period. – Lightness Races in Orbit Mar 30 '15 at 12:33
  • I also found this *recent article*: [The pitfalls of long double](http://info.prelert.com/blog/the-pitfalls-of-long-double). Insufficient testing of `long double` functions and portability issues. The insufficient testing part may be a reason the library functions were abandoned - if you can't fix the *standard* library in time for shipping, what do you do? Ship a broken function or drop it until it's fixed? – Panagiotis Kanavos Mar 30 '15 at 12:37
  • @PanagiotisKanavos: That's fine. All I'm saying is that your assertion that this is standard-compliant is incorrect. – Lightness Races in Orbit Mar 30 '15 at 12:49
  • @PanagiotisKanavos: I would guess the biggest problem with supporting `long double` properly was the lack of a means by which a varargs function prototype can say how it expects to receive floating-point arguments [similar issues exist with `int` vs `long`, though not quite as bad]. What I would have liked to see as a standard would be to say that operations between `float` values promote to `long float` (which could be 32, 64, 80, or 128 bits, or--for machines without hardware FPU support--48 or 64 bits including an unpacked 32-bit mantissa),... – supercat Mar 31 '15 at 21:03
  • ... and operations between `double` promote to `long double` (which could be 64, 80, 96, or 128 bits). I would also have liked to have seen a standard means of requesting other floating-point types (e.g. to distinguish between code which wants 80 bits precision, versus code that wants the fastest type for operations between `double`). Even if an implementation only has two numeric types, that doesn't mean a language shouldn't be able to distinguish between cases where intermediate values must be clipped to low-precision, kept as extended, or handled in the fastest way. – supercat Mar 31 '15 at 21:09
  • *"[MS] chose not to support these [long double] overloads"* - *"However, [...] they are still available"* - I'm having a hard time making any sense out of this. You pick a random made-up fact to call Microsoft *"intently stubborn"*, only to move on and remedy it (without taking back the unfounded accusation). If you insist that *"provided for backward compatibility"* and *"supported"* are not the same, I invite you to read the [Long Double](https://msdn.microsoft.com/en-us/library/9cx8xs15.aspx) documentation. – IInspectable Jul 21 '16 at 09:28
  • @IInspectable: It's not really that complicated, nor is it "random". I invite you to read this answer, the pages it has linked to for almost five years (one of which is, um, the same page you just linked to) and the comments! – Lightness Races in Orbit Jul 21 '16 at 10:40
  • I did. You are not making any sense. As usual. – IInspectable Jul 21 '16 at 10:44
  • @IInspectable: Please feel free to come back with technical arguments rather than personal attacks. Have a nice day. – Lightness Races in Orbit Jul 21 '16 at 10:56
  • Technical argument: The `long double` overloads are still present in the *cmath* header that ships with Visual Studio 2015. They are [documented](https://msdn.microsoft.com/en-us/library/9cx8xs15.aspx) to behave identically to their `double` counterparts. It is unclear what you meant to express by claiming that *"[MS] chose to not support these overloads"*. – IInspectable Jul 21 '16 at 11:17
  • @IInspectable: They are deprecated in VS. The documentation says _"The long double versions of these functions should not be used in new code."_ That they are provided [for now] "only for backward compatibility". How is this "supporting" those standard overloads? – Lightness Races in Orbit Jul 21 '16 at 11:42
  • *"How is this "supporting" those standard overloads?"* - They are there. They should not be used for *"new code"*. There is, however, no mention that Visual Studio is going to drop these back-compat implementations, and there is no mention that VS won't build old code. Calling the `long double` overloads doesn't even trigger deprecation warnings. VS is fully standards compliant in this respect, even though you keep hinting that this isn't the case, or may stop to be the case. None of this is justified, and none of this has come true in 5 years. – IInspectable Jul 21 '16 at 11:53
  • To put your FUD into perspective, take [GetPrivateProfileString](https://msdn.microsoft.com/en-us/library/windows/desktop/ms724353.aspx) for example, an API call that has been *"provided only for compatibility"* for **decades**. There is **zero** indication that `long double` overloads are going away any time soon. Besides, those are usually called for the wrong reasons anyway (e.g. increased precision, or range). – IInspectable Jul 21 '16 at 11:58
  • @IInspectable: Thanks for your input. – Lightness Races in Orbit Jul 21 '16 at 12:07