6

From https://en.wikipedia.org/wiki/Long_double:

In C++, long double refers to a floating-point data type that is often more precise than double-precision. However, as with C++'s other floating-point types, it may not necessarily map to an IEEE format.

...

With the GNU C Compiler, long double is 80-bit extended precision on x86 processors regardless of the physical storage used for the type (which can be either 96 or 128 bits). On some other architectures, long double can be double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on SPARC). As of gcc 4.3, a quadruple precision is also supported on x86, but as the nonstandard type __float128 rather than long double.

With gcc on Linux, 80-bit extended precision is the default; on several BSD operating systems (FreeBSD and OpenBSD), double-precision mode is the default, and long double operations are effectively reduced to double precision.

The Intel C++ Compiler for x86, on the other hand, enables extended-precision mode by default. On OS X, long double is 80-bit extended precision.

It seems that long double is indeed not necessarily an implementation of IEEE's binary128, but why not make it so? Why default to an 80-bit representation in some cases?

Eduardo
  • This has little to do with c++, it's a question about specific platforms. – Zereges Oct 11 '18 at 14:44
  • Because why tie the implementation's hands? Since the standard only mandates that `sizeof(long double) >= sizeof(double)`, implementations are allowed to provide whatever extended support they can, or none if they don't want to deal with it. – NathanOliver Oct 11 '18 at 14:49
  • 3
    Also note that nowhere does it even say that `double` needs to map to an IEEE format. – NathanOliver Oct 11 '18 at 14:50
  • It looks like you answered your own question as "no". – interjay Oct 11 '18 at 15:04
  • 1
    On Windows x64, `long double` = `double` = IEEE-754 binary64. That ABI chose not to have a type for x87 80-bit extended-precision at all. Other x86-64 systems do expose it as `long double`. So it's not the compiler that matters, it's the target ABI. GCC targeting Windows will follow that ABI. – Peter Cordes Apr 16 '19 at 03:31
  • 3
    @PeterCordes Clang and GCC on Windows do support the extended-precision `long double` by default. You can turn it off with `-mlong-double-64` though. ICC also has the [Qlong-double option](https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-qlong-double) for compatibility with MSVC and GCC – phuclv Apr 18 '19 at 16:26
  • @phuclv: oh weird! I assumed GCC would be careful to maintain ABI compatibility with MSVC like for other types, but I guess basically nobody uses `long double` as part of a struct or in function arg/ret library interfaces. – Peter Cordes Apr 19 '19 at 02:30
  • 1
    @PeterCordes I guess because you don't need to link GCC and MSVC objects together. Issues occasionally happened though, because previously mingw used MS' runtime which doesn't support 80-bit long double so you can't print long doubles with printf [Conversion specifier of long double in C](https://stackoverflow.com/q/1764350/995714). Nowadays they implemented their own runtime so it's not an issue anymore – phuclv Apr 19 '19 at 05:41
  • @phuclv: You do basically need to across DLL boundaries! gcc code has to call into DLLs for the win32 API, and MS builds system DLLs with their own compiler. Or if you want to make a DLL that you can load in Excel or whatever. So yes, as you point out the `long double` ABI difference restricts what you can do with GCC on Windows. If there were any more ABI differences for any more commonly-used types, that would be a huge problem. (But AFAIK there aren't; GCC does follow the Windows calling convention and struct layout rules, and uses the same type widths otherwise, e.g. 32-bit `long`.) – Peter Cordes Apr 19 '19 at 05:52
  • 1
    @PeterCordes I wonder which Windows/MSOffice DLLs ever pass/receive `long double` in their exported functions. I guess that due to `long double` being basically the same as `double` on MS compilers, MS themselves don't even think of using this type anywhere. Well, the C runtime may be the only real case. – Ruslan Dec 15 '19 at 09:22
  • @Ruslan: Yeah, probably only stuff like `sinl(long double)` as far as the standard DLLs. (If it can't just be a symbol alias for `sin`). I don't know Windows APIs in general, but I'd be surprised if they use `long double` anywhere. So you'd only have an issue when building your own DLLs from library sources that do use `long double` in their API. – Peter Cordes Dec 15 '19 at 14:21

2 Answers

8

Why default to an 80-bit representation in some cases?

Because some platforms can provide efficient 80-bit floating-point operations in hardware, but not 128-bit ones. This is the same reasoning behind why `sizeof(int)` is not fixed by the standard: on some platforms, 32-bit integers might not be efficient or even available.

Max Langhof
4

Is long double in C++ an implementation of IEEE's binary128?

No. C++ doesn't even require the use of IEEE-754 for floating-point types.

Only since C++11 can you check whether a type uses IEEE-754, via `std::numeric_limits<T>::is_iec559`.


Why defaulting to an 80-bit representation on some cases?

Because x87 supports the 80-bit IEEE-754 extended precision format. Some later platforms like the Motorola 6888x and Intel i960 FPUs also support that type, so it made sense for compilers to use it for `long double` instead of resorting to much slower software emulation.

That's also the reason PowerPC uses double-double for `long double` by default: you can utilize the existing hardware double unit, which makes operations much faster than software-emulated binary128. Old NVIDIA CUDA cores didn't have hardware support for `double`, so many people used float-float for the bigger precision. See Emulating FP64 with 2 FP32 on a GPU.

In the case of Itanium, whose floating-point registers are 82 bits wide, `long double` would most likely have the same width, with some padding for proper alignment to 128 bits.

Most other architectures don't have hardware for floating-point types wider than 64 bits, so they chose the IEEE-754 quadruple-precision format for ease of implementation and better forward compatibility: if 128-bit floating-point support ever comes to real hardware, it will most likely be IEEE-754 quadruple precision. Currently SPARC is the only mainstream architecture with hardware support for quadruple precision.

That said, most compilers have options to change the underlying format of `long double`. For example, GCC has `-mlong-double-64/80/128` and `-m96/128bit-long-double` for x86, and `-mabi=ibmlongdouble/ieeelongdouble` for PowerPC.

phuclv
  • Why doesn't the logic of PowerPC usage of double-double extend to e.g. ARM? GCC on Raspbian (`arm-linux-gnueabihf` — hardware FPU) has `DBL_MANT_DIG==LDBL_MANT_DIG`, which is 53, despite the CPU supporting native `double`. – Ruslan Dec 15 '19 at 09:34
  • @Ruslan double-double arithmetic has different semantics than normal floating-point types, so compiler writers probably don't want to use it for `long double` by default. Besides, double precision is usually enough for most practical purposes, so a higher-precision `long double` may not be necessary. Nowadays most architectures other than x86 and PPC use the standard IEEE-754 binary128 format for higher precision instead of the non-standard double-double – phuclv Dec 15 '19 at 09:52