
What precision does numpy.float128 map to internally? Is it __float128 or long double? Or something else entirely?

A potential follow-on question, if anybody knows: is it safe in C to cast a __float128 to a (16-byte) long double, with just a loss in precision? (This is for interfacing with a C library that operates on long doubles.)

Edit: In response to the comment, the platform is 'Linux-3.0.0-14-generic-x86_64-with-Ubuntu-11.10-oneiric'. Now, if numpy.float128 has varying precision dependent on the platform, that is also useful knowledge for me!

Just to be clear, it is the precision I am interested in, not the size of an element.
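
For reference, one way to inspect this empirically on a given platform (a sketch using `numpy.finfo`; `nmant` is the mantissa width, which reflects the real precision regardless of the padded storage size):

```python
import numpy as np

# Compare the storage size with the actual mantissa width: padding shows
# up in itemsize, precision shows up in nmant.
dt = np.dtype(np.longdouble)
info = np.finfo(np.longdouble)
print("storage bits:", dt.itemsize * 8)
print("mantissa bits:", info.nmant)  # 63 -> x86 80-bit extended, 112 -> IEEE quad
```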

Henry Gomersall
  • "The versions with a number following correspond to whatever words are available on the specific platform you are using which have at least that many bits in them" seems clear. 128 bits. What was confusing about that? It is platform specific and you didn't list a platform, making it impossible to answer your question as asked. Please **update** the question with the exact Python platform information. Hint: there's a `platform` package. – S.Lott Jan 30 '12 at 10:56
    "seems clear" -- assuming it also says what happens when no such type is available on the specific platform. – Steve Jessop Jan 30 '12 at 11:16
    I've been assuming that the numpy precision is platform independent, so information to the contrary is certainly useful. I *would* assume that float128 maps to something like __float128 internally, but long double is also 128 bits on my system, so it could reasonably be that. – Henry Gomersall Jan 30 '12 at 11:52
  • "assuming that the numpy precision is platform independent"? Why? The documentation is quite clear that it's **not** platform independent. The precision depends on the size of the element. 64 bits is one precision. 128 bits is a different precision. Both are documented in the IEEE floating-point specifications. The question you need to ask is "how do I figure out which size my particular numpy is using?" – S.Lott Jan 30 '12 at 12:22
    What? The question is referring to numpy.float128. Does the precision of that change across platforms? I _do_ appreciate that not all platforms offer that dtype, but it's not so silly to assume that those that do, define it the same way. Would you be so good as to point me to docs that might contradict that? [This page](http://docs.scipy.org/doc/numpy/user/basics.types.html) doesn't even refer to float128 (but does nicely define those types it does document). I find it reasonable that the type maps to IEEE 754 quadruple type, and that's what I'm trying to confirm (or not). – Henry Gomersall Jan 30 '12 at 12:32

3 Answers


numpy.longdouble refers to whatever type your C compiler calls long double. Currently, this is the only extended precision floating point type that numpy supports.

On x86-32 and x86-64, this is an 80-bit floating point type. On more exotic systems it may be something else (IIRC on Sparc it's an actual 128-bit IEEE float, and on PPC it's double-double). (It also may depend on what OS and compiler you're using -- e.g. MSVC on Windows doesn't support any kind of extended precision at all.)

Numpy will also export some name like numpy.float96 or numpy.float128. Which of these names is exported depends on your platform/compiler, but whatever you get always refers to the same underlying type as longdouble. Also, these names are highly misleading. They do not indicate a 96- or 128-bit IEEE floating point format. Instead, they indicate the number of bits of alignment used by the underlying long double type. So e.g. on x86-32, long double is 80 bits, but gets padded up to 96 bits to maintain 32-bit alignment, and numpy calls this float96. On x86-64, long double is again the identical 80 bit type, but now it gets padded up to 128 bits to maintain 64-bit alignment, and numpy calls this float128. There's no extra precision, just extra padding.

Recommendation: ignore the float96/float128 names, just use numpy.longdouble. Or better yet stick to doubles unless you have a truly compelling reason. They'll be faster, more portable, etc.
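
The alias claim can be checked directly (a sketch; assumes a platform that exports one of the padded names):

```python
import numpy as np

# Whichever padded name is exported (float96 or float128), it is the
# same dtype as longdouble; eps exposes the real precision.
alias = getattr(np, "float128", None) or getattr(np, "float96", None)
if alias is not None:
    print(np.dtype(alias) == np.dtype(np.longdouble))  # same underlying type
# On x86, eps is ~1.08e-19 (80-bit extended), not the ~1.93e-34 a true
# IEEE quad would give.
print(np.finfo(np.longdouble).eps)
```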

Nathaniel J. Smith

It is recommended to use longdouble instead of float128, since the latter is quite a mess at the moment. Note also that initializing it from a Python float goes through a 64-bit double first, losing the extra precision.

Inside numpy, it can be a double or a long double. It's defined in npy_common.h and depends on your platform. I don't know whether you can include it out of the box in your own source code.

If you don't need performance in this part of your algorithm, a safer way could be to export it to a string and use strtold afterwards.
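
On the numpy side, the string route looks something like this (a sketch; the point is that strings are parsed at full long double precision, whereas a Python float literal is rounded to 64 bits first):

```python
import numpy as np

# Going through a decimal string preserves extended precision; going
# through a Python float truncates to 64 bits on the way in.
a = np.longdouble("0.1")   # parsed directly at long double precision
b = np.longdouble(0.1)     # 0.1 is a 64-bit Python float first
print(a == b)  # False wherever longdouble is wider than double
```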

Coren
    Does it lie correctly in memory to cast a pointer to a float128 array to long double? It *is* performance critical ;) – Henry Gomersall Jan 30 '12 at 12:49
  • Further to that, reading npy_common.h, it seems to imply that it's sensitive to the platform-dependent length of long double (i.e. it uses long double if long double is 128 bits), but my mental C preprocessor is a bit flaky. – Henry Gomersall Jan 30 '12 at 12:52
  • Ok, I've largely answered those questions I think. I *can* cast to long double and everything works as expected. The references above would suggest that float128, if it exists, is defined as a long double, but I'm not sure about this. – Henry Gomersall Jan 30 '12 at 13:32
  • Further to all those comments, numpy.longdouble is _defined_ on my platform as numpy.float128, which allows me to work entirely in terms of longdouble, which is always well defined. – Henry Gomersall Jan 30 '12 at 13:47
  • @user4815162342 your "down" link is down. – markroxor Aug 27 '18 at 05:44
  • re down link. i guess the mailing lists changed url. so now we have to find the new url somehow. – Trevor Boyd Smith Aug 14 '19 at 20:04

TL;DR from the numpy docs:

np.longdouble is padded to the system default; np.float96 and np.float128 are provided for users who want specific padding. In spite of the names, np.float96 and np.float128 provide only as much precision as np.longdouble, that is, 80 bits on most x86 machines and 64 bits in standard Windows builds.

SuperStormer