18

I checked the size of a pointer in my Python terminal (in the Enthought Canopy IDE) via

import ctypes
print(ctypes.sizeof(ctypes.c_voidp) * 8)

I have a 64-bit architecture, and working with numpy.float64 is just fine. But I cannot use np.float128:

np.array([1,1,1],dtype=np.float128)

or

np.float128(1)

results in:

AttributeError: 'module' object has no attribute 'float128'

I'm running the following version:

sys.version_info(major=2, minor=7, micro=6, releaselevel='final', serial=0)
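
For reference, a quick probe (assuming nothing beyond a stock NumPy install) shows which float widths a given build actually exposes:

import numpy as np

# float96/float128 only exist on platforms whose C long double is wider
# than double; elsewhere the attribute is simply absent.
for name in ('float16', 'float32', 'float64', 'float96', 'float128'):
    print(name, hasattr(np, name))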
Matthias
  • @Matthias: Unless you've got a very unusual platform (e.g., an IBM mainframe), NumPy almost certainly doesn't give you access to true 128-bit floats. On some platforms, NumPy supports the 80-bit x87 extended-precision format (an extended format in the sense of the 1985 IEEE 754 standard), and on some of *those* platforms, that format is reported as `float128` (while on others it's reported as `float96`). But all that's going on there is that you have an 80-bit format with 48 bits (or 16 bits) of padding. – Mark Dickinson Apr 23 '15 at 11:30
  • @PadraicCunningham `np.longdouble` results in `np.float64` – Matthias Apr 23 '15 at 11:32
  • http://stackoverflow.com/questions/9062562/what-is-the-internal-precision-of-numpy-float128 – Padraic Cunningham Apr 23 '15 at 11:33
  • @PadraicCunningham the exact size does not really matter as long as I have a higher precision than a float64 (for comparing quadrature rules) – Matthias Apr 23 '15 at 11:34
  • @Matthias: Then you're probably out of luck. Are you on Windows? IIRC, the Windows platform defines `long double` to be the same type as `double`, so `np.longdouble` doesn't give you any extra precision. – Mark Dickinson Apr 23 '15 at 11:35
  • @MarkDickinson Yes, indeed: Windows. – Matthias Apr 23 '15 at 11:38
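
As the comments establish, the deciding factor is what the platform's C long double is. A minimal probe (the expected readings are assumptions about typical builds: nmant of 52 on Windows, where long double is the same as double, and 63 on x86-64 Linux):

import numpy as np

# finfo reports the precision actually stored, independent of any padding.
info = np.finfo(np.longdouble)
print(np.dtype(np.longdouble).itemsize * 8)  # storage width in bits
print(info.nmant)                            # explicit significand bits
print(info.eps)                              # spacing just above 1.0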

2 Answers

5

Update: As the comments explain, even where np.float128 exists it is not a true 128-bit float; it is the 80-bit x87 extended-precision type padded out to 128 bits.

I am using Anaconda on a 64-bit Ubuntu 14.04 system with sys.version_info(major=2, minor=7, micro=9, releaselevel='final', serial=0)

and 128-bit floats work fine:

import numpy
a = numpy.float128(3)  # no AttributeError on this platform

This might be a distribution problem; try another Python distribution, such as Anaconda.

EDIT: Quoting from the comments:

Not my downvote, but this post doesn't really answer the "why doesn't np.float128 exist on my machine" implied question. The true answer is that this is platform specific: float128 exists on some platforms but not others, and on those platforms where it does exist it's almost certainly simply the 80-bit x87 extended precision type, padded to 128 bits. – Mark Dickinson
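
A short check of the quoted point, assuming a platform (such as 64-bit Linux) where np.float128 exists at all:

import numpy as np

# The dtype occupies 128 bits of storage...
print(np.dtype(np.float128).itemsize * 8)   # 128
# ...but holds an 80-bit x87 value with 63 explicit significand bits.
print(np.finfo(np.float128).nmant)          # 63
# So adding 2**-64 to 1 is lost to rounding (round-ties-to-even):
print(np.float128(1) + np.float128(2**-64) - np.float128(1))  # 0.0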

shaunakde
  • That's almost certainly *not* a 128-bit float, at least not in the sense of the IEEE 754 binary128 format. It's an 80-bit float with 48 bits of padding. – Mark Dickinson Apr 23 '15 at 11:28
  • Referring to the NumPy user manual: on some platforms it will happily let you declare a `float256`. Its internal implementation needs to be checked. – shaunakde Apr 23 '15 at 11:33
  • Try doing `numpy.float128(1) + numpy.float128(2**-64) - numpy.float128(1)`. I suspect you'll get an answer of `0.0`, indicating that the `float128` type contains no more than 64 bits of precision. – Mark Dickinson Apr 23 '15 at 11:46
  • @MarkDickinson - You are correct. It does not actually store a 128-bit float, or at least not on my system. – shaunakde Apr 23 '15 at 12:03
  • @MarkDickinson Is this to be expected when using a 64-bit float on a 64-bit computer? `>>> np.float64(1) + np.float64(2**-64) - np.float64(1) = 0.0` seems odd, no? Or is it just at the boundary of its precision? – Charlie Parker Nov 01 '17 at 17:21
  • @CharlieParker: Yes, absolutely expected. In normal double precision, `1.0 + 2**-64` is not exactly representable (not enough significand bits), so the result of the addition is the closest double-precision float which _is_ exactly representable, which is `1.0` again. And now of course subtracting `1.0` gives `0.0`. And for regular double precision, the same is true with `1.0 + 2**-53 - 1.0` (the binary precision is 53). For extended x87-style precision, with the usual round-ties-to-even, `1.0 + 2**-64 - 1.0` will give zero, while `1.0 + 2**-63 - 1.0` will be nonzero. – Mark Dickinson Nov 01 '17 at 17:43
  • @MarkDickinson hmm, maybe I don't understand something at a fundamental level but then why can we represent `>>> 2**-1000 = 9.332636185032189e-302`? I don't understand why the addition to `1.0` makes things change. – Charlie Parker Nov 01 '17 at 17:46
  • @CharlieParker: Not my downvote, but this post doesn't really answer the "why doesn't np.float128 exist on my machine" implied question. The true answer is that this is platform specific: `float128` exists on some platforms but not others, and on those platforms where it does exist it's almost certainly simply the 80-bit x87 extended precision type, padded to 128 bits. – Mark Dickinson Nov 01 '17 at 17:47
  • @CharlieParker: Because floating-point means floating (binary) point! The ability to move the point allows representations of values at a wide range of scales, but doesn't magically give extra precision. See any of the [many](https://docs.python.org/3/tutorial/floatingpoint.html) [floating-point](http://floating-point-gui.de) guides out there for more information. These comments aren't really the right place for this discussion ... – Mark Dickinson Nov 01 '17 at 17:55
  • @MarkDickinson Editing my answer to include the points you made. Thanks! It was informative. Maybe consider answering this question? – shaunakde Nov 02 '17 at 05:24
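
To make the precision boundary discussed in the last few comments concrete, here is a small float64 demonstration (standard IEEE 754 double behaviour, not platform specific):

import numpy as np

# float64 has 52 explicit significand bits, so the ulp of 1.0 is 2**-52.
# 1 + 2**-53 lies exactly halfway to the next float and rounds back to
# 1.0 under round-ties-to-even, while 1 + 2**-52 is exactly representable.
print(np.float64(1) + np.float64(2**-53) - np.float64(1))  # 0.0
print(np.float64(1) + np.float64(2**-52) - np.float64(1))  # 2.220446049250313e-16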
0

For me, the issue was a Python module that has a problem on Windows (PyOpenGL, for those who care). This site has Python wheels with "fixed" versions of many popular modules that address the float128 issue.


Note: This question has an accepted answer. My answer is for future searchers, since this question ranks high in Google results for `module 'numpy' has no attribute 'float128'`.

bobsbeenjamin