4

I need to store a float variable with 12 bits precision in Python

I know that to convert a variable to a float there is the float() function, but how can I specify the size of the float in bits, e.g. (12, 16, ...)?

StarBucK
  • That's a rather unusual floating-point format. What is its specification? Are you sure it's not 80 bits, or perhaps 80 bits padded to 128 bits? See https://en.wikipedia.org/wiki/Extended_precision#x86_extended_precision_format – PM 2Ring Nov 30 '17 at 12:47
  • @PM2Ring It is because I am working on an embedded card that works in float 12, and I would like to simulate analysis data. The simple way to do it would be to directly generate 12-byte data in my Python code. – StarBucK Nov 30 '17 at 12:55
  • I think this is a reasonable question, contrary to the close votes / downvotes, although on the surface it looks odd. I was initially tempted to vote to close as well, but checked more closely after realising the OP's background is strong maths/physics. As a suggestion for future questions, maybe put a bit of relevant blurb as you have above / emphasise your own research efforts, so readers realise it is a bit more complex. – Alexander McFarlane Nov 30 '17 at 12:56
  • Ok, but as I said, that's an unusual format. And it's even more surprising that an embedded system would use such high-precision floats. It won't be easy to work with in Python, but in any case you will need to have its exact specification. – PM 2Ring Nov 30 '17 at 13:02
  • Even if you just want to create random data and don't need to do any actual arithmetic with these numbers, you still need the data specification so you don't create invalid bit patterns. – PM 2Ring Nov 30 '17 at 13:05
  • @AlexanderMcFarlane: If you understand the question, you should edit it to provide more context to everybody else. – Eric Postpischil Nov 30 '17 at 21:22
  • We definitely need more information on the format. It's interesting that IEEE 754-2008 defines a binary interchange format for _every_ width that's a positive multiple of 32, _except_ for 96. So the standard covers `binary32`, `binary64`, `binary128`, `binary160`, `binary192`, etc, but not `binary96`. – Mark Dickinson Dec 01 '17 at 08:29
  • @StarBucK: Are you positive that this isn't a 12-bit format rather than a 12-byte format? I'm finding it very hard to imagine what sort of embedded card would need that kind of precision. – Mark Dickinson Dec 01 '17 at 08:38
  • @MarkDickinson Well, in fact it is probably 12 bits. Sorry for the confusion, but I thought bytes and bits were the same thing and I only just figured out they are not... I have just read all the detailed answers now. – StarBucK Dec 01 '17 at 10:50
  • If it is **12 bits** then all the answers are wrong. If you mean **12 bytes** (96 bits), then some of the answers are fine. Since you were not entirely sure about the difference between bits and bytes, you might also want to consider using a built-in type after all, which would be the Python float type (8 byte, 64 bit, AFAIK). – Rudy Velthuis Dec 03 '17 at 09:20

3 Answers

7

As mentioned in other answers, this doesn't really exist among the pure Python data types; see the docs.

However, you can use numpy to specify explicit data types, e.g.

  • numpy.float16
  • numpy.float32
  • numpy.float64
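
For example, a quick sketch (assuming numpy is installed) showing that each of these dtypes has a fixed per-element storage size:

import numpy as np

# itemsize reports the storage size of one element in bytes
for dtype in (np.float16, np.float32, np.float64):
    arr = np.array([1.0, 2.0, 3.0], dtype=dtype)
    print(arr.dtype, arr.itemsize)  # 2, 4 and 8 bytes respectively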

You can also use the extended-precision type numpy.float96, which seems to be what you are after, as 12 bytes is 96 bits. For example:

import numpy as np
high_prec_array = np.array([1,2,3], dtype=np.float96)

Caveats

As pointed out in the comments and links, this isn't true 12-byte precision. Rather, it is the 80-bit (10-byte) x87 extended format padded with 2 zero bytes. This may be sufficient if you just care about compatibility.
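
A rough way to see what the extended type actually provides on a given machine is to inspect np.finfo of np.longdouble (a sketch; the exact figures depend on your platform and compiler):

import numpy as np

# np.longdouble is the portable name for the platform's extended type;
# it may map to float96, float128 or plain float64 depending on the platform.
info = np.finfo(np.longdouble)
print(np.dtype(np.longdouble))  # e.g. float96 or float128
print(info.bits)                # storage width in bits, including any zero padding
print(info.nmant)               # significand bits after the leading bit (typically 63 for x87 extended)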

This precision may not be available on all platforms. The numpy docs note:

In the tables below, platform? means that the type may not be available on all platforms. Compatibility with different C or Python types is indicated: two types are compatible if their data is of the same size and interpreted in the same way.
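
So before relying on the float96 alias, it may be worth probing for it at runtime; a small sketch (which alias exists, if any, depends entirely on the platform's C long double):

import numpy as np

# The extended aliases only exist on platforms whose C long double has that layout,
# so check for them rather than assuming they are defined.
for name in ("float96", "float128"):
    print(name, "available:", hasattr(np, name))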

Also read this about the caveats of using such exotic types

I found this quite illuminating. I would conclude that if you want to absolutely guarantee 96-bit precision then Python is not the correct choice, as the inherent ambiguity in the available extended precision comes from the ambiguity in your C distribution. Given your physics background, I would suggest using Fortran if you want to guarantee stability.

Define your own type in C++

For the interested, advanced user, it may be possible to define your own data type. The numpy guide on user-defined types states:

As an example of what I consider a useful application of the ability to add data-types is the possibility of adding a data-type of arbitrary precision floats to NumPy.

You can therefore try using boost/multiprecision/cpp_bin_float.hpp if you fervently wish to keep your code in Python.

Alexander McFarlane
  • There's no 12-byte floating-point type supported by NumPy. On some platforms it has a type called `float96`, but that's the 80-bit (10-byte) IEEE 754-1985 64-bit precision extended format (1 sign bit, 15 exponent bits, 64 significand bits, no hidden bit) used by Intel x87, and padded up to 12 bytes with two zero bytes. I know this information is implicit in your links, but I think this answer is misleading as it stands. – Mark Dickinson Nov 30 '17 at 14:16
  • Yeah I'm still looking at it - hence why the information is slightly contradictory. It seems quite a complex issue! I think the OP just needs the formatting to be 12 byte for compatibility so even if the last two bytes are padded by two zero bytes it shouldn't matter – Alexander McFarlane Nov 30 '17 at 14:18
  • The comments (especially the last one by StarBucK) seem to suggest he needs 12 **BITS** after all. He just mentioned bytes a few times because he did not know the difference between bits and bytes. I don't think there are many implementations of 12 bit floats in any language. – Rudy Velthuis Dec 02 '17 at 20:09
2

The float type in Python has a fixed size: often 64 bits, but the exact size is implementation-dependent.

You can use sys.float_info to inspect the size and properties of floats, but you are not supposed to be able to change them.

https://docs.python.org/3/library/sys.html#sys.float_info
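
For example, on a typical CPython build (where float is backed by a C double) you can check its parameters like this:

import sys

# sys.float_info describes the C double that backs Python's float
print(sys.float_info.mant_dig)  # significand bits, 53 for IEEE 754 binary64
print(sys.float_info.max)       # largest representable float
print(sys.float_info.dig)       # decimal digits that can be reliably represented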

EDIT:

If you really need to specify the float size, you can rely on external libraries such as numpy. See Alexander McFarlane's very informative answer for lots of details.

Pac0
  • "float type ... is fixed. Usually 64 bits, but can change" is unclear. Is it fixed or implementation dependent? – chux - Reinstate Monica Nov 30 '17 at 13:47
  • It is implementation-dependent. Within a specific implementation, it won't change, but one implementation could be 64 bits and another 32 bits. (Edited my answer, thanks for the feedback.) – Pac0 Nov 30 '17 at 13:48
  • @chux: At least for CPython, it's effectively fixed, in that Python `float`s use C `double`s, and the assumption that `sizeof(double) == 8` is baked into the CPython source. (Which is bad, but doesn't actually seem to have caused any real problems to date.) And IronPython and Jython live on platforms (.NET and Java) that are explicit about Double being IEEE 754 binary64. MicroPython might do something interesting, but I don't know. – Mark Dickinson Nov 30 '17 at 14:38
1

The development version of gmpy2 supports the 96-bit IEEE numeric type.

>>> import gmpy2
>>> gmpy2.version()
'2.1.0a1'
>>> gmpy2.set_context(gmpy2.ieee(96))
>>> gmpy2.get_context()
context(precision=83, real_prec=Default, imag_prec=Default,
        round=RoundToNearest, real_round=Default, imag_round=Default,
        emax=4096, emin=-4175,
        subnormalize=True,
        trap_underflow=False, underflow=False,
        trap_overflow=False, overflow=False,
        trap_inexact=False, inexact=False,
        trap_invalid=False, invalid=False,
        trap_erange=False, erange=False,
        trap_divzero=False, divzero=False,
        allow_complex=False,
        rational_division=False)
>>> gmpy2.mpfr(1)/7
mpfr('0.14285714285714285714285714',83)
>>> 

It is also possible in older versions of gmpy2 but requires a bit more effort.

>>> import gmpy2
>>> gmpy2.version()
'2.0.8'
>>> ieee96 = gmpy2.context(precision=83, emax=4096, emin=-4175, subnormalize=True)
>>> gmpy2.set_context(ieee96)
>>> gmpy2.mpfr(1)/7
mpfr('0.14285714285714285714285714',83)
>>> 

You may need to download the source directly from https://github.com/aleaxit/gmpy . Some very early wheels are available at https://pypi.python.org/pypi/gmpy2/2.1.0a1 .

Disclaimer: I maintain gmpy2.

casevh