For a university course in numerical analysis we are transitioning from Maple to a combination of Numpy and Sympy for various illustrations of the course material, because the students already learn Python in the previous semester.

One of the difficulties we have is emulating fixed precision in Python. Maple allows the user to specify a decimal precision (say 10 or 20 digits), and from then on every calculation is made with that precision, so you can see the effect of rounding errors. In Python we have tried a few ways to achieve this:

  • Sympy has a rounding function to a specified number of digits.
  • Mpmath supports custom precision.

This is, however, not what we're looking for: these options compute the exact result and then round it to the specified number of digits. We are looking for a solution that performs every intermediate calculation in the specified precision, something that can show, for example, the rounding errors that occur when dividing two very small numbers.

The best solution so far seems to be the custom data types in Numpy. Using float16, float32 and float64 we were at least able to give an indication of what could go wrong. The problem here is that we always need to use arrays of one element and that we are limited to these three data types.
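
For example, a minimal sketch of the kind of demonstration this allows:

import numpy as np

# The same computation in half, single and double precision,
# using one-element arrays of the chosen data type; the lower
# the precision, the more visible the rounding error in 0.1 + 0.2.
for dtype in (np.float16, np.float32, np.float64):
    x = np.array([0.1], dtype=dtype)
    y = np.array([0.2], dtype=dtype)
    print(dtype.__name__, (x + y)[0])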

Does anything more flexible exist for our purpose? Or is the very thing we're looking for hidden somewhere in the mpmath documentation? Of course there are workarounds, such as wrapping every step of a calculation in a rounding function, but this obscures the code for the students.

OmerB
JKaerts
  • You may want to take a look at the [decimal module](https://docs.python.org/2/library/decimal.html) (bullet point #5). Also using `numpy`'s datatypes is a good idea! I don't see why you have to use them together with arrays. `numpy.float16()` creates a float of the desired datatype. – a_guest Jan 10 '17 at 09:29
  • _"These options calculate the exact result and round the exact result to the specified number of digits. We are looking for a solution that does every intermediate calculation in the specified precision."_ A library that allowed itself to visibly lose precision in this way would no doubt be marvellous as a teaching device. It would also be pretty useless for anything else. – Paul Panzer Dec 05 '18 at 21:18
  • @PaulPanzer, another usage (which is what I'm interested in) is to evaluate the effect on my NumPy-based code, which will eventually need to run in a limited precision environment. I have some influence on that environment, so if I can experiment with various values I could select the optimal one. – OmerB Dec 09 '18 at 07:50
  • I don't know. I'd still expect any library worth its salt to return results reasonably close to machine precision. In a sense that is the art of numerical analysis. – Paul Panzer Dec 09 '18 at 09:04

1 Answer

You can use the decimal module. There are several ways to use it, for example via localcontext or getcontext.

Example with getcontext from the documentation:

>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')

Example of localcontext usage:

>>> from decimal import Decimal, localcontext
>>> with localcontext() as ctx:
...     ctx.prec = 4
...     print(Decimal(1) / Decimal(3))
... 
0.3333
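
Because every Decimal operation is rounded to the context precision, the rounding of intermediate results becomes visible as well. A small example of the effect described in the question:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 4
>>> x = Decimal(1) / Decimal(3)   # intermediate result is already rounded to 4 digits
>>> x * 3
Decimal('0.9999')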

To reduce typing you can abbreviate the constructor (example from the documentation):

>>> import decimal
>>> D = decimal.Decimal
>>> D('1.23') + D('3.45')
Decimal('4.68')

Dmitry
  • In addition you can create a numpy array of Decimal objects, for example `a = np.array([Decimal(1), Decimal(2)])`. It will *not* be very fast, since all operations essentially fall back to calling Python code, but it will use the configured decimal precision, which should be good for demonstrations. I think you should also be able to use *most* numpy functions with these arrays; I just tested `np.dot`, which works. – user545424 Dec 05 '18 at 16:30
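
A minimal sketch of that suggestion (assuming the precision has been set with `getcontext().prec` as above):

import numpy as np
from decimal import Decimal, getcontext

getcontext().prec = 4   # every Decimal operation is rounded to 4 significant digits

# Object arrays of Decimal: NumPy dispatches the arithmetic to the
# Decimal objects themselves, so the context precision is respected.
a = np.array([Decimal(1), Decimal(2), Decimal(3)])
b = np.array([Decimal(3), Decimal(1), Decimal(7)])

print(a / 3)          # elementwise division, each entry rounded to 4 digits
print(np.dot(a, b))   # np.dot also works on object arrays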