17

I'm using the Decimal class for operations that require precision.

I would like to use the largest possible precision. By this, I mean as precise as the system the program runs on can handle.

Setting a specific precision is simple:

import decimal
decimal.getcontext().prec = 123  # 123 digits of decimal precision

I tried to figure out the maximum precision the Decimal class can compute with:

print(decimal.MAX_PREC)
>> 999999999999999999

So I tried setting the precision to the maximum precision (knowing it probably wouldn't work):

decimal.getcontext().prec = decimal.MAX_PREC

But, of course, this throws a MemoryError (on division).

So my question is: How do I figure out the maximum precision the current system can handle?

Extra info:

import sys
print(sys.maxsize)
>> 9223372036854775807
demongolem
Hades
  • 7
    Your system might have memory available for a few objects with precision X but not for many. So there is not a single answer—the maximum precision you can use before running out of memory depends on the calculations you are going to do. – Eric Postpischil Dec 06 '18 at 22:57
  • I tried doing what you did and it doesn't throw any Memory Error and changes the precision. What is your python version? Also, you might want to look at [this answer](https://stackoverflow.com/questions/28081091/what-is-the-largest-number-the-decimal-class-can-handle) – Rick M. Dec 12 '18 at 16:04
  • @RickM. I'm using python 3.6.5 (64bit), have you tried doing a division or something with an infinite non-repeating decimal? – Hades Dec 13 '18 at 09:43
  • @Eli Sure, there you can expect MemoryError but that doesn't mean that the precision isn't set (as Eric Postpischil says). As your question is now, you should add "MemoryError on division" to make things clearer. – Rick M. Dec 13 '18 at 09:57
  • @RickM. I've edited the question, but the question doesn't change much, as just want a way to go as precise as it can get... – Hades Dec 13 '18 at 10:07
  • You might want to look at [this](https://en.wikiversity.org/wiki/Python_Concepts/Numbers#The_Precision_of_Floats) – Rick M. Dec 13 '18 at 10:16
  • You could try using python bignums, and maybe storing an order of magnitude everything is scaled by separately? – Namyts Dec 13 '18 at 10:43

4 Answers

17

Trying to do this is a mistake. Throwing more precision at a problem is a tempting trap for newcomers to floating-point, but it's not that useful, especially to this extreme.

Your operations wouldn't actually require the "largest possible" precision even if that was a well-defined notion. Either they require exact arithmetic, in which case decimal.Decimal is the wrong tool entirely and you should look into something like fractions.Fraction or symbolic computation, or they don't require that much precision, and you should determine how much precision you actually need and use that.
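For instance, exact arithmetic is what the standard library's fractions.Fraction gives you, with no precision setting at all. A minimal comparison (prec = 28 is just Decimal's default):

```python
# Exact arithmetic with fractions.Fraction vs. rounded Decimal arithmetic:
# (1/3) * 3 is exactly 1 as a Fraction, but not as a fixed-precision Decimal.
import decimal
from fractions import Fraction

decimal.getcontext().prec = 28
d = (decimal.Decimal(1) / decimal.Decimal(3)) * 3
f = Fraction(1, 3) * 3

print(d)  # 0.9999999999999999999999999999
print(f)  # 1
```

No amount of extra Decimal precision makes the first result exact; Fraction stays exact because it never rounds.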

If you still want to throw all the precision you can at your problem, then how much precision that actually is will depend on what kind of math you're doing, and how many absurdly precise numbers you're attempting to store in memory at once. This can be determined by analyzing your program and the memory requirements of Decimal objects, or you can instead take the precision as a parameter and binary search for the largest precision that doesn't cause a crash.

user2357112
  • This is absolutely the correct answer. You need a surprisingly small number of bits to calculate things like the exact distance to Alpha Centauri in millimeters (only about 20 decimal digits, only slightly outside the range of a normal 64-bit integer). I think it's probably safe to say that in any real-world application, your measurement error is going to exceed any numeric error caused by a lack of `Decimal` precision. – Daniel Pryden Dec 17 '18 at 19:38
  • @user2357112, It's not a mistake, it's education. Of course I don't need infinite precision, but what if I discovered a new constant and would like to release an application that uses precision 'infinitely'? What if I just wanted to find more digits in pi than already found? What if I wanted to test the irrationality of e or Mill's constant? What if I could make a program that just goes as precise as possible until it can't due to memory or whatnot? NASA only uses 15 digits of pi because it's accurate enough, and yet people still try to get more; here I am trying to make that possible. – Hades Dec 18 '18 at 04:46
  • 2
    @Eli: Throwing the maximum possible precision at the problem doesn't actually help with any of the stuff you've listed. For example, the state of the art in computing pi doesn't involve just throwing precision at an arbitrary-precision floating point implementation; it looks like [this](http://www.numberworld.org/y-cruncher/algorithms.html), and it manages precision based on what a computation actually needs. (Also, it knows how to use disk space intelligently instead of being limited to memory or OS-level swap.) – user2357112 Dec 18 '18 at 08:34
  • As for rationality testing, no amount of precision will let you test the irrationality of e or Mill's constant through direct computation (and I don't think we know how to compute Mill's constant for sure anyway). – user2357112 Dec 18 '18 at 08:35
  • 2
    Trying to push the limits of high-precision computing is a worthy goal, but it doesn't work like this. – user2357112 Dec 18 '18 at 08:38
7

I'd like to suggest a function that estimates the maximum precision for a given operation by brute force:

import decimal

def find_optimum(a, b, max_iter):
    # binary-search the largest precision in [a, b] that survives the operation
    for i in range(max_iter):
        print(i)
        c = (a + b) // 2
        decimal.getcontext().prec = c
        try:
            dummy = decimal.Decimal(1) / decimal.Decimal(7)  # your operation here
            a = c  # success: a higher precision may still work
            print("no fail")
        except MemoryError:
            print("fail")
            dummy = 1
            b = c  # failure: try a lower precision
        print(c)
        del dummy

This just halves the interval one step at a time and checks whether an error occurs. Calling it with max_iter=10, a=int(1e9) and b=int(1e11) gives:

>>> find_optimum(int(1e9), int(1e11), 10)
0
fail
50500000000
1
no fail
25750000000
2
no fail
38125000000
3
no fail
44312500000
4
fail
47406250000
5
fail
45859375000
6
no fail
45085937500
7
no fail
45472656250
8
no fail
45666015625
9
no fail
45762695312

This may give a rough idea of what you are dealing with. It took about half an hour on an i5-3470 with 16 GB RAM, so you really would only use it for testing purposes.

I don't think there is an exact way of determining the maximum precision for your operation, as you would need exact knowledge of how your operation's memory consumption depends on the precision. I hope this helps you at least a bit, and I would really like to know what you need that kind of precision for.

EDIT I feel like this really needs to be added, since I read your comments under the top-rated post here. Using arbitrarily high precision in this manner is not how people calculate constants. You would program something that utilizes disk space in a smart way (for example, calculating a batch of digits in RAM and writing that batch to a text file), but never rely on RAM/swap alone, because that will always limit your results. With modern algorithms for calculating pi, you don't need infinite RAM; you just put another 4 TB hard drive in the machine and let it write the next digits. So much for mathematical constants.

Now for physical constants: they are not precise. They rely on measurement. I'm not quite sure at the moment (will edit), but I think the most exact physical constant has a relative error of about 10**(-8). Throwing more precision at it doesn't make it more exact; you just calculate more wrong digits.

As an experiment though, this was a fun idea, which is why I even posted the answer in the first place.

user8408080
  • That's exactly what I could do: calculate X amount of digits and flush them to the hard drive; it doesn't even have to be a text file, binary will do fine as well. But the thing is, RAM is faster than HDD, so figuring out the next digits and writing them is more tedious than calculating millions of digits and flushing those to a file. But I'm glad this question is getting some attention. – Hades Dec 18 '18 at 18:27
  • If you are using your RAM as a buffer, before writing to a disk, that's okay – user8408080 Dec 18 '18 at 20:54
2

The maximum precision of the Decimal class is a function of the memory on the device, so there's no good way to set it for the general case. Basically, you're allocating all of the memory on the machine to one variable to get the maximum precision.
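A rough way to see this (on CPython, where Decimal's __sizeof__ accounts for the stored digits) is to watch the per-object memory cost grow with the precision actually used:

```python
# Sketch: one Decimal's memory footprint grows with the digits it stores,
# which is why the practical precision limit depends on your workload.
import decimal
import sys

for digits in (5, 500, 50_000):
    decimal.getcontext().prec = digits
    d = decimal.Decimal(1) / decimal.Decimal(7)  # repeating decimal fills all digits
    print(f"{digits:>6} digits -> {sys.getsizeof(d)} bytes")
```

The exact byte counts are implementation details, but the growth is roughly linear in the digit count, so ten such variables cost about ten times the memory.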

If the mathematical operation supports it, long integers will give you unlimited precision. However, you are limited to whole numbers.

Addition, subtraction, multiplication, and simple exponents can be performed exactly with long integers.
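A quick check of that exactness with plain Python ints:

```python
# Python ints are arbitrary precision, so these operations are exact.
a = 2 ** 200              # far beyond any 64-bit limit
b = 3 ** 100
print(a * b - b * a)      # 0 -- no rounding anywhere
print((a + b) - a == b)   # True
```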

Prior to Python 3, the built-in long data type would perform arbitrary precision calculations. https://docs.python.org/2/library/functions.html#long

In Python >=3, the int data type now represents long integers. https://docs.python.org/3/library/functions.html#int

One example of a 64-bit integer math implementation is bitcoind, where transaction calculations require exact values. However, the precision of Bitcoin transactions is limited to 1 "Satoshi"; each Bitcoin is defined as 10^8 (integer) Satoshi.

The Decimal class works similarly under the hood. A Decimal precision of 10^-8 is similar to the Bitcoin-Satoshi paradigm.
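A toy sketch of that fixed-point idea (the helper name is made up for illustration): store amounts as integer multiples of the smallest unit, so sums are exact.

```python
# Hypothetical sketch of Satoshi-style fixed-point arithmetic: amounts are
# plain ints counting the smallest unit (1 Satoshi = 1e-8 BTC), so addition
# is exact where float 0.1 + 0.2 would not be.
SATOSHI_PER_BTC = 10 ** 8

def btc_to_satoshi(btc_str):
    # parse a non-negative decimal string like "0.1" into integer Satoshi
    whole, _, frac = btc_str.partition(".")
    frac = (frac + "00000000")[:8]  # pad/truncate to 8 fractional digits
    return int(whole or "0") * SATOSHI_PER_BTC + int(frac)

total = btc_to_satoshi("0.1") + btc_to_satoshi("0.2")
print(total)  # 30000000 Satoshi, i.e. exactly 0.3 BTC
```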

Chris Hubley
1

From your reply above:

What if I just wanted to find more digits in pi than already found? what if I wanted to test the irrationality of e or mill's constant.

I get it. I really do. My one SO question, several years old, is about arbitrary-precision floating point libraries for Python. If those are the types of numerical representations you want to generate, be prepared for the deep dive. Decimal/FP arithmetic is notoriously tricky in Computer Science.

Some programmers, when confronted with a problem, think “I know, I’ll use floating point arithmetic.” Now they have 1.999999999997 problems. – @tomscott

I think when others have said it's a "mistake" or "it depends" to wonder what the max precision is for a Python Decimal type on a given platform, they're taking your question more literally than I'm guessing it was intended. You asked about the Python Decimal type, but if you're interested in FP arithmetic for educational purposes -- "to find more digits in pi" -- you're going to need more powerful, more flexible tools than Decimal or float. These built-in Python types don't even come close. Those are good enough for NASA maybe, but they have limits... in fact, the very limits you are asking about.

That's what multiple-precision (or arbitrary-precision) floating point libraries are for: arbitrarily-precise representations. Want to compute pi for the next 20 years? Python's Decimal type won't even get you through the day.

The fact is, multi-precision binary FP arithmetic is still kinda fringe science. For Python, you'll need to install the GNU MPFR library on your Linux box, then you can use the Python library gmpy2 to dive as deep as you like.
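If gmpy2 is installed, a minimal sketch looks like this (the precision value is just illustrative):

```python
# Sketch assuming the gmpy2 package (backed by MPFR) is installed:
# raise the binary precision and evaluate pi far beyond float's 53 bits.
import gmpy2

gmpy2.get_context().precision = 1000  # precision is in bits, not decimal digits
pi = gmpy2.const_pi()
print(pi)  # ~301 decimal digits of pi
```

Note that, unlike decimal, gmpy2/MPFR counts precision in bits of binary mantissa, so 1000 bits is roughly 301 decimal digits.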

Then, the question isn't, "What's the max precision my program can use?"

It's, "How do I write my program so that it'll run until the electricity goes out?"

And that's a whole other problem, but at least it's restricted by your algorithm, not the hardware it runs on.

Joseph8th