
I was just playing around with Python and found something interesting: my computer (i5, 3 GHz) hung after several hours of attempting to compute 10 ** 10 ** 10. I know math isn't the purpose Python was created for, but I wonder whether there is a way to help Python compute it.

What I have so far is this observation: n ** (2 ** lg(n ** n)) (where lg is log base 2) runs about twice as fast as n ** n ** n:

n = 8 ** (8 ** 8)
n2 = 8 ** (2 ** 24)
# measured by timeit
> 4.449993866728619e-07
> 1.8300124793313444e-07
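
For reference, here is one way to redo the measurement so the interpreter cannot fold the literal expression into a precomputed constant at compile time (sub-microsecond timings for a result with about 15 million digits suggest that may be what happened above); the exact numbers will of course depend on the Python version and hardware:

import timeit

base = 8  # using a variable prevents compile-time constant folding

# Both expressions compute the same value, since 8 ** 8 == 2 ** 24.
# Each call really builds the ~15-million-digit result, so this takes a while.
t1 = timeit.timeit(lambda: base ** (base ** base), number=1)
t2 = timeit.timeit(lambda: base ** (2 ** 24), number=1)
print(t1)
print(t2)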

1) Does anyone have an idea how to compute n ** n ** n in the most efficient way?

2) Can generators help to minimize memory usage?

Rudziankoŭ

2 Answers


10 ** 10 ** 10 is a very, very large number, and Python is trying to allocate enough memory to represent it. 10,000,000,000 (10 billion) digits take a lot more memory than your computer can provide in one go, so your computer is now swapping memory out to disk to make space, which is why things have become so very, very slow.

To illustrate, try using sys.getsizeof() on some numbers that do fit:

>>> import sys
>>> sys.getsizeof(10 ** 10 ** 6)
442948
>>> sys.getsizeof(10 ** 10 ** 7)
4429264

so ten times as many digits require roughly ten times more memory. The amounts above are in bytes: a 1-million-digit number takes almost half a megabyte, and 10 million digits take about 4 megabytes. Extrapolating, your number would require roughly 4 gigabytes of memory. Whether Python will even be given that much memory depends on your OS and hardware.

Python stores integers in chunks of 30 bits on modern platforms, so every 30 bits require an additional 4 bytes of storage. For 10 billion digits that comes down to (log2(10 ** 10 ** 10) / 30 * 4) / (1024 ** 3), or about 4.125 GiB.
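
To reproduce that estimate without building the number itself, note that log2(10 ** 10 ** 10) == 10 ** 10 * log2(10), so the arithmetic can be done on small numbers (a quick sketch):

import math

# bits needed: log2(10 ** 10 ** 10) == 10 ** 10 * log2(10)
bits = 10 ** 10 * math.log(10, 2)

# CPython packs the value into 30-bit digits, each stored in 4 bytes
size_in_bytes = bits / 30 * 4

print(size_in_bytes / 1024 ** 3)  # roughly 4.125 (GiB)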

In practice, you can't use Python to represent numbers this large. Not even floating point numbers can reach that high:

>>> 10.0 ** 10 ** 10
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')

I'm not that familiar with bignum (big number) handling in Python; perhaps the gmpy library has better facilities for representing such numbers.
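
If you want to try it, a minimal sketch with gmpy2 (untested here; per the comments below, expect on the order of 10 GB of RAM and a few minutes of CPU time on a 64-bit system):

import gmpy2

# gmpy2's mpz type uses GMP's big-integer arithmetic, which is far faster
# than CPython's built-in int for operands of this size.
n = gmpy2.mpz(10) ** (10 ** 10)

print(gmpy2.num_digits(n, 10))  # decimal digit count (may be off by one)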

Martijn Pieters
  • Almost 4GB of memory if I [calculated](http://www.wolframalpha.com/input/?dataset=&i=log2(10%5E10%5E10)+%2F+8+%2F+1024+%2F+1024+%2F+1024) correctly. – arekolek Mar 22 '16 at 16:02
  • @arekolek: added a more accurate calculation. It is *more* than 4GB because only 30 bits of every 4 bytes are used. – Martijn Pieters Mar 22 '16 at 16:13
  • To give you a feel for how big `10**10**10` is, the estimated amount of ordinary matter in the observable universe is "only" `10**10**1.72427` kg. – chepner Mar 22 '16 at 16:44
  • @MartijnPieters `gmpy2` should be able to calculate the value on a 64-bit Linux system with 16+ GB of RAM. I won't be able to test the actual memory requirements until later. Unfortunately, the currently available version of `gmpy2` for Windows fails. – casevh Mar 22 '16 at 16:45
  • `gmpy2` requires ~10 GB of RAM and ~4 minutes to calculate the exact value. I did a quick estimate of the running time for Python to calculate the exact value using the native long type - about a week. – casevh Mar 23 '16 at 05:31

If integer precision is not of utmost importance, you could use floating point numbers:

>>> 3**3**3
7625597484987
>>> 3.**3.**3.
7625597484987.0

However, for larger values those will quickly reach their limits:

>>> 5.**5.**5.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: (34, 'Numerical result out of range')

You can get much higher with decimal:

>>> import decimal
>>> d = decimal.Decimal
>>> d(5)**d(5)**d(5)
Decimal('1.911012597945477520356404560E+2184')
>>> d(10)**d(10)**d(8)
Decimal('1.000000000000000000000000000E+100000000')

By default, even those cannot represent 10**10**10:

>>> d(10)**d(10)**d(10)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/decimal.py", line 2386, in __pow__
    ans = ans._fix(context)
  File "/usr/lib/python2.7/decimal.py", line 1676, in _fix
    ans = context._raise_error(Overflow, 'above Emax', self._sign)
  File "/usr/lib/python2.7/decimal.py", line 3872, in _raise_error
    raise error(explanation)
decimal.Overflow: above Emax

But those limits are not fixed. Using getcontext() you can make them as big as you want:

>>> decimal.getcontext().Emax = 1000000000000
>>> d(10)**d(10)**d(10)
Decimal('1.000000000000000000000000000E+10000000000')

But remember that those numbers are not 100% precise to the last digit (your computer probably does not even have enough memory to store each digit), so don't be surprised if this happens:

>>> d(10)**d(10)**d(10) == d(10)**d(10)**d(10) + 1000000
True
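
How many digits Decimal actually keeps is governed by the context's prec attribute (28 significant digits by default); raising it buys more accuracy, but nowhere near enough for a 10-billion-digit integer. A small self-contained sketch building on the example above:

import decimal

d = decimal.Decimal
ctx = decimal.getcontext()
ctx.Emax = 1000000000000  # allow exponents up to 10**12, as above
ctx.prec = 50             # keep 50 significant digits instead of the default 28

x = d(10) ** d(10) ** d(10)

# Even with 50 digits of precision, adding a million changes nothing the
# context can represent: the true difference lies about 10**10 digits down.
print(x == x + 1000000)   # True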
tobias_k