
There's an algorithm that is sensitive to the precision of the output. In particular, the paper requires a real matrix inverse to be computed at 500 decimal digits of precision. I wanted to write a script to check the result with Python. However, the largest float data type in numpy is np.float128, and the decimal package does not seem to have a matrix inverse function.
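
For reference, numpy itself reports how many decimal digits np.float128 actually carries; on most x86-64 builds it is an 80-bit extended double with roughly 18 significant digits, nowhere near 500:

import numpy as np

# np.float128 is typically an 80-bit extended double under the hood,
# not a true IEEE quad; finfo reports its real precision.
info = np.finfo(np.float128)
print(info.precision)  # approximate significant decimal digits (~18)
print(info.eps)        # machine epsilon (~1.1e-19)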

I found the post Matrix inverse with Decimal type NumPy, which passes Decimal objects through numpy's functions. But I saw comments on Stack Exchange that Python converts the objects back to plain floats because of operators like %.

Is there a way to perform a matrix inversion at 500-decimal precision in Python?

  • You likely don't have enough *input* precision to get a result accurate to 500 decimal places anyway. – user2357112 Mar 06 '23 at 23:54
  • You can write your own inversion function (a minimal sketch follows these comments). This is quite simple. Generally, the difficult parts are the optimization and the numerical stability, but Decimal will make the code inefficient anyway, and numerical stability is not much of a problem when you can tune the precision (as long as the error does not grow exponentially). By the way, mainstream machines do not support 128-bit floats. The largest float supported by mainstream x86/x86-64 processors is 80 bits wide (non-standard). POWER processors support it, but AFAIK not ARM. – Jérôme Richard Mar 07 '23 at 00:03
  • @user2357112 It's kind of an approximation algorithm; the precision of the intermediate steps is tied to the numerical stability and the output precision. The inputs were actually integers, but the precision required by the algorithm increases with the precision of the output, at least that's what the paper said. – ShoutOutAndCalculate Mar 07 '23 at 00:04
  • You can certainly *store* numbers with that kind of precision, but you aren't going to get Numpy-tier optimized matrix inversions without a lot of custom work (probably involving Python bindings for GMP) - and even then it might not be as fast as you expect. Working in integers doesn't solve the problem; it just shifts the decimal point. Numpy is designed to parallelize computations with types that are natively supported by the machine; yours presumably doesn't have, say, a 2048-bit native type, even on the GPU or other custom hardware. – Karl Knechtel Mar 07 '23 at 00:18
  • "and the deicmal package does not seem to have a matrix inverse function." - Well, yes; the `decimal.Decimal` standard library class represents *individual values*, not matrices. – Karl Knechtel Mar 07 '23 at 00:20
  • @KarlKnechtel, what's the point of closing with that duplicate? All it says is use `Decimal` for individual numbers. I would have suggested `mpmath`. But in any case, that doesn't help with matrix inversion, which is a non-trivial calculation, and will be extremely slow. – hpaulj Mar 07 '23 at 00:29
  • What size of matrix are we talking about? If it is small enough, you might provide an example. And since you are working from some sort of paper, you might reference that. I don't know if it's relevant, but numpy's inverse is actually a `solve` of a linear equation: `dot(Ainv, A) = I`, i.e. `Ainv = solve(A, I)`. There might be a lot more math theory behind this problem than just a matter of precision. – hpaulj Mar 07 '23 at 01:26
  • I see that `mpmath` has a `Matrix`. Apparently you can create a matrix (actually stored as a `dict`), and ask for its negative power `A**-1`. I don't know how practical this is for large matrices. https://mpmath.org/doc/current/matrices.html – hpaulj Mar 07 '23 at 01:32
  • @hpaulj It increases with the precision of the output. The size of the matrix goes from 100 to 2000, with the required precision going from 50 to 500. – ShoutOutAndCalculate Mar 07 '23 at 03:04
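
As a minimal sketch of the "write your own inversion function" suggestion above: Gauss-Jordan elimination with partial pivoting on decimal.Decimal entries (invert_decimal is a hypothetical helper name; this is illustrative, not optimized):

from decimal import Decimal, getcontext

getcontext().prec = 500  # work at 500 significant digits

def invert_decimal(rows):
    """Invert a square matrix (list of lists) by Gauss-Jordan
    elimination with partial pivoting, using Decimal arithmetic."""
    n = len(rows)
    # Build the augmented matrix [A | I] with Decimal entries.
    aug = [[Decimal(v) for v in row] +
           [Decimal(int(i == j)) for j in range(n)]
           for i, row in enumerate(rows)]
    for col in range(n):
        # Partial pivoting: choose the largest-magnitude pivot.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if aug[pivot][col] == 0:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Normalize the pivot row, then eliminate the column elsewhere.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

inv = invert_decimal([[1, 3, 4], [0, 1, 2], [3, 0, 1]])
print(inv[0][0])  # 1/7 = 0.142857... carried to 500 digits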

1 Answer


Following https://mpmath.org/doc/current/matrices.html#advanced-methods, I imported mpmath and created a (3,3) matrix:

In [289]: A = mpmath.matrix([[1,3,4],[0,1,2],[3,0,1]])

In [290]: A
Out[290]: 
matrix(
[['1.0', '3.0', '4.0'],
 ['0.0', '1.0', '2.0'],
 ['3.0', '0.0', '1.0']])

And calculated its inverse:

In [291]: A**-1
Out[291]: 
matrix(
[['0.142857142857142857142857142857', '-0.428571428571428571428571428571', '0.285714285714285714285714285714'],
 ['0.857142857142857142857142857143', '-1.57142857142857142857142857143', '-0.285714285714285714285714285714'],
 ['-0.428571428571428571428571428571', '1.28571428571428571428571428571', '0.142857142857142857142857142857']])

The equivalent numpy operation:

In [293]: M = np.array([[1,3,4],[0,1,2],[3,0,1]])
In [294]: M
Out[294]: 
array([[1, 3, 4],
       [0, 1, 2],
       [3, 0, 1]])
In [295]: np.linalg.inv(M)
Out[295]: 
array([[ 0.14285714, -0.42857143,  0.28571429],
       [ 0.85714286, -1.57142857, -0.28571429],
       [-0.42857143,  1.28571429,  0.14285714]])

And the comparative times:

In [296]: timeit np.linalg.inv(M)
17.2 µs ± 31 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

In [297]: timeit A**-1
1.19 ms ± 1.26 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

In this case I have the dps (decimal places, mpmath's working precision) set at 30 (the default is 15, similar to a float64):

In [298]: mpmath.mp.dps
Out[298]: 30

SO posters often worry that the dot of the inverse with the array doesn't quite produce the identity matrix. But notice how small the near-zero terms are:

In [299]: Out[295]@M
Out[299]: 
array([[ 1.00000000e+00, -5.55111512e-17, -5.55111512e-17],
       [ 1.11022302e-16,  1.00000000e+00,  3.88578059e-16],
       [ 0.00000000e+00,  0.00000000e+00,  1.00000000e+00]])

And to 30 dps:

In [301]: Out[291]*A
Out[301]: 
matrix(
[['1.0', '-7.22223729145213444895991728469e-35', '-4.81482486096808963263994485646e-35'],
 ['1.44444745829042688979198345694e-34', '1.0', '3.37037740267766274284796139952e-34'],
 ['-2.40741243048404481631997242823e-35', '9.62964972193617926527988971292e-35', '1.0']])
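
To reach the 500 digits the question asks for, the same approach works; just raise mp.dps before computing (a quick sketch; expect it to be much slower than the 30-dps timing above, and far slower still for a 2000x2000 matrix):

import mpmath

mpmath.mp.dps = 500  # 500 significant decimal digits

A = mpmath.matrix([[1, 3, 4], [0, 1, 2], [3, 0, 1]])
Ainv = A**-1  # matrix inverse at the current working precision
residual = Ainv * A - mpmath.eye(3)
print(mpmath.mnorm(residual, 1))  # max column-sum norm; roughly 1e-499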
hpaulj