
If one uses np.linalg.solve to solve a linear system, the result has dtype=float64.

While this is fine for "small" integers, larger ones produce inaccurate results:

import numpy as np
A = np.array([[1000000000000,2000000],[3000000000000,4000000]])
x = np.array([1000000,2000000])
np.linalg.solve(A, A @ x)  # should recover x exactly

results in

array([1000000.      , 1999999.999872])
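(For context, the inaccuracy stems from float64's 53-bit mantissa: integers above 2**53 cannot all be represented exactly. A quick check:)

```python
import numpy as np

# float64 carries a 53-bit mantissa, so consecutive integers
# above 2**53 collapse onto the same floating-point value:
print(np.float64(2**53) == np.float64(2**53 + 1))  # True: 2**53 + 1 rounds down
print(np.float64(2**53) == np.float64(2**53 - 1))  # False: 2**53 - 1 is exact
```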

My question is not why the error occurs; I am aware of that. I also know that the limit for numpy's integer dtypes is determined by the underlying C integer types.

But rather: is there a way of using/enforcing larger integers in np.linalg?

I found this: Stocking large numbers into numpy array, but that approach won't work with np.linalg.solve:

/usr/lib/python3/dist-packages/numpy/linalg/linalg.py in solve(a, b)
    401     signature = 'DD->D' if isComplexType(t) else 'dd->d'
    402     extobj = get_linalg_error_extobj(_raise_linalgerror_singular)
--> 403     r = gufunc(a, b, signature=signature, extobj=extobj)
    404 
    405     return wrap(r.astype(result_t, copy=False))

TypeError: No loop matching the specified signature and casting
was found for ufunc solve1
steffen

2 Answers


In this case you probably want to use mpmath instead of numpy: it works at arbitrary precision and has linear algebra routines.
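A minimal sketch of that approach, assuming mpmath is installed (mp.dps, matrix, and lu_solve are standard mpmath APIs; the tolerance in the comment is illustrative):

```python
from mpmath import mp, matrix, lu_solve

mp.dps = 50  # work with 50 significant digits instead of float64's ~16

A = matrix([[1000000000000, 2000000],
            [3000000000000, 4000000]])
x = matrix([1000000, 2000000])
b = A * x            # right-hand side, computed at the chosen precision

sol = lu_solve(A, b)
print(sol)           # recovers x to within the working precision
```

Note that the integers here (up to ~3e18, i.e. 19 digits) fit comfortably within 50 significant digits, so no precision is lost when building A and b.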

Roland Smith

np.linalg.solve is a wrapper around LAPACK:

The solutions are computed using LAPACK routine _gesv.

BLAS/LAPACK is all about floating-point computations! Some integer-based functionality might be supported in commercial implementations (e.g. Intel's MKL), and there is a very limited experimental branch of OpenBLAS.

But numpy supports multiple BLAS/LAPACK backends, so it only exposes what is supported everywhere: the floating-point-based operations.

For the curious, you might read a bit of umath_linalg.c.src, where all those types are easy to recognize. Given this explicitly typed source code, it's easy to see why an np.object-based array won't do.

sascha