
I am solving a large non-linear system of equations and I need a high degree of numerical precision. I am currently using sympy.lambdify to convert symbolic expressions for the system of equations and its Jacobian into vectorized functions that take ndarrays as inputs and return an ndarray as output.

By default, lambdify returns an array with dtype of numpy.float64. Is it possible to have it return an array with dtype numpy.float128? Perhaps this requires the inputs to have dtype of numpy.float128?
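
For concreteness, a minimal sketch of this kind of setup (the two-equation system and the symbols below are hypothetical, just to illustrate the lambdify usage):

import numpy as np
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Matrix([x ** 2 + y ** 2 - 1, x - y ** 3])  # hypothetical system F(x, y) = 0
J = F.jacobian([x, y])                            # symbolic Jacobian of F

f = sp.lambdify((x, y), F, modules='numpy')
jac = sp.lambdify((x, y), J, modules='numpy')

f(0.5, 0.5).dtype
#>>> dtype('float64')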

davidrpugh
  • Indeed, the output dtype should follow the inputs more or less. Actually, lambdify does not specify any dtype. It really just converts the sympy expression into a python expression. If you apply it to numpy arrays, the usual numpy casting rules will apply. – burnpanck Oct 08 '14 at 13:53
  • If you need an **arbitrary-precision** and **fast** computation engine (both limited only by your hardware), give **`fractions.Fraction`** or **`decimal.Decimal`** a try rather than the somewhat longer but still fundamentally inexact float representation of rational numbers (a minimal sketch follows below). For a quantitative comparison of `numpy` processing performance for these object classes, see http://stackoverflow.com/a/26248202/3666197 – user3666197 Oct 09 '14 at 01:25
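
A minimal sketch of that suggestion (standard library only; the precision setting is just an example):

from fractions import Fraction
from decimal import Decimal, getcontext

Fraction(1, 3) + Fraction(1, 6)   # exact rational arithmetic
#>>> Fraction(1, 2)

getcontext().prec = 50            # 50 significant digits for Decimal
Decimal(2).sqrt()
#>>> Decimal('1.4142135623730950488016887242096980785696718753769')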

2 Answers


If you need a lot of precision, you can try using SymPy floats, or mpmath directly (which is part of SymPy), both of which provide arbitrary precision. For example, sympy.Float('2.0', 100) creates a float of 2.0 with 100 digits of precision, and sympy.sin(2).evalf(100) gives 100 digits of sin(2). This will be a lot slower than numpy because it is arbitrary precision, meaning it doesn't use machine floats, and it is implemented in pure Python (whereas numpy is written in C and Fortran).
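
A minimal sketch of these options (the precision values are arbitrary; passing modules='mpmath' makes lambdify target mpmath instead of numpy):

import sympy as sp
import mpmath

sp.Float('2.0', 100)          # a SymPy Float carrying 100 digits of precision
sp.sin(2).evalf(100)          # 100 digits of sin(2)

mpmath.mp.dps = 50            # set mpmath to 50 decimal places
x = sp.Symbol('x')
f = sp.lambdify(x, sp.sin(x), modules='mpmath')
f(mpmath.mpf(2))              # an mpmath.mpf with ~50 significant digits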

asmeurer
  • Thanks. I was aware of the ability to gain arbitrary precision using mpmath, but I don't think I can take the performance hit. – davidrpugh Oct 13 '14 at 03:20

The output just reflects the input:

from numpy import float128
from sympy.abc import x
from sympy.utilities import lambdify

f = lambdify(x, x ** 2)   # no dtype is fixed here; numpy casting rules apply
result = f(float128(2))   # pass in a float128 scalar

result
#>>> 4.0

type(result)
#>>> <class 'numpy.float128'>
Veedrac
  • Great! Thanks. I suppose this leads to a follow-up. If I am running a 64-bit machine, can I gain floating point precision simply by using `numpy.float128` instead of `numpy.float64`? If so, what is the tradeoff? – davidrpugh Oct 08 '14 at 16:46
  • I was surprised to find that eps for float128 in the numpy implementation barely has any advantage over float64: on my machine it is 1.084202172485504434e-19 (from np.finfo(np.float128).eps; see the check below). Maybe the OP is better off looking into the mpmath subpackage of sympy? – Bjoern Dahlgren Oct 09 '14 at 20:56
  • 1
    Ah, http://stackoverflow.com/questions/9062562/what-is-the-internal-precision-of-numpy-float128 points out that `float128` actually means `float80` on most computers; the `128` refers to alignment, not precision. – Veedrac Oct 09 '14 at 21:23
  • I also noticed barely any improvement in precision from switching to float128 on my machine. – davidrpugh Oct 13 '14 at 03:21
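
To see what the comments above are measuring, a quick platform check (numpy.float128 is an alias for numpy.longdouble where it exists; on most x86 machines it is 80-bit extended precision, not true quad precision):

import numpy as np

np.finfo(np.float64).eps
#>>> 2.220446049250313e-16

np.finfo(np.longdouble).eps   # platform-dependent; 80-bit extended on most x86
#>>> 1.084202172485504434e-19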