
I'm aware of Decimal, however I am working with a lot of code written by someone else, and I don't want to go through a large amount of code to change every initialization of a floating point number to Decimal. It would be more convenient if there was some kind of package where I could put SetPrecision(128) or such at the top of my scripts and be off to the races. I suspect no such thing exists but I figured I would ask just in case I'm wrong.

To head off XY Problem comments, I'm solving differential equations which are supposed to be positive invariant, and one quantity which has an equilibrium on the order of 1e-12 goes negative regardless of the error tolerance I specify (using scipy's interface to LSODA).
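To illustrate what I mean: `decimal` does have a global context, which is the closest thing I've found to a one-line precision switch, but it only governs `Decimal` instances — the built-in `float` (and every float literal already in the code) is untouched by it:

```python
from decimal import Decimal, getcontext

# The decimal context is global, but only Decimal arithmetic obeys it:
getcontext().prec = 50
print(Decimal(1) / Decimal(3))   # 50 significant digits
print(1 / 3)                     # float literal arithmetic: still a 53-bit double
```

So every initialization would still need to be rewritten by hand, which is exactly what I'm hoping to avoid.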

William Bell
  • There is no way to change the precision of the built-in `float` type, which is an IEEE 754 double-precision floating point number type. So any solution is going to require changing some of the code you're working with. But see http://mpmath.org/ for a library which can do arithmetic/mathematical computations with configurable precision. – kaya3 Jan 26 '20 at 02:04
  • Is this using `numpy` or some other math/array package? – hpaulj Jan 26 '20 at 02:18
  • Tags should reflect the use of packages like `scipy` and `numpy` – hpaulj Jan 26 '20 at 02:43
  • @hpaulj The dependencies are negotiable enough that I can change them if it turns out the problem calls for it. I know we're not supposed to use many tags, so I've added scipy, currently I also have some components written in numpy. Iirc the Decimal package is built with numpy compatibility in mind. – William Bell Jan 26 '20 at 04:50
  • How do you know this is a problem of precision? And have you confirmed that the scipy code will even run with 128-bit floats? – Jan Christoph Terasa Jan 26 '20 at 07:16
  • Are you sure `Decimal` and `numpy` are compatible? And what about the compiled Fortran underlying LSODA? https://stackoverflow.com/q/7770870 – hpaulj Jan 26 '20 at 07:17
  • On the use of `mpmath` with `scipy` https://stackoverflow.com/q/59652764, https://stackoverflow.com/q/59440969 – hpaulj Jan 26 '20 at 16:41
  • @JanChristophTerasa I believe it is a problem of precision because small numbers (~1e-12) go negative for a positive invariant DE, and because every suggestion related to that seems to point to scaling as the likely problem for that kind of issue. Like I said before, the dependencies are negotiable enough that I'd rather find a solution that works with higher precision and give up scipy than keep scipy with standard precision. – William Bell Jan 26 '20 at 18:21

2 Answers


Yes, but no.

> The bigfloat package is a Python wrapper for the GNU MPFR library for arbitrary-precision floating-point reliable arithmetic. The MPFR library is a well-known portable C library for arbitrary-precision arithmetic on floating-point numbers. It provides precise control over precisions and rounding modes and gives correctly-rounded reproducible platform-independent results.

https://pythonhosted.org/bigfloat

You would then need to coerce the builtin float to be bigfloat everywhere, which would likely be non-trivial.
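To sketch why the coercion is pervasive (using the stdlib `decimal` module here for illustration, since `bigfloat` may not be installed): any float literal left unconverted has already committed to double precision at the point of initialization, so the error survives even after conversion to the high-precision type:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

# Converting an existing float carries its binary-double representation
# error along; only editing the source to use a string/Decimal literal
# avoids it — hence every initialization must be touched.
print(Decimal(0.1))     # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal("0.1"))   # exactly 0.1
```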

ShpielMeister
  • Yeah, I'm aware of the existence of high precision libraries like `bigfloat` and `decimal`, I'm looking for something that will make the coercion everywhere less non-trivial. – William Bell Jan 26 '20 at 18:22
  • @ev-br As I've stated three times, I am open to rewriting to avoid LSODA. – William Bell Jan 26 '20 at 19:41
  • " setcontext() function for changing the current context; however, the preferred method for making temporary changes to the current context is to use Python’s with statement." – ShpielMeister Jan 26 '20 at 20:28

LSODA exposed through `scipy.integrate` is double precision only.

You might want to look into rescaling your variables, so that the quantity whose equilibrium is on the order of 1e-12 becomes closer to unity.
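As a toy illustration of the rescaling idea (the ODE here is hypothetical, not your system): integrating u = y / 1e-12 instead of y itself keeps the state O(1), where a double has ample relative precision and the positive-invariant equilibrium is well resolved:

```python
# Hypothetical decaying quantity with equilibrium y_eq = 1e-12.
# Work in the rescaled variable u = y / y_eq, so the equilibrium is u = 1.
y_eq, k, dt = 1e-12, 1.0, 0.01

def du_dt(u):
    # dy/dt = -k * (y - y_eq), rewritten in terms of u = y / y_eq
    return -k * (u - 1.0)

u = 2.0                        # corresponds to y0 = 2e-12
for _ in range(2000):          # simple explicit Euler, just for illustration
    u += dt * du_dt(u)
print(u * y_eq)                # back in original units; stays positive
```

The same change of variables can be applied before handing the system to LSODA, which then never sees numbers near 1e-12 at all.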

EDIT. In the comments, you indicated

> As I've stated three times, I am open to rewriting to avoid LSODA

Then what you can try is looking over the code of `solve_ivp`, which is pure Python. Feed it decimals or mpmath high-precision floats, observe where it fails, find where it assumes double precision, and rewrite to remove that assumption. Rinse and repeat. Whether it'll work in the end, I don't know. Whether it's worth it, I suspect not, but YMMV.
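As a rough sketch of what "precision-agnostic" could look like (a toy fixed-step integrator, not `solve_ivp` itself): a stepper that only uses arithmetic operators runs unchanged on `Decimal` (or mpmath) values:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50        # ~166 bits of working precision

def rk4_step(f, t, y, h):
    """One classical RK4 step; works for any type supporting +, *, /."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# dy/dt = -y, integrated entirely in Decimal arithmetic
f = lambda t, y: -y
t, y, h = Decimal(0), Decimal(1), Decimal("0.1")
for _ in range(10):
    y = rk4_step(f, t, y, h)
    t += h
print(y)                      # close to exp(-1), computed at high precision
```

The hard part in `solve_ivp` is everything around the stepper: error norms, tolerances, and step-size control are all written against numpy doubles, and each of those sites would need the same treatment.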

ev-br