8

How can I switch the default floating-point precision to a different, possibly custom, one? I need this because in some of my calculations I can see that the default floating-point precision is not sufficient.

user983302
  • Python and numpy use double precision floating point by default. What makes you believe it has insufficient "accuracy"? – talonmies Nov 18 '12 at 17:24
  • Have you looked at http://docs.python.org/2/library/decimal.html – Jon Clements Nov 18 '12 at 17:26
  • You can use `decimal`s for your calculations, but then you lose the performance advantage of `numpy`. To the best of my knowledge you can't get out of this tradeoff - the most precise data type that a processor can work with natively is the one `numpy` uses. – millimoose Nov 18 '12 at 17:26
  • I'm solving a differential equation, where I can't use decimal numbers, and I'm stuck with a problem which, as it seems to me, comes from loss of accuracy. – user983302 Nov 18 '12 at 17:32
  • One option that people sometimes use is to work with log() of the values instead of the original ones. It is worth looking into since you then will not require any libraries. – Bitwise Nov 18 '12 at 18:29

2 Answers

8

I've recently had to deal with this problem, and the mpmath library was perfect. It is pure Python, under a BSD license.

Mpmath is a pure-Python library for multiprecision floating-point arithmetic. It provides an extensive set of transcendental functions, unlimited exponent sizes, complex numbers, interval arithmetic, numerical integration and differentiation, root-finding, linear algebra, and much more. Almost any calculation can be performed just as well at 10-digit or 1000-digit precision, and in many cases mpmath implements asymptotically fast algorithms that scale well for extremely high precision work.

It is not too much slower, and it can leverage the gmpy library if installed (a C-coded Python extension module that supports fast multiple-precision arithmetic).
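A minimal sketch of what this looks like in practice: `mp.dps` sets the working precision in decimal digits, and `odefun` (mpmath's own ODE solver, mentioned in the comments below) solves an initial value problem at that precision. The example equation y' = y, y(0) = 1 is chosen here purely for illustration.

```python
from mpmath import mp, mpf, odefun, exp

# Raise the working precision to 50 significant decimal digits.
mp.dps = 50

x = mpf(1) / 3
print(x)  # 1/3 carried to 50 digits instead of double precision's ~16

# mpmath's own ODE solver works at the current precision.
# Solve y' = y with y(0) = 1; the solution is exp(x).
f = odefun(lambda x, y: y, 0, 1)
print(f(1))  # agrees with exp(1) to the chosen precision
```

Note that mixing `mpf` values into `scipy.integrate` routines will not work (they coerce to double precision arrays), which is why `odefun` is the natural fit here.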

Zenon
  • Using scipy.integrate.odeint I get "TypeError: array cannot be safely cast to required type" =( It seems that scipy.integrate.odeint convert everything to float. – user983302 Nov 18 '12 at 18:38
  • But it has its own ode solver - [odefun](http://mpmath.googlecode.com/svn/trunk/doc/build/calculus/odes.html). OK, let this be the solution. – user983302 Nov 18 '12 at 18:52
  • Updated text about [unlimitedness](http://mpmath.org/doc/current/technical.html#representation-of-numbers): "Mpmath uses arbitrary precision integers for both the mantissa and the exponent, so numbers can be as large in magnitude as permitted by the computer’s memory. Some care may be necessary when working with extremely large numbers." – Bob Stein Mar 15 '16 at 15:39
2

If you want greater accuracy, I would instead advise you to use the bigfloat package (since this is exactly what it is made for). Alternatively, you can also look into the standard library's Decimal class.
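For the Decimal route, here is a minimal sketch using only the standard library; the context's `prec` attribute controls the number of significant digits (bigfloat offers a similar `precision` context, but requires the MPFR C library to be installed):

```python
from decimal import Decimal, getcontext

# Raise the working precision to 50 significant digits.
getcontext().prec = 50

a = Decimal(1) / Decimal(7)
print(a)  # 0.142857... carried to 50 digits
```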

arshajii
  • Is it possible to make scipy.integrate and bigfloat to deal with each other? Currently I get "TypeError: array cannot be safely cast to required type" when pass BigFloat arguments to scipy.integrate.odeint – user983302 Nov 18 '12 at 17:57
  • @user983302 Maybe [this](http://stackoverflow.com/questions/7770870/numpy-array-with-dtype-decimal) will help you. – arshajii Nov 18 '12 at 17:59