
I'm doing calculations with variables (multiplying, etc.) and I noticed this strange behavior.

If I use these calculations:

CD = 6
CDR = 0.4

CD = float(CD) - (float(CDR) * float(CD))

Theoretically that would be 6 - (6 * 0.4) = 6 - 2.4 = 3.6, but if I then print(CD) it prints

3.5999999999999996

Is there a reason for this, and can I avoid it? Is there a way, like math.ceil, to round the number up but only to a certain decimal place, for example to x.xxxxx (5th decimal)?

(Let me know if I did anything wrong in this post. I've been finding answers on this site for a while but have never posted before, so I may have done something wrong; apologies in advance.)

Cpt. Pineapple
    http://docs.python.org/2/tutorial/floatingpoint.html It's just the nature of the beast that is floating point... it happens in nearly every language. To round it to the 5th decimal: `"%0.5f"%my_float` – Joran Beasley Jan 09 '14 at 21:03
    Also worth a read: [Is JavaScript's Floating-Point Math Broken?](http://stackoverflow.com/questions/588004/is-javascripts-floating-point-math-broken) – Lukas Graf Jan 09 '14 at 21:07
  • There is a format command in Python which can create the illusion of the float being rounded off. – abhi Jan 09 '14 at 21:10
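The string-formatting approach mentioned in the comments can be sketched like this (the variable name `x` is just for illustration):

```python
x = 6 - (0.4 * 6)
print(x)            # 3.5999999999999996

# Format to 5 decimal places; this only changes the displayed string,
# not the underlying binary float.
print("%0.5f" % x)  # 3.60000
print(f"{x:.5f}")   # same result with an f-string (Python 3.6+)
```

Note that formatting rounds for display only; `x` itself is unchanged.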

3 Answers


You can try the decimal module, but under the hood your answer is still "correct". It's just how floating point numbers convert to decimal representations.

mhlester
    Note that `decimal` is also a floating point format (just not *binary* floating point, and unlike `float` it permits arbitrarily many digits) and accordingly also has round-off errors and the like. It's just that decimal's errors match the errors we humans are used to, so it's a bit more intuitive. –  Jan 09 '14 at 21:11
  • @delnan +1 Using `Decimal` basically eliminates one class of floating point errors. It takes care of the most common problem, which is inaccuracy introduced from base conversion. You are of course correct though that it doesn't take care of order of magnitude related issues that arise when performing arithmetic with operands of significantly differing orders of magnitude, or with significant digits within a single operand separated by many orders. – Silas Ray Jan 09 '14 at 21:20
    @SilasRay Depends on what exactly you mean by "inaccuracy from base conversion". I assume you are referring to numbers originally written as decimal strings (such as "1.134"), in which case you're right, though I would hesitate to call that the most common problem. On the other hand, it doesn't solve the problem for constants that aren't given as decimal strings, such as many ratios. For example, 1/3 can't be represented in base 2 nor base 10, for exactly the same reasons. This is on top of the other issues you mention. But yes, decimal is often preferable. It's just no panacea. –  Jan 09 '14 at 21:36
  • @delnan True enough on the ratio issue, I should have thought of that. :) – Silas Ray Jan 09 '14 at 21:44

You're running into floating point arithmetic problems. Try using decimal.Decimal instead of float.

Silas Ray

If it's for display only (or peace of mind) you can do

import math
x = math.ceil(x * 100000.0) / 100000.0

However, there's no guarantee that the result will be a number that can be represented exactly in memory either (you can end up with the same 3.5999999999... issue).
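For comparison, here is the ceil trick above next to the built-in round, applied to the question's value (a sketch; the choice depends on whether you specifically want to round *up*):

```python
import math

x = 6 - (0.4 * 6)
print(x)  # 3.5999999999999996

# round() rounds to the nearest value at 5 decimal places
print(round(x, 5))

# math.ceil always rounds up at the 5th decimal place
print(math.ceil(x * 100000.0) / 100000.0)
```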

Sorin