
I am a university student. I am learning Python and writing some programs. Recently, I noticed the differences below.

>>> 0.57*100
56.99999999999999
>>> 0.58*100
57.99999999999999
>>> 0.56*100
56.00000000000001
>>> 0.55*100
55.00000000000001

The code above was executed in the Python interpreter. My Python version is 3.7.7, and it is CPython.

I multiplied many other two-decimal-place numbers, such as 0.54, 0.19, 0.99, etc., by 100.

But only 0.57 and 0.58 become something like 56.999999999... or 57.999999... when multiplied by 100.

And only 0.55*100 and 0.56*100 become something like 55.00000000000001 or 56.00000000000001.

Most other two-decimal-place numbers, such as 0.19 and 0.99, give the expected value: 19.0, 99.0.

Do these differences occur only on my computer, or on everyone's? And if every Python environment has these differences, why do they happen?
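For context, the exact value Python actually stores for a float literal can be inspected with the standard `decimal` module, which shows why some products come out slightly off while others look exact:

```python
from decimal import Decimal

# Passing a float to Decimal reveals the exact binary value Python stores.
# The literal 0.57 is converted to the nearest IEEE 754 double, which is
# slightly below 0.57, so multiplying by 100 lands just under 57.
print(Decimal(0.57))

# 0.19 is also stored inexactly, but for this value the rounding error
# happens to cancel when the product is rounded back to a double, so the
# result looks "correct".
print(0.57 * 100)  # 56.99999999999999
print(0.19 * 100)  # 19.0
```

This behaves the same on every machine that uses IEEE 754 doubles, which is essentially every platform CPython runs on.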

talonmies

1 Answer


These "mistakes" happen in Python and in almost every programming language, on every computer: they are not specific to your machine. Floats are stored in binary (IEEE 754 doubles in CPython), and most decimal fractions, like 0.57, have no exact binary representation, so the value actually stored is very slightly above or below the literal you typed. Multiplying by 100 makes that tiny error visible.

What you can do is define a variable "epsilon" holding the precision you care about, for example 0.001, and compare results against it instead of testing for exact equality, or simply round the result to the number of decimal places you actually need.
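A minimal sketch of the epsilon idea described above, plus two standard alternatives (`round()` and `math.isclose`, the latter available since Python 3.5); the variable names are illustrative:

```python
import math

product = 0.57 * 100          # 56.99999999999999, not 57.0

# 1) Round to the number of decimal places you actually need:
print(round(product, 9))      # 57.0

# 2) Compare against a small tolerance ("epsilon") instead of ==:
epsilon = 1e-9
print(abs(product - 57.0) < epsilon)  # True

# 3) math.isclose does the tolerance comparison for you:
print(math.isclose(product, 57.0))    # True
```

For money or anything where exact decimal arithmetic matters, the standard `decimal.Decimal("0.57")` (constructed from a string) avoids the binary rounding entirely.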

gabriel