
I'm writing code to convert binary numbers between 0 and 1 to decimal. I wrote the code and tested it with 0.1 (equivalent to 0.5 in decimal), and it worked. When I tested it with 0.01 and 0.001, I got wrong answers (albeit close). I stepped through it in Python Tutor and found that on the second iteration it failed to convert the float 0.1 to a string; it returned "0.09999999999999964" instead. Is there another way to make this conversion?

This is a conversion algorithm from a numerical methods course.

  • Read about numerical representations, and specifically about floating point. You can't expect there to be a mathematically perfect representation of all numbers, so you're going to have to define what you want to do with rounding errors and explicitly deal with them. – ShlomiF Jul 06 '19 at 22:04
  • This thread will answer all your questions https://stackoverflow.com/questions/21895756/why-are-floating-point-numbers-inaccurate – cymruu Jul 06 '19 at 23:34

1 Answer


The error is caused by floating-point rounding. You can round the number when formatting it as a string with format:

str(0.1 + 0.2)
# => '0.30000000000000004'

'{:.10f}'.format(0.1 + 0.2)
# => '0.3000000000'

The format string .10f tells format to render the float with 10 digits after the decimal point.
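If the trailing zeros from .10f are unwanted, two standard-library variations on the same idea: round() trims the noise at the float level, and the g presentation type drops trailing zeros at the string level:

str(round(0.1 + 0.2, 10))
# => '0.3'

'{:.10g}'.format(0.1 + 0.2)
# => '0.3'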

Alternatively, you can use a more exact representation of numbers, such as decimal:

from decimal import Decimal
str(Decimal('0.1') + Decimal('0.2'))
# => '0.3'

Notice that 0.1 and 0.2 are written as strings, so they are never converted to float and never pick up the rounding error in the first place.
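Compare what happens without the quotes: Decimal(0.1) starts from the float, so the rounding error is baked in before Decimal ever sees the value:

from decimal import Decimal

Decimal(0.1)
# => Decimal('0.1000000000000000055511151231257827021181583404541015625')

Decimal('0.1')
# => Decimal('0.1')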

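Applied to the binary-to-decimal task in the question, one way to avoid floats entirely is to build the value bit by bit with Decimal. This is a sketch (binary_fraction_to_decimal is an illustrative name; the original code isn't shown):

from decimal import Decimal

def binary_fraction_to_decimal(bits):
    # Split e.g. '0.001' into integer part '0' and fractional part '001'.
    integer_part, _, fraction_part = bits.partition('.')
    result = Decimal(int(integer_part, 2))
    # The i-th bit after the point is worth bit * 2**-(i + 1), computed exactly.
    for i, bit in enumerate(fraction_part):
        result += int(bit) * Decimal(2) ** -(i + 1)
    return result

print(binary_fraction_to_decimal('0.1'))    # 0.5
print(binary_fraction_to_decimal('0.01'))   # 0.25
print(binary_fraction_to_decimal('0.001'))  # 0.125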
Alex