
To preface: I have already tried to find an answer to my question online, but I find it difficult to even formulate the question correctly, so here I am:

Can anyone explain why the following two lines of code print two different results?

print(9.5*0.1)
print(0.95*1)

0.9500000000000001

0.95

I imagine there is some floating-point arithmetic reason behind this, but I'm really curious, since I performed essentially the same operation with (almost) the same numbers, and I was wondering why the results are formatted differently.

paolopazzo

1 Answer


It all depends on how floats are represented in the computer:

Floating-point representation has a finite resolution: a Python float is a 64-bit binary double with 53 bits of significand, so only a limited set of numbers can be stored exactly. 9.5 is exactly representable in binary, but 0.1 is not, so what is stored is the nearest double, which is slightly larger than 0.1. Computing 9.5*0.1 multiplies 9.5 by that approximation and then rounds the product to the nearest double again, and the result lands on a different double than the one nearest to 0.95. By contrast, 0.95*1 simply returns the stored representation of 0.95, which Python displays as 0.95 because repr() prints the shortest decimal string that round-trips to the same double. You can read more about it here: https://en.wikipedia.org/wiki/Floating-point_arithmetic#:~:text=Floating%2Dpoint%20representation%20is%20similar,significand%2C%20mantissa%2C%20or%20coefficient.
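You can see the exact stored values yourself with the standard-library `decimal` module: constructing a `Decimal` directly from a float reveals the exact binary value behind it. A minimal sketch:

```python
from decimal import Decimal

# Decimal(float) shows the exact value the binary64 double actually stores.
print(Decimal(9.5))        # 9.5 -- exactly representable (19 * 2**-1)
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.95))       # the double nearest 0.95 (not exactly 0.95)
print(Decimal(9.5 * 0.1))  # the rounded product -- slightly above the line above

# repr() prints the shortest decimal string that round-trips, so the
# double nearest 0.95 displays as "0.95" while the product does not:
print(9.5 * 0.1 == 0.95)   # False
```

Note that 0.95 itself is also not exactly representable; the point is that `0.95*1` and `9.5*0.1` round to two *different* nearby doubles.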

Tamir
  • so if I understood correctly, it's because 0.1 in the first case doesn't have an exact binary representation (0.1000000000000000055511151231257827021181583404541015625, from what I read in the answer), while obviously 1 has one, since it's 2^0. Am I correct? – paolopazzo Mar 05 '22 at 16:23
  • Yes, this is the basic idea behind what you got. – Tamir Mar 05 '22 at 16:29
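As a quick check of the point made in the comments, `float.hex()` prints a float's exact significand and power-of-two exponent, confirming that 1.0 and 9.5 are stored exactly while 0.1 is a rounded approximation. A small sketch:

```python
# float.hex() shows the exact binary significand and exponent:
print((1.0).hex())   # 0x1.0000000000000p+0  -- exact (it is 2**0)
print((0.1).hex())   # 0x1.999999999999ap-4  -- a rounded approximation
print((9.5).hex())   # 0x1.3000000000000p+3  -- exact (1.0011 binary * 2**3)

# So 0.95 * 1 returns the stored double for 0.95 unchanged, while
# 9.5 * 0.1 must round the product of two exact doubles once more.
print(0.95 * 1 == 0.95)   # True
```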