
When I run the code:

count = 0

while count < 1
  count += 0.1
  puts count
end

I would expect:

0.1 
0.2 
0.3 
. . . 

However, I have been getting:

0.1
0.2
0.30000000000000004
0.4
0.5
0.6
0.7
0.7999999999999999
0.8999999999999999
0.9999999999999999
1.0999999999999999

Can anyone help explain this?

Zanaqua
  • Search Stack Overflow for questions about floating point precision. There are lots of answers already. Here’s one that I wrote: http://stackoverflow.com/questions/28512650/strange-output-when-using-float-instead-of-double/28512770#28512770 – yellowantphil May 03 '15 at 00:28
  • I STRONGLY encourage you to read [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – Amxx May 03 '15 at 00:41
  • Hint: don't repeatedly add 0.1 - instead, increment an *integer* by 1 then at each step, multiply by 0.1. You will still have a similar problem, but you'll get *less* rounding error. Have a look at "[*Is floating point math broken?*](https://stackoverflow.com/q/588004/1364007)" for a **very** detailed discussion of your problem. – Wai Ha Lee May 03 '15 at 00:47

3 Answers


Think of it this way:

Your computer has only 64 bits (or 32, for single precision) to represent a number. That means it can represent only a finite set of numbers.

Now consider all the decimal values between 0 and 1. There are infinitely many of them. How could your machine possibly represent all real numbers if it can't even represent every number between 0 and 1?

The answer is that your machine has to approximate decimal numbers, and that approximation is what you are seeing.
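You can see the approximation directly by asking Ruby for more digits than its default shortest representation shows (a quick sketch, not part of the original answer):

```ruby
# 0.1 has no exact 64-bit binary representation; asking for more digits
# than the default shortest form reveals the value actually stored.
printf("%.20f\n", 0.1)   # => 0.10000000000000000555

# The tiny error accumulates as you keep adding:
sum = 0.0
10.times { sum += 0.1 }
puts sum          # => 0.9999999999999999
puts sum == 1.0   # => false
```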

Of course, there are libraries that overcome these limitations so that you can still represent decimal numbers exactly. One such library is BigDecimal:

require 'bigdecimal'

count = BigDecimal("0")
while count < 1
  count += BigDecimal("0.1") # add an exact decimal step, not the Float 0.1
  puts count.to_s('F')
end

(Note that the step must also be a BigDecimal: adding the Float literal 0.1 would reintroduce the rounding error, and modern versions of the library raise a TypeError rather than silently mix the two. `BigDecimal.new` has also been removed in current Ruby; use the `BigDecimal(...)` conversion method instead.)

The downside is that these libraries are generally slower at arithmetic, because they are implemented in software on top of the CPU's native floating-point operations.
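If all you need is exact tenths, Ruby's built-in Rational class is another option (a minimal sketch, not from the original answer):

```ruby
# Rational stores one tenth exactly as the fraction 1/10,
# so repeated addition never drifts.
count = Rational(0)
while count < 1
  count += Rational(1, 10)
  puts count.to_f   # prints 0.1, 0.2, 0.3, ... 1.0 with no stray digits
end
```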

Martin Konecny

Floating-point numbers cannot precisely represent all real numbers, and floating-point operations cannot exactly mirror true arithmetic operations. This leads to many surprising situations.

I advise reading: https://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

You may want to use BigDecimal to avoid such problems.
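As a quick illustration of the difference (a sketch; `String#to_d` comes from the `bigdecimal/util` helper bundled with the library):

```ruby
require 'bigdecimal'
require 'bigdecimal/util'  # adds String#to_d

float_sum   = 0.1 + 0.2
decimal_sum = "0.1".to_d + "0.2".to_d

puts float_sum                         # => 0.30000000000000004
puts decimal_sum.to_s('F')             # => 0.3
puts decimal_sum == BigDecimal("0.3")  # => true
```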

spickermann

This is one of the many consequences of how floating-point numbers are represented in memory!

Explaining exactly what is happening would take a long time, and other people have already done it better, so the best thing for you would be to read about it elsewhere:

You can also have a look at those previous questions on SO:

Amxx