
I've been going through a Haskell tutorial recently and noticed this behaviour when trying some simple Haskell expressions in the interactive ghci shell:

Prelude> 1.1 + 1.1 == 2.2
True
Prelude> 1.1 + 1.1 + 1.1 == 3.3
False
Prelude> 1.1 + 1.1 + 1.1 > 3.3
True
Prelude> 1.1 + 1.1 + 1.1
3.3000000000000003

Does anybody know why that is?

Josh Lee
Frerich Raabe
  • Nice JavaScript float/decimal/binary visualizer: http://babbage.cs.qc.edu/IEEE-754/Decimal.html – Seth Jan 10 '10 at 22:20
  • This is not a Haskell-specific problem: the same thing will happen in any language that uses floating point numbers. The precise details depend on the floating point implementation, but most processors use IEEE-754, which is tightly specified to ensure that programs misbehave in exactly the same way everywhere. – Paul Johnson Jan 12 '10 at 18:58

7 Answers


Because 1.1 and 3.3 are binary floating point numbers. Decimal fractions, such as .1 or .3, are not exactly representable in a binary floating point number. .1 means 1/10. To represent that in binary, where each fractional digit represents 1/2^n (1/2, 1/4, 1/8, etc.), you would need an infinite number of digits: 0.000110011001100…, with the 0011 repeating forever.
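
You can see the exact value that actually gets stored for 0.1 by asking ghci to convert it to an exact rational (toRational is in the Prelude; this is the output I'd expect for a 64-bit Double):

Prelude> toRational 0.1
3602879701896397 % 36028797018963968

That denominator is 2^55, and the fraction is slightly more than 1/10 — the closest representable value.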

This is exactly the same problem as representing, say, 1/3 in base 10. In base 10, you would need an infinite number of digits, .33333... forever, to represent 1/3 exactly. So working in base 10, you usually round, to something like .33. But if you add up three copies of that, you get .99, not 1.

For far more information on the topic, read What Every Computer Scientist Should Know About Floating-Point Arithmetic.

For representing rational numbers more precisely in Haskell, you can always use the rational data type, Ratio. Coupled with bignums (arbitrarily large integers, Integer in Haskell, as opposed to Int, which is fixed size) as the type for the numerator and denominator, it gives you arbitrarily precise rational numbers, but at a significantly slower speed than floating point numbers, which are implemented in hardware and optimized for speed.

Floating point numbers are an optimization for scientific and numerical computation: they trade precision for speed, allowing you to perform a very large number of computations in a short time, as long as you are aware of rounding and how it affects your computations.
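
As a quick ghci sketch of the Rational route (a fractional literal like 1.1 desugars to fromRational, so at type Rational nothing is lost):

Prelude> 1.1 + 1.1 + 1.1 :: Rational
33 % 10
Prelude> (1.1 + 1.1 + 1.1 :: Rational) == 3.3
True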

qld3303
Brian Campbell
  • Just a quibble, but `Ratio` is a type constructor - `Rational` is a type, the type (alias) of `Ratio Integer`. – BMeph Jul 09 '10 at 23:17
  • @BMeph: Ah yes, true. I've become more familiar with Haskell in the meanwhile (since I asked this question) and type constructors vs. types no longer confuse me when they have the same name. – Frerich Raabe Feb 28 '11 at 07:24
  • Also note that to construct a Ratio (`Ratio a` where `a` is either `Int` or `Integer`, to be precise), you use `%`, e.g. one half: `1 % 2`. – mk12 Apr 08 '12 at 00:37

Because floating-point numbers are not accurate (Wikipedia)

YuppieNetworking
  • -1 Misleading, sorry. IEEE floating point numbers *are perfectly accurate*, you just have to watch out for rounding errors with decimal numbers. You have the same problems trying to represent fractions like 1/3 in decimal (0.3333 ...). – Seth Jan 10 '10 at 21:27
  • Agreed. The issue isn't one of accuracy, it's one of representation. Numbers that can be represented completely are completely accurate. Those that lie in between those numbers can only be approximated, and _that's_ the issue. – codekaizen Jan 10 '10 at 21:37
  • So what you two nitpickers are saying is, floating-point numbers are only inaccurate when they're inaccurate? Wow, nothing gets past you people. If `1.1 * 3 != 3.3` in your math system, that is *not* accurate. – Chuck Jan 10 '10 at 21:42
  • Sure it's accurate... what went wrong was the conversion into binary floating point. So it's accurate, repeatable and predictable, just somewhat surprising. – Andrew McGregor Jan 10 '10 at 21:46
  • @Chuck - There's always the Decimal type (implemented in Python et al.), which "works in the same way as the arithmetic that people learn at school" - http://docs.python.org/library/decimal.html. It's slower though since it's not implemented in hardware. – Seth Jan 10 '10 at 22:25
  • @Seth: I didn't learn at school that 1/3 is 0.3333333...3 where the number of 3s is finite and depends on the precision that has been set. – sepp2k Jan 10 '10 at 23:25
  • @sepp2k: Still not exactly an accuracy issue. Just a representation issue because memory is finite. Floating point arithmetic gives the right answer to the wrong problem :D – Thomas Eding Jan 10 '10 at 23:28
  • It's not accurate. It only has precision. +1. – alternative Jun 08 '11 at 21:57

You can avoid floating-point errors in Haskell using rational types:

Prelude> import Data.Ratio
Prelude Data.Ratio> let a = (11 % 10) + (11 % 10) + (11 % 10)
Prelude Data.Ratio> a > (33 % 10)
False
Prelude Data.Ratio> fromRational a
3.3

Of course, you pay a performance penalty for the increased accuracy.

daf
  • You can even write `let a = 1.1 + 1.1 + 1.1 :: Rational`, with no worries about loss of precision (the literal `1.1` is actually an abbreviation for `fromRational (11%10)`). – Reid Barton Jan 14 '10 at 13:17
  • In fact, you can simply write `1.1 + 1.1 + 1.1 < (3.3 :: Rational)` to do what the OP wants exactly. It's rather un-canonical-ish that ghci defaults this calculation to float anyway, when you're not explicitly saying `1.1 + 1.1 + 1.1 < (3.3 :: Float)`. – leftaroundabout Oct 12 '11 at 12:32

Looks like a typical floating point error issue.

See What is a simple example of floating point/rounding error?

micahtan

It has to do with the way IEEE floating point numbers work.

1.1 is represented as 1.1000000000000001 in floating point, 3.3 is represented as 3.2999999999999998.

So 1.1 + 1.1 + 1.1 is actually

1.1000000000000001 + 1.1000000000000001 + 1.1000000000000001 = 3.3000000000000003

Which, as you can see, is actually larger than 3.2999999999999998.

The usual workaround is either not to test floats for equality at all, or to check whether a number is within the target ± a small epsilon (where epsilon defines the accuracy you need).

Ex: if both of the following are true, then the sum is "equal" to 3.3 (within the allowed error):

1.1 + 1.1 + 1.1 < 3.3 + 1e-9
1.1 + 1.1 + 1.1 > 3.3 - 1e-9
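
For instance, as a one-line helper along these lines (approxEq is just an illustrative name, not a standard library function):

Prelude> let approxEq eps a b = abs (a - b) < eps
Prelude> approxEq 1e-9 (1.1 + 1.1 + 1.1) 3.3
True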
Seth
  • No, 1.1 is not represented as 1.1000000000000001. It is represented as 1.0001100110011001100110011001100110011001100110011010 (base 2), which when converted to decimal begins 1.1000000000000000888… – Brian Campbell Jan 10 '10 at 22:12

Few decimal fractions can be expressed exactly in the IEEE 754 representation, so they will always be a little off.
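
You can inspect the exact significand and exponent a value actually carries with decodeFloat from the Prelude; for 1.1 on a 64-bit Double I'd expect:

Prelude> decodeFloat (1.1 :: Double)
(4953959590107546,-52)

That is 4953959590107546 × 2^-52, which is close to, but not exactly, 11/10.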

Ignacio Vazquez-Abrams

In general, you shouldn't compare floats for equality, for the reasons outlined above. The only exception I can think of is checking whether a value has changed: for example, `if (newscore /= oldscore)` then take some action. That's fine as long as you aren't comparing the results of two separate computations to check whether they happen to be equal (because even if they are equal mathematically, they may round differently); see the classic example below.
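
A minimal ghci demonstration of that last pitfall (literals defaulting to Double):

Prelude> 0.1 + 0.2 == (0.3 :: Double)
False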

mgiuca