It ultimately depends on your application, but I would say generally no.
The problem, very simplified, is that if you calculate `(1/3) * 3` and get the answer `0.999999`, then you want that to compare equal to `1`. This is why we use epsilon values for equality comparisons (and the epsilon should be chosen according to the application and its expected precision).
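A minimal sketch of such an epsilon comparison (the name `almost_equal` and the tolerance `1e-9` are illustrative choices, not a standard; pick an epsilon that matches your application). Note that `(1/3) * 3` happens to round back to exactly `1.0` on typical IEEE-754 doubles, so the classic `0.1 + 0.2` case is used below to show the same effect:

```python
EPSILON = 1e-9  # illustrative tolerance; choose per application

def almost_equal(a, b, eps=EPSILON):
    """Treat a and b as equal when they differ by less than eps."""
    return abs(a - b) < eps

result = 0.1 + 0.2                # stored as 0.30000000000000004...
print(result == 0.3)              # False: exact comparison fails
print(almost_equal(result, 0.3))  # True: well within tolerance
```

Python's standard library offers `math.isclose` for the same purpose, with both relative and absolute tolerances.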
On the other hand, if you want to sort a list of floats, then by default the `0.999999` value will sort before `1`. But then again, what would the correct behavior be? If both are treated as equal to `1`, it becomes somewhat arbitrary which one ends up first (depending on the initial order of the list and the sorting algorithm you use).
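To illustrate the sorting point above (the specific values are just examples of two nearly equal but distinct floats):

```python
# Two distinct floats that an epsilon comparison might call "equal".
values = [1.0, 0.9999999999]

# Default sort uses exact < comparison: the slightly smaller float
# always comes first, regardless of any epsilon you use elsewhere.
print(sorted(values))  # [0.9999999999, 1.0]
```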
The problem with floating-point numbers is not that they are "random" or that it is impossible to predict their exact values. The problem is that base-10 fractions don't translate cleanly into base-2 fractions: a terminating decimal in one base can become a repeating one in the other, which results in rounding errors when it is truncated to a finite number of digits. We use epsilon values for equality comparisons to absorb the rounding errors that arise from these back-and-forth conversions.
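You can see this translation loss directly with Python's `decimal` module, which can print the exact base-10 value of the base-2 double that `0.1` is actually stored as:

```python
from decimal import Decimal

# 0.1 has no finite base-2 expansion (it repeats, like 1/3 in base 10),
# so the stored double is the nearest representable value instead.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```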
But do be aware that the nice properties that `==`, `<` and `<=` have for integers don't always carry over to floating points, exactly because of the epsilons involved. Example:
- `a = x`
- `b = a + epsilon/2`
- `c = b + epsilon/2`
- `d = c + epsilon/2`
Now `a == b`, `b == c` and `c == d` (each pair is within epsilon), BUT `a != d` and `a < d`: epsilon equality is not transitive. In fact, you can extend the sequence so that `num(n) == num(n+1)` holds for every consecutive pair, while at the same time the difference between `a` and the last number in the sequence grows arbitrarily large.
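The chain above can be sketched as follows (again using a hypothetical `almost_equal` helper with an illustrative epsilon of `1e-9`):

```python
EPSILON = 1e-9  # illustrative tolerance

def almost_equal(a, b, eps=EPSILON):
    return abs(a - b) < eps

a = 1.0
b = a + EPSILON / 2
c = b + EPSILON / 2
d = c + EPSILON / 2

# Each consecutive pair differs by only epsilon/2, so they compare "equal"...
print(almost_equal(a, b), almost_equal(b, c), almost_equal(c, d))  # True True True

# ...but the steps add up: d - a is 1.5 * epsilon, past the tolerance.
print(almost_equal(a, d))  # False
print(a < d)               # True
```

This is the non-transitivity in action: "equal within epsilon" links each neighbor, yet the endpoints of the chain are unambiguously unequal.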