So I've understood for a long time that floating point equality is not exact in any programming language. But until recently, when a bug at work exposed this issue, I never realized just how bizarre these situations can be.
Here are some examples:
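Each of the following is mathematically true but prints False (the exact values here are just representative; plenty of other combinations behave the same way):

Console.WriteLine(0.1 + 0.2 == 0.3);                    // False: 0.1 + 0.2 evaluates to 0.30000000000000004
Console.WriteLine(1.0 - 0.9 - 0.1 == 0.0);              // False: a residual of about -2.8e-17 is left over
Console.WriteLine(Math.Sqrt(2) * Math.Sqrt(2) == 2.0);  // False: the product comes out as 2.0000000000000004
Console.WriteLine(0.3 >= 0.1 + 0.2);                    // False: 0.1 + 0.2 is slightly more than 0.3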
I understand at a low level why this happens. But for practical purposes in my application (and I would wager in most applications), we would want all four of the above examples to be true.
Most solutions I've found involve taking the absolute value of the difference between the two variables and comparing it against a small precision factor. For example:
var isEqual = Math.Abs(a - b) < 1e-15;
Or for greater than or equal to:
var isAGreaterThanB = (a + 1e-15) >= b;
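For what it's worth, that does make all four of the examples above come out true:

Console.WriteLine(Math.Abs((0.1 + 0.2) - 0.3) < 1e-15);                 // True
Console.WriteLine(Math.Abs(1.0 - 0.9 - 0.1) < 1e-15);                   // True
Console.WriteLine(Math.Abs(Math.Sqrt(2) * Math.Sqrt(2) - 2.0) < 1e-15); // True
Console.WriteLine(0.3 + 1e-15 >= 0.1 + 0.2);                            // True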
But I've noticed a few issues with this:
- The above doesn't necessarily work when comparing double and float types (see the snippet after this list)
- It can be difficult to understand, particularly as part of a larger expression
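To make the first issue concrete: a float carries only about 7 significant decimal digits, so when it gets widened to double for the comparison, the two encodings of the "same" value differ by far more than 1e-15:

float f = 0.1f;
double d = 0.1;

// f is widened to double for the subtraction; the two representations
// of 0.1 differ by roughly 1.5e-9, which dwarfs the 1e-15 tolerance.
Console.WriteLine(Math.Abs(f - d));          // ~1.49E-09
Console.WriteLine(Math.Abs(f - d) < 1e-15);  // False, even though both were written as 0.1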
So my question is: what is the ideal way to determine practical equality for floating point numbers? I'm currently using C#, but I would be interested in answers for other common programming languages as well.
My definition of ideal here is as follows:
- Works 100% of the time
- Easy to read/understand
- High performance
Thanks!