
I am attempting to write an adaptive 'about equal' method (in C#, but the question is general) that accepts two doubles and returns a boolean indicating whether they are 'about equal'. By adaptive, I mean that:

1.234 and 1.235 ==> TRUE

BUT

1.234567 and 1.234599 ==> FALSE

That is, the precision of 'about equal' adapts to the precision of the numbers.

I found a rounding-based approach at How do I find if two variables are approximately equals? but there is still the open-ended question of what to use for epsilon.

Is anyone aware of best practices for this sort of problem? Thanks in advance!

EDIT: My initial question did not contain enough information about what I was trying to get. Sorry about that and my apologies. I want the comparison to hold higher-precision numbers to a higher standard while being more lenient with lower-precision numbers. More example pairs (where '(0)' is an implied zero):

1.077 and 1.07(0) returns false (because 77 is very different from 70)

1.000077 and 1.00007(0) returns false (because 77 is very different from 70)

1.071 and 1.07(0) returns true (because 71 is close to 70)

1.000071 and 1.00007(0) returns true (because 71 is close to 70)

Regardless of the implementing code, I assume there will be a 'tolerance' variable of some sort to determine what is 'very different' and what is 'close'.
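To make the intent concrete, here is a rough sketch of the kind of adaptive tolerance I'm imagining; the string-based precision detection and the 1.5 slack factor are just placeholders, not something I'd trust as a real solution:

```csharp
using System;
using System.Globalization;

public static class AboutEqualSketch
{
    // Count digits after the decimal point in the shortest
    // round-trip ("R") string form of the value.
    private static int DecimalPlaces(double d)
    {
        string s = d.ToString("R", CultureInfo.InvariantCulture);
        int dot = s.IndexOf('.');
        return dot < 0 ? 0 : s.Length - dot - 1;
    }

    // The tolerance adapts to the more precise of the two inputs:
    // roughly one unit in the last displayed decimal place, with a
    // little slack for binary representation error.
    public static bool AboutEqual(double a, double b)
    {
        int places = Math.Max(DecimalPlaces(a), DecimalPlaces(b));
        double adaptiveEpsilon = 1.5 * Math.Pow(10, -places);
        return Math.Abs(a - b) <= adaptiveEpsilon;
    }
}
```

With this sketch, `AboutEqual(1.071, 1.07)` is true while `AboutEqual(1.077, 1.07)` is false, matching the pairs above.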

chessofnerd

3 Answers


One way to compare floating-point numbers is to count how many floating-point representations separate them. This approach is indifferent to the magnitude of the numbers, so you don't have to worry about choosing an "epsilon".

A description of the algorithm can be found here (the AlmostEqual2sComplement function at the end), and here is my C# version of it.

public static class DoubleComparerExtensions
{
    // True if no more than 'representationTolerance' distinct double
    // values (ULPs) separate 'left' and 'right'.
    public static bool AlmostEquals(this double left, double right, long representationTolerance)
    {
        long leftAsBits = left.ToBits2Complement();
        long rightAsBits = right.ToBits2Complement();
        long floatingPointRepresentationsDiff = Math.Abs(leftAsBits - rightAsBits);
        return (floatingPointRepresentationsDiff <= representationTolerance);
    }

    // Reinterpret the double's bits as a long, remapping negative values
    // so that the ordering of the longs matches the ordering of the
    // doubles (IEEE 754 is sign-magnitude, not two's complement).
    private static unsafe long ToBits2Complement(this double value)
    {
        double* valueAsDoublePtr = &value;
        long* valueAsLongPtr = (long*)valueAsDoublePtr;
        long valueAsLong = *valueAsLongPtr;
        return valueAsLong < 0
            ? (long)(0x8000000000000000 - (ulong)valueAsLong)
            : valueAsLong;
    }
}

If you'd like to compare floats, change all double to float, all long to int and 0x8000000000000000 to 0x80000000.
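For example, assuming the class above is compiled with /unsafe, usage looks like this (the tolerance of 4 is an arbitrary choice; the right value depends on how much rounding error your computation accumulates):

```csharp
double a = 0.1 + 0.2;   // 0.30000000000000004: one representation above 0.3
double b = 0.3;

Console.WriteLine(a == b);               // False: off by ~1 ULP
Console.WriteLine(a.AlmostEquals(b, 4)); // True: within 4 representations
```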

Torbjörn Kalin
  • I like this approach; however, why can't I just cast the double as a long without using the pointers and addresses? – chessofnerd May 02 '12 at 19:37
  • Casting a double to long just chops off the decimals. What you want in `ToBits2Complement` is the binary representation (sign, exponent and significand). – Torbjörn Kalin May 02 '12 at 20:11
  • As I implemented this I noticed that high precision numbers were treated differently from low precision numbers. I'm afraid that I asked my initial question rather poorly. Sorry about that! Thanks for a very intriguing (and given my initial question correct) answer! – chessofnerd May 02 '12 at 22:30
  • How about comparing zero and very small negative value. Seems that it does not work. e.g. compare 0 and -1.1102230246251565E-16 – Sebastian Widz Sep 30 '16 at 10:18

float and double do not have precision in the way you are thinking. Programs fake this kind of precision by truncating trailing zeroes... but you can't rely on that trick for your purposes, since rounding errors will prevent it from being reliable.

decimal does keep track of how many digits to place to the right of the decimal, but this is still worthless for implementing the algorithm you propose, as any operation which introduces a representational error (e.g., dividing by 3) will tend to max out the number of digits to the right of the decimal.

If you want to actually have fuzzy equality based on the known precision of your data, one way to do it would be to create your own number class. Something like this:

public class DoublePrecise
{
    public readonly double Error;
    public readonly double Value;
    public DoublePrecise(double error, double value) {
        Error = error;
        Value = value;
    }
    public static DoublePrecise operator -(DoublePrecise d1, DoublePrecise d2) {
        // Values subtract; error bounds add.
        return new DoublePrecise(d1.Error + d2.Error, d1.Value - d2.Value);
    }
    //More stuff.
}

Basically, this is letting you represent numbers like 10.0±0.1. In that situation, you would treat two numbers as being approximately equal if their ranges overlap (though in reality, this would make your equality operation mean "could be equal" and your inequality operation mean "definitely not equal").
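An overlap test along those lines might look like this (ApproximatelyEquals is my name for it, not part of the class above):

```csharp
// "Could be equal": true when the intervals
// [Value - Error, Value + Error] of the two numbers overlap.
public static bool ApproximatelyEquals(DoublePrecise d1, DoublePrecise d2)
{
    return Math.Abs(d1.Value - d2.Value) <= d1.Error + d2.Error;
}
```

For example, 10.0±0.1 and 10.15±0.1 "could be equal" (the ranges [9.9, 10.1] and [10.05, 10.25] overlap), while 10.0±0.1 and 10.3±0.1 are definitely not equal.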

See also: Interval Arithmetic

Brian
  • Interesting. Does this mean that there are non-zero values to the right of a double that has just been initialized? It seems like the computer would 'zero out' the memory that I am storing a variable in. It just seems like there should be a way to calculate an epsilon that adapts to the number precision or something as in `return(Math.ABS(d1-d2)<=adaptiveEpsilon)`. Thanks for this feedback. I am relatively inexperienced and knowing how doubles are stored in memory is very helpful! – chessofnerd May 03 '12 at 15:44
  • @chessofnerd: Yes, if you initialize the double to a value that is not representable in base 2. In .Net 4.0, `const double oops = 1.1f; Trace.WriteLine(oops);` outputs `1.10000002384186`. Of course, that number is **still** a lie. The number the program is storing is `1.10000002384185791015625`, as [DoubleConverter](http://www.yoda.arachsys.com/csharp/DoubleConverter.cs) will tell you. On the other hand, `decimal yay = 1.1m` is **exactly** equal to 1.1. – Brian May 03 '12 at 17:20
    Read [Binary Floating Point in .Net](http://csharpindepth.com/Articles/General/FloatingPoint.aspx) to learn about how doubles are stored in memory. That specific question is answered in the "What exactly does a floating point number look like in memory?" section, but I recommend reading the article in its entirety. Note that this article is about binary floating points (`float` and `double`). [This companion article](http://csharpindepth.com/Articles/General/Decimal.aspx) is about `Decimal`. – Brian May 03 '12 at 17:32
  • Thankyou! That was a _very_ good article. If I understand it correctly, the data block contains no garbage in it (it is 'zeroed' out) but seemingly innocuous base 10 numbers like 1.1 can't be represented 'nicely' in binary (it has the same problem as 1/3 in base ten). Is that about right? – chessofnerd May 04 '12 at 01:49

You could divide one by the other and see if the result is close to one, e.g.

var ratio = a / b;
var diff = Math.Abs(ratio - 1);
return diff <= epsilon;

Then you just have to choose your epsilon to decide how close the values must be.

However, in your examples, your first pair differs by about 0.08% and your second by about 0.003%, yet you want the first to be true and the second false. I'm not sure what you are really looking for, and you need to know that before you can decide on a solution (you need the right question before you can get the right answer). You might be imagining something like "the last significant digit can differ, but no more", but defining that in mathematical terms isn't so simple. E.g., should 0.49999999997 be equal to 0.5? What about to 0.500000000003? How does it make sense to compare the numbers?
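Wrapped into a self-contained method with the division-by-zero case guarded (the method and class names here are mine):

```csharp
using System;

public static class RelativeComparer
{
    // Relative comparison: |a/b - 1| <= epsilon. Note this is not
    // symmetric in a and b, and b == 0 needs special handling
    // because the ratio is undefined there.
    public static bool RelativelyEquals(double a, double b, double epsilon)
    {
        if (b == 0) return a == 0; // fall back when the ratio is undefined
        double ratio = a / b;
        double diff = Math.Abs(ratio - 1);
        return diff <= epsilon;
    }
}
```

With epsilon = 0.001, both of your example pairs come out true (0.08% and 0.003% differences are each under 0.1%), which illustrates the mismatch described above.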

Tim S.
    I recommend against this algorithm, as it is not symmetric. A safer variation is to always take the ratio between smaller number and higher number (or vice versa). Also note that this algorithm will fail if b is 0, and will return false if a and b are both approximately 0 but on other sides of the 0. – Brian May 03 '12 at 06:48