
In versions of .NET prior to 4.0, if a structure contained only primitive fields, the struct's Equals method would return true only if all fields matched precisely. In 4.0, Microsoft changed this behavior so that a structure with fields of type Decimal can return true if the corresponding fields hold values representing the same numerical quantity, even if they do not match in other details (most notably, Decimal values which differ only in trailing zeroes are considered equal). Unfortunately, there are contexts where this is absolutely disastrous. For example, given two immutable objects X and Y, it should be possible to substitute Y for X if all fields of X precisely match those of Y. For such substitution to be safe, however, the match must be precise: if a field in X contains 0.1m and the corresponding field in Y contains 0.10m, the objects should not be considered equal, because the observable behavior of Y would differ from that of X.
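To make the observable difference concrete, here is a minimal illustration; the Decimal.Equals and ToString behavior noted in the comments is exactly what the question relies on:

using System;

class TrailingZeroDemo
{
    static void Main()
    {
        decimal x = 0.1m, y = 0.10m;
        Console.WriteLine(x.Equals(y));   // True  -- same numerical quantity
        Console.WriteLine(x);             // 0.1
        Console.WriteLine(y);             // 0.10  -- the trailing zero is observable
    }
}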

If Microsoft hadn't overridden Equals on those types to mean something other than equivalence (which is what it means for just about every other type), one could safely assume that if one object reports it Equals another, instances of the latter could be substituted for the former; even given the non-equivalence overrides on Decimal, one could test for equivalence by wrapping those types in a structure. Given that Microsoft no longer allows that method of equivalence testing, what would one have to do to achieve the same result? Would there be any way for an AlternateEqualityComparer<T>.Default property to determine whether a struct defines its own override of Object.Equals (which should be used) or relies on the system-provided one (in which case, if the struct contains any fields whose Equals is not equivalence-based, the comparer should fall back to a test based on equivalence)?
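As for whether such a Default property could tell the two cases apart: reflection can at least distinguish a type that declares its own Equals(object) from one that inherits the runtime-supplied implementation. A minimal sketch, assuming this is what the hypothetical AlternateEqualityComparer<T> would check (DeclaresOwnEquals is a name of my own invention, not a framework member):

using System;
using System.Reflection;

static class EqualsInspection
{
    // True if the type itself declares Equals(object), as Decimal does;
    // false if it merely inherits the Object/ValueType implementation,
    // as a struct with an auto-generated Equals does.
    public static bool DeclaresOwnEquals(Type t)
    {
        MethodInfo eq = t.GetMethod("Equals",
            BindingFlags.Public | BindingFlags.Instance, null,
            new[] { typeof(object) }, null);
        return eq != null && eq.DeclaringType == t;
    }
}

Under that sketch, DeclaresOwnEquals(typeof(decimal)) would return true, while for a struct such as the Test<T1,T2> type in the example program below it would return false, which is the distinction the hypothetical Default property would need to make.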

Edit: I had tested the behavior using a struct that contained floating-point fields as well as a Decimal field; even though Decimal is not a reference type, its presence within the struct caused the auto-generated Equals method not to use binary comparison for any of the fields.

Edit 2: Here's an example program that shows the issue in .NET 4.0; I'd read that the behavior had changed since earlier versions of .NET, though this program seems to run the same on both 2.0 and 4.0:

using System;

// Generic struct with two public fields; its Equals is the runtime-supplied one.
struct Test<T1,T2>
{
    public T1 f1;
    public T2 f2;
    public Test(T1 p1, T2 p2)
    {
        f1 = p1;
        f2 = p2;
    }
}

class Program
{
    static void DoCompares<T1,T2>(Test<T1,T2> thing1, Test<T1,T2> thing2)
    {
        // Print both f1 values, whether the f1 fields compare equal on their own,
        // and whether the structs as a whole compare equal.
        Console.WriteLine("{0}/{1}/{2} {3}",
            thing1.f1, thing2.f1, thing1.f1.Equals((Object)thing2.f1),
            thing1.Equals(thing2));
    }
    static void DoTest<T1, T2>(T1 p1a, T1 p1b, T2 p2)
    {
        Test<T1,T2> thing1 = new Test<T1,T2>(p1a, p2);
        Test<T1,T2> thing2 = new Test<T1,T2>(p1b, p2);
        DoCompares(thing1, thing2);
    }
    static void Main(string[] args)
    {
        DoTest(1.0m, 1.00m, 1.0);   // Decimal differing only in trailing zero; Double alongside
        DoTest(1.0m, 1.00m, 1.0m);  // Decimal differing only in trailing zero; Decimal alongside
        DoTest(1.0 / (1.0 / 0.0), -1.0 / (1.0 / 0.0), 1.0m); // +0.0 vs -0.0 Double; Decimal alongside
        DoTest(1.0 / (1.0 / 0.0), -1.0 / (1.0 / 0.0), 1.0);  // +0.0 vs -0.0 Double; Double alongside
        Console.ReadLine();
    }
}

If the structure contains only fields of type Double, its auto-generated Equals reports the structures as distinct unless they match bit-for-bit. If it also contains a field of type Decimal, even though Decimal is a structure containing no reference fields, then Equals reports the structures as identical whenever their numerical values are equal, even if they are not equivalent.

In any case, whether or not .NET has ever insisted upon binary matching for Decimal fields, the question remains: what would be the best way of performing a structure comparison such that value types which contain no reference-type fields are compared using bit-wise equality rather than numerical equality? Is there any way to honor a structure type's "explicit" override of Equals while using an alternate equality comparer for structures whose Equals is auto-generated?
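One possible shape for such a comparer, purely as a sketch under my own assumptions (the name StrictEquals and every policy choice in it are mine, not anything the framework provides): defer to any Equals(object) the type declares itself, and otherwise walk the fields, comparing Double, Single and Decimal at the bit level.

using System;
using System.Reflection;

static class StrictEquality
{
    // Treats two objects as equal only when they are substitutable: same type, and for
    // structs that rely on the runtime-supplied Equals, bit-identical fields
    // (so 0.1m is distinct from 0.10m, and +0.0 is distinct from -0.0).
    public static bool StrictEquals(object x, object y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        Type t = x.GetType();
        if (t != y.GetType()) return false;

        // Bit-level comparison for the numeric types whose built-in Equals is value-based.
        if (t == typeof(double))
            return BitConverter.DoubleToInt64Bits((double)x)
                == BitConverter.DoubleToInt64Bits((double)y);
        if (t == typeof(float))
            return BitConverter.ToInt32(BitConverter.GetBytes((float)x), 0)
                == BitConverter.ToInt32(BitConverter.GetBytes((float)y), 0);
        if (t == typeof(decimal))
        {
            int[] xb = decimal.GetBits((decimal)x), yb = decimal.GetBits((decimal)y);
            for (int i = 0; i < 4; i++)
                if (xb[i] != yb[i]) return false;
            return true;
        }

        // Other primitives and enums already compare by exact bit pattern.
        if (t.IsPrimitive || t.IsEnum) return x.Equals(y);

        // Honor an Equals(object) override the type itself declares; reference types
        // likewise keep their own Equals semantics.
        MethodInfo eq = t.GetMethod("Equals",
            BindingFlags.Public | BindingFlags.Instance, null, new[] { typeof(object) }, null);
        if (!t.IsValueType || (eq != null && eq.DeclaringType == t))
            return x.Equals(y);

        // Struct with only the runtime-supplied Equals: recurse field by field instead.
        foreach (FieldInfo f in t.GetFields(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
        {
            if (!StrictEquals(f.GetValue(x), f.GetValue(y))) return false;
        }
        return true;
    }
}

With the Test<T1,T2> struct above, StrictEquals(new Test<decimal,double>(1.0m, 1.0), new Test<decimal,double>(1.00m, 1.0)) would report false where the built-in Equals reports true; the obvious costs are reflection overhead and the judgment call of how reference-type fields should be treated.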

supercat
  • I'm not sure how you end up in a situation where whether an input is `0.1m` or `0.10m` produces different behavior, but aren't you trying to fix the wrong problem? – Damien_The_Unbeliever Nov 18 '12 at 19:53
  • I'm also not sure if what you mention is correct. The [current docs](http://msdn.microsoft.com/en-us/library/2dts52z7.aspx) say that *If none of the fields of the current instance and obj are reference types, the Equals method performs a **byte-by-byte** comparison of the two objects in memory.* Bit-identical is about as identical as you can get. Have I misunderstood the question? – Jon Nov 18 '12 at 20:14
  • I've just tested Equals with positive and negative zero and it seems they are bit-wise compared under 4.0 (compiled with VS 2012), i.e. a double negative zero field makes a struct not equal to one with double positive zero field. – Serge Belov Nov 18 '12 at 22:26
  • @Jon: I guess `double` still uses binary comparisons, but `Decimal` doesn't; when I wrote the question I figured since Decimal's struct-equals behavior now mirrors its (unfortunate) `Equals` override behavior, that `double`/`float` would too. Let me reword the question. – supercat Nov 19 '12 at 03:09
  • @SergeBelov: If the struct contains a field of type `Decimal`, even though that is not a reference type, it won't use binary comparison for floating-point fields. – supercat Nov 19 '12 at 03:20
  • @Damien_The_Unbeliever: Since calling `ToString` on 4.30m yields a different answer from calling it on 4.3m, that would seem a pretty sound basis for saying that substituting one of those values for the other should not be presumed safe. – supercat Nov 19 '12 at 03:36
  • @supercat: You need to provide a minimal example that exhibits this behavior. – Jon Nov 19 '12 at 09:20
  • @supercat: Hmmm, I'm getting a `false` for the last comparison, but only if `1.0` is a `double` (`1.0f` and `1.0m` compare equal). This is starting to sound like a defect. Interesting. – Jon Nov 19 '12 at 16:27
  • @Jon: What are you doing with 1.0f? My point with the comparison is that the "false" in the last case is what I'd like to see for all cases (meaning that if one instance has positive zero and another has negative zero, or one has 1.0m while another has 1.00m, the instances should remain distinct). IMHO, the "universal" object-equality function should have been defined to use a "strict" definition of equality, recognizing that such behavior would often differ from what was desired of an `==` function (also recognizing that since `Equals` cannot match `==` in every case... – supercat Nov 19 '12 at 16:40
  • ...it would be better to recognize that it *means* something different than to pretend that they should mean the same thing. Among other things, while `Equals` is supposed to define an equivalence relation, `==` does not (given `long l1=1L<<63, l2=l1+1; double d1=l1;`, then `l1 != l2` even though `l1 == d1` and `l2 == d1`). – supercat Nov 19 '12 at 16:49
  • @supercat I believe that Decimal.Equals was never doing bit-wise comparison, [MSDN](http://msdn.microsoft.com/en-US/library/h16625ka%28v=vs.80%29.aspx) gives lots of examples on how it checks for value rather than representation and that's for .NET 2.0. However, your question is relevant and interesting, thanks a lot. – Serge Belov Nov 19 '12 at 22:42
  • @SergeBelov: I don't have the first test program which led me to believe the behavior had changed, but things have probably always worked as they do. The fact that `Equals` rules for a struct member can change depending upon what else is contained in the struct seems 'surprising', though. – supercat Nov 19 '12 at 22:53

0 Answers