I was looking at the implementation of compare(double, double) in the Java standard library (Java 6). It reads:

public static int compare(double d1, double d2) {
    if (d1 < d2)
        return -1;       // Neither val is NaN, thisVal is smaller
    if (d1 > d2)
        return 1;        // Neither val is NaN, thisVal is larger

    long thisBits = Double.doubleToLongBits(d1);
    long anotherBits = Double.doubleToLongBits(d2);

    return (thisBits == anotherBits ?  0 : // Values are equal
            (thisBits < anotherBits ? -1 : // (-0.0, 0.0) or (!NaN, NaN)
             1));                          // (0.0, -0.0) or (NaN, !NaN)
}

What are the merits of this implementation?


edit: "Merits" was a (very) bad choice of words. I wanted to know how this works.

DavidS
  • Related to this and all the discussion here: check out this bit of JDK horribleness in the area of floating point comparison: http://publicobject.com/2009/11/floating-point-equality.html – Kevin Bourrillion Nov 21 '09 at 17:27
  • @KevinBourrillion - The way I read it, that is actually about the horribleness of the IEE floating point standard. Java has to behave according to the IEE standard, because that is what the hardware on a modern machine implements. – Stephen C Oct 20 '17 at 13:31

4 Answers

The explanation is in the comments in the code. Java has distinct double values for both 0.0 and -0.0, as well as "not a number" (NaN). You can't use the simple == operator for these values. Take a peek at the doubleToLongBits() source and at the Javadoc for the Double.equals() method:

Note that in most cases, for two instances of class Double, d1 and d2, the value of d1.equals(d2) is true if and only if

d1.doubleValue() == d2.doubleValue()

also has the value true. However, there are two exceptions:

  • If d1 and d2 both represent Double.NaN, then the equals method returns true, even though Double.NaN == Double.NaN has the value false.
  • If d1 represents +0.0 while d2 represents -0.0, or vice versa, the equal test has the value false, even though +0.0 == -0.0 has the value true.

This definition allows hash tables to operate properly.
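Both exceptions are easy to observe directly. A small sketch (the class name is just for illustration):

```java
public class DoubleEqualsDemo {
    public static void main(String[] args) {
        // Exception 1: NaN. The == operator says unequal,
        // but equals() and compare() treat two NaNs as equal.
        System.out.println(Double.NaN == Double.NaN);                       // false
        System.out.println(Double.valueOf(Double.NaN)
                                 .equals(Double.valueOf(Double.NaN)));      // true
        System.out.println(Double.compare(Double.NaN, Double.NaN));         // 0

        // Exception 2: signed zero. The == operator says equal,
        // but equals() and compare() distinguish -0.0 from 0.0.
        System.out.println(0.0 == -0.0);                                    // true
        System.out.println(Double.valueOf(0.0)
                                 .equals(Double.valueOf(-0.0)));            // false
        System.out.println(Double.compare(-0.0, 0.0));                      // -1
    }
}
```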

shoover
@Shoover's answer is correct (read it!), but there is a bit more to it than this.

As the javadoc for Double::equals states:

"This definition allows hash tables to operate properly."

Suppose that the Java designers had decided to implement equals(...) and compare(...) with the same semantics as == on the wrapped double instances. This would mean that equals() would always return false for a wrapped NaN. Now consider what would happen if you tried to use a wrapped NaN in a Map or Collection.

List<Double> l = new ArrayList<Double>();
l.add(Double.NaN);
if (l.contains(Double.NaN)) {
    // this won't be executed.
}

Map<Object, String> m = new HashMap<Object, String>();
m.put(Double.NaN, "Hi mum");
if (m.get(Double.NaN) != null) {
    // this won't be executed.
}

Doesn't make a lot of sense, does it?

Other anomalies would exist because -0.0 and +0.0 have different bit patterns but are equal according to ==.
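The differing bit patterns are easy to see with doubleToLongBits(); a quick illustration (class name is hypothetical):

```java
public class ZeroBitsDemo {
    public static void main(String[] args) {
        // +0.0 and -0.0 have different IEEE 754 bit patterns:
        // -0.0 has only the sign bit set.
        System.out.println(Long.toHexString(Double.doubleToLongBits(0.0)));   // 0
        System.out.println(Long.toHexString(Double.doubleToLongBits(-0.0)));  // 8000000000000000

        // ...yet == treats them as equal.
        System.out.println(0.0 == -0.0);  // true
    }
}
```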

So the Java designers decided (rightly IMO) on the more complicated (but more intuitive) definition for these Double methods that we have today.

Stephen C
  • Thanks, Stephen. I'm marking your answer correct instead of shoover's because I wanted to know why the Java designers implemented compare the way they did (i.e. "the merits"), not just what the code did (although I didn't know that either). My apologies to shoover for not being more definite, but thanks to you both. – DavidS Nov 13 '09 at 16:40
  • Due to floating point accuracy issues, I would expect nothing but problems from code using them as lookup keys. – Sam Harwell Nov 19 '09 at 20:12
  • @280228 - well yes, the keys *could* be generated in a way that is insensitive to rounding errors, etc. The point is that if someone did build an application that avoided the problem of floating point errors, they would expect the Collection types to work. – Stephen C Nov 19 '09 at 21:30
  • @SamHarwell: Suppose a function which takes a `double` is somewhat expensive to compute; depending upon how many unique operand values are expected, such a function could use a `Map` so that invocation with a value for which it has already been computed could simply return the result of that earlier computation. – supercat Jan 15 '15 at 22:48
The merit is that it's the simplest code that fulfills the specification.

One common characteristic of rookie programmers is to overvalue reading source code and undervalue reading specifications. In this case, the spec:

http://java.sun.com/javase/6/docs/api/java/lang/Double.html#compareTo%28java.lang.Double%29

... makes the behavior and the reason for the behavior (consistency with equals()) perfectly clear.
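The consistency the spec requires is checkable in a few lines: compareTo() returns 0 exactly when equals() returns true, including in the NaN and signed-zero corner cases (class name is illustrative):

```java
public class ConsistencyDemo {
    public static void main(String[] args) {
        Double a = Double.valueOf(Double.NaN);
        Double b = Double.valueOf(Double.NaN);
        // Two NaNs: compareTo() == 0 and equals() is true.
        System.out.println(a.compareTo(b) == 0);  // true
        System.out.println(a.equals(b));          // true

        Double x = Double.valueOf(0.0);
        Double y = Double.valueOf(-0.0);
        // Signed zeros: compareTo() != 0 and equals() is false.
        System.out.println(x.compareTo(y) == 0);  // false
        System.out.println(x.equals(y));          // false
    }
}
```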

Kevin Bourrillion
  • Hmm, you're right, I should have read that closer, but you shouldn't assume I undervalue the specifications. I started with the Javadoc for compare(...), then moved on to doubleToLongBits(...), and then checked the Wikipedia article on IEEE 754, by which time I had forgotten all about the earlier mention of compareTo(...). So, I think you'll have to agree, overvaluing the source code isn't my problem, but inefficiently reading the specifications is! p.s. Your link is broken. – DavidS Nov 13 '09 at 17:04
That implementation gives a total ordering in which every real value is less than NaN, and -0.0 is less than 0.0.
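That ordering shows up when sorting boxed values, since Arrays.sort on a Double[] uses Double.compareTo (same total ordering as Double.compare); a quick sketch with an illustrative class name:

```java
import java.util.Arrays;

public class SortOrderDemo {
    public static void main(String[] args) {
        Double[] values = { Double.NaN, 1.0, -0.0, 0.0, -1.0 };
        // -0.0 sorts before 0.0, and NaN sorts after every real value.
        Arrays.sort(values);
        System.out.println(Arrays.toString(values));
        // [-1.0, -0.0, 0.0, 1.0, NaN]
    }
}
```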

phoebus