
Here is the code:

#include <cmath>    // fabs
#include <cassert>

#define MIN_RESOLUTION  (0.0000001)

double yMax, yMin;
double scaleMin, scaleMax;

// calculate yMax, yMin
// ...

if (fabs(yMax - yMin) > MIN_RESOLUTION)
{
    scaleMin = yMin;
    scaleMax = yMax;
}
else
{
    // in my case, the following two lines execute
    scaleMin = yMin - 1.0;
    scaleMax = yMax + 1.0;

    // could still be the same...
    if (scaleMin == scaleMax)
    {
        if (scaleMax > 0) // decrease min
            scaleMin = 0;
        else              // increase max
            scaleMax = 0;
    }
}

assert(scaleMin != scaleMax);

In my example yMin == yMax == 1.6170737412027187e17. Because of that, the two lines in the else branch are always executed:

  scaleMin = yMin - 1.0;
  scaleMax = yMax + 1.0;

I understand that the precision of a double is limited (about 15 significant decimal digits) and that adding or subtracting 1 is meaningless for numbers this large. But I would then expect the == comparison to always yield true, and that is not the case.

Sometimes the two doubles are not equal at the scaleMin == scaleMax comparison, yet several lines later they are equal, i.e. the assert(scaleMin != scaleMax) is triggered. The numbers are always the same (1.6170737412027187e17).

I’ve also tried replacing the assert with this:

if (scaleMin == scaleMax)
{
    assert(false);
}

The behavior is still the same. Any ideas?

Joe
  • "Because of that, the lines 14 and 15 are allways executed" - have you *actually* verified this? – Karoly Horvath Dec 16 '13 at 10:47
  • Your expectations about the workings of floating point operations are wrong. Once you change the double you can't rely on it being equal to something else via `operator==`. You have to use `fabs`, like you did in the first comparison. – Dialecticus Dec 16 '13 at 10:50
  • Please provide a complete program to demonstrate what you claim. Because what you claim, on the face of it, cannot be so. If `==` evaluates to `true`, then `!=` evaluates to `false`. And vice versa. – David Heffernan Dec 16 '13 at 10:50
  • Floating point arithmetic on modern CPUs follows Heisenberg's law. Looking at the value changes it. Seriously though: if you look at the values (print them) they'll be exactly identical, since that will store the 80/96-bit values in memory with 64 bits of precision. Which, incidentally, is just few enough bits that the values come out exactly the same. – Damon Dec 16 '13 at 11:04
  • @Damon: floating-point on modern processors is deterministic. Floating-point in some implementations of higher languages is generally also deterministic when all variables are controlled (identical source code et cetera) but may appear non-deterministic in the face of what unsuspecting programmers might consider irrelevant changes (minor alterations to code not explicitly modifying objects), largely due to loose specification of the binding of the language to floating-point operations. – Eric Postpischil Dec 16 '13 at 11:30
  • @EricPostpischil: You do not seem to have understood my comment correctly. – Damon Dec 16 '13 at 13:31
  • possible duplicate of [Most effective way for float and double comparison](http://stackoverflow.com/questions/17333/most-effective-way-for-float-and-double-comparison) – Klas Lindbäck Dec 16 '13 at 14:18
  • @Damon: Floating-point on modern processors does not follow Heisenberg’s law. Looking at a value does not change it. – Eric Postpischil Dec 16 '13 at 14:33
  • @EricPostpischil: Oh FFS... you are really trying hard to understand it wrong. Printing the value writes 64 out of 80 or 96 bit (or whichever number of bits the FPU has) to a memory location on the stack. The same happens when a scope ends or when for some other reasons the value is written to memory somewhere. The value you see is _certainly_ changed while you look at it. Which is the OP's entire problem. The values are "obviously and demonstrably" the same when they really aren't. – Damon Dec 16 '13 at 14:37
  • @Damon: The mechanisms you refer to for printing, passing parameters, and writing data from registers to stack are features of a C++ implementation, not features of a processor. The processor behavior is deterministic, and the values do not change when observed. The bits in a register retain their values even when copied. That error in terminology aside (which is a significant error since it conveys the wrong idea to people learning about this), C++ implementations are generally deterministic, as I stated. Whether an implementation chooses to keep a value with extended precision or to round… – Eric Postpischil Dec 16 '13 at 15:23
  • … it to nominal precision is happenstance, so it is, for practical purposes, essentially random. But it is caused by any change to the code, not just printing it. The actual mechanism of printing will round the argument to nominal precision of the parameter, but that does not require the C++ implementation to change the extended precision value it has. It can both keep the extended precision and pass the reduced precision to the print routine. Yes, it can be difficult to observe the actual values in use, but that does not mean that observation is the cause of changing the values. – Eric Postpischil Dec 16 '13 at 15:28

2 Answers


Bear in mind that the representation of a floating point value held in a register is not necessarily the same as its representation in memory.

For instance, on x86 the x87 FPU by default (though you can change this) performs calculations in 80-bit extended precision, with a 64-bit significand; when a value is stored to memory as an IEEE double, it is rounded to a 53-bit significand.

So it is entirely possible, in optimised code, for values to appear to change slightly between statements as they are flushed to memory, perhaps because unrelated code needs the registers, or for other reasons outside your control.

Because of this, you cannot in general rely on == for purposes such as your sample code.
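A minimal sketch of the effect, using the value from the question. Whether the two printouts disagree depends entirely on compiler, flags and target; a 32-bit gcc build generating x87 code (no -ffloat-store, no -mfpmath=sse) may show it, while an SSE2 build will not:

#include <cstdio>

int main()
{
    volatile double init = 1.6170737412027187e17; // volatile defeats constant folding
    double y = init;

    double lo = y - 1.0;  // may be held in an 80-bit x87 register,
    double hi = y + 1.0;  // where lo and hi genuinely differ by 2

    std::printf("in registers:  %s\n", lo == hi ? "equal" : "NOT equal");

    // Force both values through 64-bit memory. Near 1.6e17 adjacent
    // doubles are 32 apart, so y - 1.0 and y + 1.0 both round back
    // to y itself.
    volatile double mlo = lo;
    volatile double mhi = hi;

    std::printf("after storing: %s\n", mlo == mhi ? "equal" : "NOT equal");
}

On a strictly IEEE-conforming build both lines print "equal"; under x87 excess precision the first line can print "NOT equal", which is exactly the inconsistency described in the question.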

moonshadow
  • The C++ standard requires floating-point values to be converted to their nominal types when an assignment is performed. Extra precision must be eliminated. Because of this, if `scaleMin` and `scaleMax` are equal after the assignments, they must be equal later, if there have been no intervening operations on them. It is likely the problem is the OP is mistaken about what they report. Alternatively, the compiler may have been placed in a non-compliant mode with an unsafe optimization switch. – Eric Postpischil Dec 16 '13 at 11:25
  • Yes, you can change the default computation size, but you **really** don't want to. Java initially required all computations with `float` to use 32-bit floating-point math, and all computations with `double` to use 64-bit floating-point math, and the result was very slow on Intel processors. It was bad enough that, in response to the protests of people who actually knew something about floating-point math, they changed that rule, and allowed the higher precision computations. – Pete Becker Dec 16 '13 at 11:55
  • @PeteBecker: It is not clear what your point is. Neither this answer nor my comment discuss changing the precisions used. They discuss the behavior of the code as it is. So there is no apparent relevance of your comment to these. – Eric Postpischil Dec 16 '13 at 12:27
  • @EricPostpischil - I was just expanding on your parenthetical comment "you can change this". – Pete Becker Dec 16 '13 at 12:47
  • @moonshadow: That was the explanation I was looking for! Thank you very much! Now I can understand what is happening. (By the way, this is very old code, part of a pretty big tool of ours, and after years we now have a problem with it…) – user3106968 Dec 16 '13 at 13:40
  • @user3106968 I don't think you do understand it. Eric's first comment is accurate. – David Heffernan Dec 16 '13 at 20:07

You cannot use an exact equality check with floats and doubles.

Even in cases where you do something like:

float a = calculate_sin_in_0();
float b = calculate_cos_in_pi_2();

the values of a and b might be slightly different because of the way the machine computes them.

To perform such a check correctly, you have to treat any difference smaller than some small value as insignificant:

const float eps = 0.00001f;
if (std::fabs(a - b) < eps) {  // consider a == b (needs <cmath>)
    // ...
}
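Note that for magnitudes like the question's 1.6170737412027187e17, an absolute eps of 0.00001 is far smaller than the spacing between adjacent doubles, so the test above degenerates into plain ==. A relative tolerance is the usual refinement; here is a sketch (the name nearlyEqual and the 1e-9 tolerance are illustrative assumptions, not a universal recipe):

#include <cmath>
#include <algorithm>

// Compare a and b against a tolerance scaled by their magnitudes
// rather than a fixed absolute threshold.
bool nearlyEqual(double a, double b, double relTol = 1e-9)
{
    return std::fabs(a - b) <= relTol * std::max(std::fabs(a), std::fabs(b));
}

As Pete Becker points out in the comments below, "nearly equal" is not transitive, so it should be reached for deliberately, not used as a blanket replacement for ==.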
podshumok
  • I agree. You should create a function to do this (for example Qt has qFuzzyCompare). – Jepessen Dec 16 '13 at 10:59
  • There is no way your specific example would not pass a simple `==` test. To say otherwise would mean that floating point arithmetic is not deterministic. – Jim Buck Dec 16 '13 at 11:03
  • @JimBuck Well, if `c` was NaN in the first place, but yeah. – BoBTFish Dec 16 '13 at 11:22
  • If that were the case, then all bets would also be off even for the `abs`/`eps` kind of test. :) – Jim Buck Dec 16 '13 at 11:59
  • You **can** and **should** use `==` with floating-point values when appropriate. "Nearly equal" is an advanced technique that will lead beginners to grief. For example, `a` nearly equals `b` and `b` nearly equals `c` does **not** imply that `a` nearly equals `c`. – Pete Becker Dec 16 '13 at 12:00
  • @JimBuck You're right: the same code on exactly the same data will produce exactly the same result. I'll edit the answer. – podshumok Dec 16 '13 at 16:36
  • @PeteBecker I think this is one of the first things beginners should learn about floating-point arithmetic. – podshumok Dec 16 '13 at 16:38
  • @PeteBecker - your comment should be highlighted with unicorn images. Too many programmers blindly do the epsilon style of comparing without knowing why, and then just repeat the advice everywhere, again without knowing why, when the most correct thing to say is that equals is totally the right answer in many cases. – Jim Buck Dec 16 '13 at 19:27