
I am aware that to compare two floating point values one needs to use some epsilon precision, as they are not exact. However, I wonder if there are edge cases where I don't need that epsilon.

In particular, I would like to know if it is always safe to do something like this:

#include <iostream>

double somethingelse(double); // declared here so the snippet parses; assumed to never return 0.0

double foo(double x){
    if (x < 0.0) return 0.0;
    else return somethingelse(x); // somethingelse(x) != 0.0
}

int main(){
    double x = -3.0;
    if (foo(x) == 0.0) {
        std::cout << "^- is this comparison ok?" << std::endl;
    }
}

I know that there are better ways to write foo (e.g. returning a flag in addition), but I wonder whether in general it is OK to assign 0.0 to a floating point variable and later compare it to 0.0.

Or more generally, does the following comparison always yield true?

double x = 3.3;
double y = 3.3;
if (x == y) { std::cout << "is an epsilon required here?" << std::endl; }

When I tried it, it seemed to work, but it might be that one should not rely on that.

463035818_is_not_an_ai
  • Why do you believe that `0.0` is special in this regard? – Oliver Charlesworth Aug 23 '16 at 18:39
  • @OliverCharlesworth because I would be surprised if 0.0 cannot be represented exactly by a floating point – 463035818_is_not_an_ai Aug 23 '16 at 18:41
  • @tobi303 - Indeed it can. But the same is true for lots of numbers (in the region of 2^64 of them, in fact). – Oliver Charlesworth Aug 23 '16 at 18:41
  • @NathanOliver even if I don't do any math but only compare for equality? – 463035818_is_not_an_ai Aug 23 '16 at 18:42
  • And YES, I did read your entire question, and YES, it is an exact duplicate of another question about comparing particularly to zero. – Ben Voigt Aug 23 '16 at 18:43
  • removed comment. Thought I saw something different. I believe this is safe but not 100% certain. – NathanOliver Aug 23 '16 at 18:43
  • @BenVoigt If there is a duplicate, I won't complain. I just didn't find it. – 463035818_is_not_an_ai Aug 23 '16 at 18:44
  • Your premise is wrong. It's perfectly possible and sensible to compare floating point numbers. It's not the type that's inexact, it's only *operations* that are inexact. – Kerrek SB Aug 23 '16 at 18:44
  • Why would `3.3` not be `3.3`? As base-2 numbers, the exact value may be different, but surely different in the same way. These must compare equal, I would say. – Johannes Schaub - litb Aug 23 '16 at 18:46
  • @BenVoigt well I read the question and I think it is not an exact dupe. In that question `x` and `y` may come from some calculation (e.g. `x = 4.0 - 1.0; y = 3.0`) while `x=3.3;y=3.3; x==y;` yields the correct result – 463035818_is_not_an_ai Aug 23 '16 at 18:46
  • @tobi303: You may also be interested in this one: http://stackoverflow.com/q/21416022/103167 – Ben Voigt Aug 23 '16 at 18:46
  • @BenVoigt that is a much better dupe ;) – 463035818_is_not_an_ai Aug 23 '16 at 18:47
  • @AndyG I would not want to use a floating point implementation that could not exactly express 0.0. If there was math involved I could see an issue, but if we have `double foo = 0.0; double bar = 0.0;` then `foo` and `bar` should be equal. – NathanOliver Aug 23 '16 at 18:47
  • @JohannesSchaub-litb [This answer](http://stackoverflow.com/a/21416119/4117728) states that comparing 0.0 to 0.0 is fine, while in general comparing x to x is not fine (well, not exactly; I allowed myself to generalize a bit) – 463035818_is_not_an_ai Aug 23 '16 at 18:49
  • For any non-NaN, comparing `x==x` will always be true, given `x` is some float variable. If you're computing `x` and `y` and comparing `x==y`, then no, obviously these may be different. Some people find scenarios where they are surprised to find them different; this is just the unaccounted-for error behavior of floating point. It's still perfectly deterministic and defined, and you could account for it properly if you wanted. Most people are lazy and just use an epsilon-comparison instead. This has nothing to do with identity comparisons, though. – GManNickG Aug 23 '16 at 18:52
  • Ah I see. I think I remember reading about it. For example with x87 doing floating point math in 80 bits instead of 64bits. I don't see why the compiler can't stick the "correct" 64bit floating point constant into the 80bit immediate operand (?) then, though.. Sorry, no x86 assembler knowledge here – Johannes Schaub - litb Aug 23 '16 at 18:53
  • @GManNickG the situation here is not that of a computation, but of `double d = 0.1; d == 0.1;` which apparently may be false, because the processor may compare with higher precision than that of type `double`. As for myself, I never compare non-zero doubles directly anyway, but I would like to see a coliru testcase that fails for such values. – Johannes Schaub - litb Aug 23 '16 at 18:55
  • @GManNickG so you claim that [this answer](http://stackoverflow.com/a/21416119/4117728) isn't correct? Might be true; actually I didn't read the quite long discussion yet – 463035818_is_not_an_ai Aug 23 '16 at 18:56
  • Reopened. The "duplicate" question was more general, and did not address the specific question asked here. – Pete Becker Aug 23 '16 at 19:06
  • @tobi303: That answer is correct. Note that I said two variables being compared. `x == 0.1` is the computation part of my comment. – GManNickG Aug 23 '16 at 19:06
  • What if you said `x == (double)0.1`? They could have at least added a double suffix to say "yes, I really meant double!". – Johannes Schaub - litb Aug 23 '16 at 19:09
  • @JohannesSchaub-litb There can be two equally good representations for 3.3, and the implementation is under no obligation to be consistent. (For example, one if you load from memory and one if you load from an internal floating point unit.) – David Schwartz Aug 23 '16 at 19:15
  • @BenVoigt oops, it was not my intention to undo the dupe flag with my edit. Didn't know that after editing it won't be marked as a dupe anymore... – 463035818_is_not_an_ai Aug 23 '16 at 19:36
  • The specialness of the zero is in the fact that it can be represented without overflowing the available mantissa space in both the decimal and binary representations. So yes, zero is special, but so are 1 and 2, but not 0.1 or 0.2. Convert 0.1 or 0.2 to binary, and you've got an infinitely repeating binary number that the computer has to chop off somewhere, resulting in a different number when you look at it a second time and convert that truncated binary number back to decimal. – Eric Leschinski Aug 23 '16 at 19:48
  • The comparison is sound. If you put 0.0 into a value, you'll get it back out. But what you're doing in the code above is using sentinel values to convey boolean information. This is a code smell. – Ryan Bemrose Aug 23 '16 at 20:00

5 Answers


Yes, in this example it is perfectly fine to check for == 0.0. This is not because 0.0 is special in any way, but because you only assign a value and compare it afterwards. You could also set it to 3.3 and compare for == 3.3; this would be fine too. You're storing a bit pattern and comparing against that exact same bit pattern, as long as the values are not promoted to another type for the comparison.

However, calculation results that would mathematically equal zero would not always equal 0.0.
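
A minimal sketch of that distinction, assuming IEEE 754 doubles with the default round-to-nearest conversion (the second result is typical, not guaranteed by the language):

#include <iostream>

int main() {
    double a = 3.3;                    // literal converted to the nearest double once
    double b = 3.3;                    // same literal, same conversion, same bit pattern
    std::cout << (a == b) << '\n';     // 1: comparing a stored bit pattern with itself

    double c = 0.1 + 0.2 - 0.3;        // mathematically zero, but every operation rounds
    std::cout << (c == 0.0) << '\n';   // typically 0: the rounding error survives
}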


This Q/A has evolved to also include cases where different parts of the program are compiled by different compilers. The question does not mention this; my answer applies only when the same compiler is used for all relevant parts.

C++ 11 Standard,
§5.10 Equality operators

6 If both operands are of arithmetic or enumeration type, the usual arithmetic conversions are performed on both operands; each of the operators shall yield true if the specified relationship is true and false if it is false.

The relationship is not defined further, so we have to use the common meaning of "equal".

§2.13.4 Floating literals

1 [...] If the scaled value is in the range of representable values for its type, the result is the scaled value if representable, else the larger or smaller representable value nearest the scaled value, chosen in an implementation-defined manner. [...]

When the value of a literal is not exactly representable, the compiler has to choose between the two nearest representable values. If the same value is chosen for the same literal consistently, you are safe to compare values such as 3.3, because == means "equal".

alain
  • What guarantee do you have that there is only one bit pattern that represents `0.0`? What if there are two equally good bit patterns that represent `0.0`, say one you get if you load from memory and another that you get by directly loading a zero into an internal floating point unit? – David Schwartz Aug 23 '16 at 19:16
  • You're wrong, 0.0 is special in a very unique way. Which is why you can do something with it that you otherwise can't do with floats. – Eric Leschinski Aug 23 '16 at 19:17
  • @EricLeschinski Do you have a reference for that claim? – David Schwartz Aug 23 '16 at 19:17
  • @DavidSchwartz I would have to look it up, but I would be very surprised if it was allowed to compare unequal. (Different patterns, yes) – alain Aug 23 '16 at 19:18
  • It's not really a question of whether 0.0 can be represented with only one bit pattern, but rather what the relevant floating point standard says about comparing floating point numbers. It's extremely rare to find a machine these days that doesn't follow IEEE-754 or some close derivative thereof, and that guarantees that comparing +0.0 with -0.0 will give "equal". Of course if you do something like `y = foo(x); z = 0.0; if (memcmp(&y, &z, sizeof(z)) == 0) { it_is_zero(); }`, then you have to make sure bit patterns match. – Mats Petersson Aug 23 '16 at 19:21
  • 0.0 is special in that it doesn't have a repeating mantissa in either decimal or binary. There are other special float values that share this property of terminating the mantissa before 17 units of precision in both decimal and the converted binary. The danger of float comparisons is in the conversion from decimal to binary, not in the double-equals part. Reference: https://www.youtube.com/watch?v=PZRI1IfStY0 The specialness of zero is in whether or not the mantissa terminates before the decimal-to-binary converter chops off the rest. On reading your answer a second time: you're not wrong. – Eric Leschinski Aug 23 '16 at 19:22
  • @EricLeschinski OK, zero is very special mathematically, of course. What I meant was that a comparison to zero is not different from a comparison to any other value. IMO, the whole "representable exactly" issues are not relevant here. – alain Aug 23 '16 at 19:27
  • There are other special float values, like for example the number 5. When you convert 5 to binary, chop off the overflowing mantissa, and convert that back to decimal, and chop off the extra mantissa, you get the exact same value 5 out as you put in. Values like 0.1 and 0.2 are not special in this way. – Eric Leschinski Aug 23 '16 at 19:32
  • "However, `3.1/3.1 - 1.0` would not equal `0.0`." -- Huh? Maybe it's not guaranteed to equal `0.0`, but it's certainly allowed to be. –  Aug 23 '16 at 20:00
  • @hvd Yes I just reworded that because it actually was on my compiler. Thanks :-) – alain Aug 23 '16 at 20:04

Yes, if you return 0.0 you can compare it to 0.0; 0 is representable exactly as a floating-point value. If you return 3.3 you have to be much more careful, since 3.3 is not exactly representable, so a conversion from double to float, for example, will produce a different value.
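
A small sketch of that double-to-float point, assuming IEEE 754 binary formats:

#include <iostream>

int main() {
    double dz = 0.0;
    float fz = dz;                     // 0.0 is exactly representable as a float too
    std::cout << (dz == fz) << '\n';   // 1

    double d = 3.3;
    float f = d;                       // rounds to the nearest float, dropping low bits
    std::cout << (d == f) << '\n';     // 0: the nearest float to 3.3 differs from the nearest double
}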

Pete Becker
  • The issue is not whether zero is representable exactly but whether it is representable *uniquely*. – David Schwartz Aug 23 '16 at 19:18
  • @DavidSchwartz I don't think that's the issue. But the issue is whether the comparison of two zeros yields true. This may be the case even if both zeros are represented differently. – Johannes Schaub - litb Aug 23 '16 at 19:19
  • @DavidSchwartz - so your concern is that in `return 0.0` and `if (whatever == 0.0)` the compiler might use two different representations of `0.0` **and** that the comparison would fail because of that? That's truly paranoid. Which, of course, doesn't mean they **aren't** out to get you. – Pete Becker Aug 23 '16 at 19:39
  • It is not helpful at all that *zero* is exactly representable if the value you are comparing it to is the *result of a calculation*. So while safe in the specific example shown, it is not generally safe. – Clifford Aug 23 '16 at 19:58
  • @Clifford - that's the point of the question. It **specifically** asks if comparing to `0.0` **in this context** is okay. So please don't bother with comments that in other contexts it might not be; that's already acknowledged in the question itself, and it's been done to death in the comments on the question. – Pete Becker Aug 23 '16 at 20:22
  • @PeteBecker I agree that the issue is not whether it's representable uniquely but whether it can have two different representations that do not compare equal. And I have learned from decades of painful experience that assuming something cannot fail because you cannot think of a way it can fail is asking for trouble. You're just not as imaginative as you think you are. – David Schwartz Aug 23 '16 at 20:39
  • @DavidSchwartz - so, again, your concern is that for the literal constant `0.0`, used in two different places, the compiler might generate two different representations that won't produce `true` when compared with `==`? Yes, my imagination utterly fails for anything that perverse. – Pete Becker Aug 23 '16 at 20:55
  • @PeteBecker Yes, that's my concern. Either you have a guarantee that it won't happen or you don't. And I've seen floating point implementations do things that many people would consider perverse, such as repeating exactly the same computation and getting values that [don't compare equal](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=323). Even `x/y == x/y` may be false. – David Schwartz Aug 23 '16 at 20:57
  • @DavidSchwartz - `x/y != x/y` has a somewhat sensible explanation. `0.0 != 0.0` does not. – Pete Becker Aug 23 '16 at 21:02
  • @PeteBecker It does *now*. Before it happened, it didn't. That's why everyone found it surprising. Again, either you have a guarantee or you don't. Otherwise, you just can't think of any way it could fail. You're not as imaginative as you think you are. – David Schwartz Aug 23 '16 at 21:08
  • @DavidSchwartz - I'm also worried about cosmic rays hitting my processor in the middle of a floating-point calculation. You're not as imaginative as you think you are. – Pete Becker Aug 23 '16 at 21:14
  • @PeteBecker You should be. But I'm not sure how that's relevant to a discussion specifically about what floating point behavior is guaranteed. – David Schwartz Aug 23 '16 at 21:18
  • @DavidSchwartz - it demonstrates that **no** floating-point behavior is guaranteed, nor can it possibly be. Since you have no absolutes, you (gasp!) have to make assumptions about reasonableness. – Pete Becker Aug 23 '16 at 21:26
  • @PeteBecker Sorry, I don't agree. You have standards, such as the C++ standard. And it's perfectly reasonable to ask what is and isn't guaranteed by that standard. – David Schwartz Aug 23 '16 at 21:32
  • @DavidSchwartz: I've got to side with Pete Becker here. The C++ Standard defines how integers work (like natural numbers), but not how floating point numbers work. – MSalters Aug 24 '16 at 07:41
  • @MSalters If there is no relevant standard that defines how floating point numbers work, then you should side with me. There's no reason you can rely on the comparison to turn out any particular way. – David Schwartz Aug 24 '16 at 08:50
  • @PeteBecker : Fair point, though perhaps unnecessarily aggressive and over-sensitive. I made the point not as criticism of your answer but as information for the unwary, and as a caution that the answer is valid *only* in the context of the question body. If one considers just the question in the title, the answer is no, it is not OK. – Clifford Aug 24 '16 at 18:42

Correction: 0 as a floating point value is not unique, but IEEE 754 defines the comparison 0.0 == -0.0 to be true (any zero, for that matter).

So with 0.0 this works - for every other number it does not. The literal 3.3 in one compilation unit (e.g. a library) and another (e.g. your application) might differ. The standard only requires the compiler to use the same rounding it would use at runtime - but different compilers / compiler settings might use different rounding.

It will work most of the time (for 0), but is very bad practice.

As long as you are using the same compiler with the same settings (e.g. one compilation unit) it will work, because the literal 0.0 or 0.0f will translate to the same bit pattern every time. The representation of zero is not unique, though. So if foo is defined in a library and you call it from some application, the same comparison might fail.

You can rescue this very case by using std::fpclassify to check whether the returned value represents a zero. For every finite (non-zero) value you will have to use an epsilon-comparison though unless you stay within one compilation unit and perform no operations on the values.
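
A sketch of the std::fpclassify check applied to the foo from the question; the body used here in place of somethingelse is a stand-in, not code from the question:

#include <cmath>
#include <iostream>

double foo(double x) {
    if (x < 0.0) return 0.0;
    return x * 2.0;                    // stand-in for somethingelse(x), assumed never to return 0.0
}

int main() {
    double r = foo(-3.0);
    // FP_ZERO matches both +0.0 and -0.0, so the sign of the zero does not matter
    if (std::fpclassify(r) == FP_ZERO) {
        std::cout << "foo returned a zero" << std::endl;
    }
}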

example
  • Hmm learned something new here =) I always thought the floating point comparison is basically a byte-wise comparison, but apparently I was wrong. – example Aug 23 '16 at 20:31

As written, in both cases you are using identical constants in the same file, fed to the same compiler. The string-to-float conversion the compiler uses should return the same bit pattern, so these should not only compare equal (as in the plus-or-minus-zero sense) but be equal bit for bit.

Were you to have one constant whose bit pattern the compiler generates using the operating system's C library, and then a strtof or something at run time that can possibly use a different C library if the binary is transported to a computer other than the one it was compiled on, you might have a problem.

Certainly if you compute 3.3 for one of the terms at run time and have the other 3.3 computed at compile time, you can and will get failures on the equality comparisons. Some constants obviously are more likely to work than others.
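
A sketch of that compile-time vs. run-time scenario; on most implementations both conversions round to nearest and this prints 1, which is exactly the agreement this answer says you should not take for granted:

#include <cstdlib>
#include <iostream>

int main() {
    double compile_time = 3.3;                       // converted by the compiler
    double run_time = std::strtod("3.3", nullptr);   // converted by the C library at run time
    std::cout << (compile_time == run_time) << '\n';
}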

Of course as written your 3.3 comparison is dead code and the compiler just removes it if optimizations are enabled.

You didn't specify the floating point format, nor the standard (if any) for that format, that you were interested in. Some formats have the +/- zero problem, some don't, for example.

old_timer

It is a common misconception that floating point values are "not exact". In fact each of them is perfectly exact (except, maybe, some special cases such as -0.0 or Inf) and equal to s·2^(e − (p − 1)), where s, e, and p are the significand, exponent, and precision respectively, each of them an integer. E.g. in the IEEE 754-2008 binary32 format (aka float32) p = 24, and 1 is represented as 0x800000·2^(0 − 23). There are two things that are really not exact when you deal with floating point values:

  1. Representation of a real value using a FP one. Obviously, not all real numbers can be represented using a given FP format, so they have to be somehow rounded. There are several rounding modes, but the most commonly used is the "Round to nearest, ties to even". If you always use the same rounding mode, which is almost certainly the case, the same real value is always represented with the same FP one. So you can be sure that if two real values are equal, their FP counterparts are exactly equal too (but not the reverse, obviously).
  2. Operations with FP numbers are (mostly) inexact. So if you have some real-valued function φ(ξ) implemented in the computer as a function of a FP argument f(x), and you want to compare its result with some "true" value y, you need to use some ε in the comparison, because it is very hard (sometimes even impossible) to write a function giving exactly y. The value of ε strongly depends on the nature of the FP operations involved, so in each particular case there may be a different optimal value (see the sketch after this list).
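
A short sketch of both points, assuming IEEE 754 doubles; the epsilon of 1e-9 is an arbitrary illustrative choice, not a recommendation:

#include <cmath>
#include <iostream>

int main() {
    // Point 1: the same real value, rounded with the same rounding mode,
    // always yields the same FP value, so these compare exactly equal.
    double a = 0.1;
    double b = 0.1;
    std::cout << (a == b) << '\n';                       // 1

    // Point 2: operations round, so a result that is mathematically 0.3
    // need not equal the double nearest to 0.3.
    double sum = 0.1 + 0.2;
    std::cout << (sum == 0.3) << '\n';                   // 0
    std::cout << (std::fabs(sum - 0.3) < 1e-9) << '\n';  // 1: epsilon comparison
}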

For more details see D. Goldberg, What Every Computer Scientist Should Know About Floating-Point Arithmetic, and J.-M. Muller et al., Handbook of Floating-Point Arithmetic. You can find both texts on the Internet.

aparpara