
I have a double and an int variable. Their product is a whole number. I wanted to check that, so I followed this method and was really puzzled ...

When I do this, everything acts like it's supposed to:

#include <cmath>

double a = 0.1;
int b = 10;
double product = a * (double) b;

if(std::floor(product) == product){
    // this case is true
} else {
    // this case is false
}

But, strangely, this doesn't work:

#include <cmath>

double a = 0.1;
int b = 10;

if(std::floor(a * (double) b) == (a * (double) b)){
    // this case is false
} else {
    // this case is true
}

Can anyone explain this to me?


EDIT:

To clarify that this is not just a matter of fixed-precision floating-point calculation:

#include <cmath>

double a = 0.1;
int b = 10;

if((a * (double) b) == (a * (double) b)){
    // this case is true
} else {
    // this case is false
}

So the product of a and b is (although not precisely equal to 1.0) of course equal to itself, but calling std::floor() messes things up.

  • `0.1` is not represented precisely by `double`. See: http://stackoverflow.com/questions/1089018/why-cant-decimal-numbers-be-represented-exactly-in-binary – alcedine May 08 '15 at 09:00
  • @tropallaxis: It seems to work on my machine (Ubuntu 14.04 x86_64, clang 3.4) – Levi May 08 '15 at 09:02
  • @Levi Since the values are known at compile time, compilers may optimize the imprecision away. This is not reliable. – Wintermute May 08 '15 at 09:05
  • works on my windows 8.1 machine. – Christian Abella May 08 '15 at 09:09
  • [this](http://www.exploringbinary.com/why-0-point-1-does-not-exist-in-floating-point/) might help... walks you through why 0.1000000000000000055511151231257827021181583404541015625 is the closest `double` to 0.1 - multiplying that by 10 leaves a little noise in the least significant bit or two. Of course, the compiler's allowed to do the calculation in floating-point registers too (which are 80 bit on x86 CPUs), and if it does so, then multiplies by 10, then rounds to double, it may be exactly 1.0, so it's impractical to predict. – Tony Delroy May 08 '15 at 09:11
  • Thanks alcedine and Tony D for the links, they helped understanding one of the underlying issues. There seems to be another one though (see Petr's answer). – trophallaxis May 11 '15 at 09:08

3 Answers


This is the nature of fixed-precision math.

In fixed-precision binary, .1 has no exact representation. In fixed-precision decimal, 1/3 has no exact representation.

So it's precisely the same reason 3 * (1/3) won't equal 1 if you use fixed-precision decimal. There is no fixed-precision decimal number that equals 1 when multiplied by 3.

David Schwartz

The value 0.1 cannot be represented exactly by any (binary based) floating point representation. Try to express the fraction 1/10 in base 2 to see why - the result is an infinitely recurring fraction similar to what occurs when computing 1/3 in decimal.

The result is that the actual value stored is an approximation equal to (say) 0.1 + delta where delta is a small value which is either positive or negative. Even if we assume that no further rounding error is introduced when computing 10*0.1, the result is not quite equal to 1. Further rounding errors introduced when doing the multiplication may cancel some of those effects out - so sometimes such examples will seem to work, sometimes they won't, and the results vary between compilers (or, more accurately, the floating point representations supported by those compilers).

Some compilers are smart enough to detect such cases (where the values a and b are known to the compiler, rather than being input at run time), and others do the calculation using a high-precision library (i.e. they don't work internally with floating point), which can create the illusion of avoiding rounding error. However, that can't be relied on.

Peter

This is due to rounding errors.

First of all, 0.1 cannot be stored in a double exactly, so your product is most probably not exactly 1.

Secondly and, I think, more importantly in your case, there is an even more subtle reason. When you compare the results of some computations directly, instead of storing them in double variables and comparing those (if (cos(x) == cos(y)) instead of a=cos(x); b=cos(y); if (a==b)...), operator== may return false even if x==y. The reason is well explained here: https://isocpp.org/wiki/faq/newbie#floating-point-arith2 :

Said another way, intermediate calculations are often more precise (have more bits) than when those same values get stored into RAM. <...> Suppose your code computes cos(x), then truncates that result and stores it into a temporary variable, say tmp. It might then compute cos(y), and (drum roll please) compare the untruncated result of cos(y) with tmp, that is, with the truncated result of cos(x)

The same effect might take place with multiplication, so your first code will work, but not the second.

Petr
  • Thanks Petr. I accepted this answer because it addresses not only the fixed precision floating point calculation issue, but also the storing/truncating problem. – trophallaxis May 11 '15 at 09:19