
Look at this situation:

var number1 = Math.Floor(1.9999999999999998d); // the result is 1
var number2 = Math.Floor(1.9999999999999999d); // the result is 2

In both cases, the result should be 1. I know it's a very unlikely scenario, but it can occur. The same happens with the Math.Truncate method and the (int) cast.
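For example, with the same literals:

var t1 = Math.Truncate(1.9999999999999998d); // the result is 1
var t2 = Math.Truncate(1.9999999999999999d); // the result is 2
var c1 = (int)1.9999999999999998d;           // the result is 1
var c2 = (int)1.9999999999999999d;           // the result is 2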

Why does it happen?

2 Answers


There is no exact double representation for a lot of numbers.

The double nearest to 1.9999999999999999 is exactly 2, so the compiler rounds the literal up to 2.

Try printing the value before calling Math.Floor!

However, the double nearest to 1.9999999999999998 is still 1.something (just below 2), so Floor gives 1.

Again, printing the numbers before calling Floor is enough to see that they are no longer the values entered in the code.

EDIT: To print the numbers with full precision:

        double a1 = 1.9999999999999998;
        Console.WriteLine(a1.ToString("G17"));
        // output : 1.9999999999999998

        double a2 = 1.9999999999999999;
        Console.WriteLine(a2.ToString("G17"));
        // output : 2

Since a double is not always accurate to 17 significant digits (counting the digits before the decimal point too), the default ToString() rounds the output to 15 significant digits on .NET Framework (.NET Core 3.0 and later print the shortest round-trippable string instead). So in this case the default output also shows 2 for a1, but that rounding happens only in the formatting at runtime; unlike a2, the stored value of a1 was not rounded to 2 at compile time.

Pac0
  • Printing does not help, as it won't give you the exact representation, e.g. for `Console.WriteLine(1.999999999999999)`, I get `2`, yet flooring it yields `1`. `ToString("r")` might help, though, and shows that the number's internal representation is actually `1.9999999999999989`. – Joey Sep 18 '17 at 13:00
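To illustrate the comment above, here is a small sketch using Joey's value (the outputs are as reported on .NET Framework; newer .NET versions print the round-trip value by default):

double d = 1.999999999999999;
Console.WriteLine(d);               // 2 - default formatting rounds for display
Console.WriteLine(d.ToString("R")); // 1.9999999999999989 - the actual stored value
Console.WriteLine(Math.Floor(d));   // 1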

If you put the literal values into variables first, you can see why:

var a1 = 1.9999999999999998d; // a1 = 1.9999999999999998d
var number1 = Math.Floor(a1);
Console.WriteLine(number1); // 1

var a2 = 1.9999999999999999d; // a2 = 2
var number2 = Math.Floor(a2);
Console.WriteLine(number2); // 2

As for why: it has to do with the precision of double and the compiler's decision about which representable value to use for a given literal.
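A quick way to confirm this is to compare the literals directly; the comparisons below run on the compiled double values (see also Jeroen Mostert's comment):

Console.WriteLine(1.9999999999999999d == 2.0d); // True  - the literal is already exactly 2
Console.WriteLine(1.9999999999999998d == 2.0d); // False - this one stays just below 2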

Sinatr
  • At compile time, `1.9999999999999999d` is converted to the `double` with the exact value `2` (`1.9999999999999999d == 2.0d ` is `true`). After that, of course, the results are as expected. – Jeroen Mostert Sep 18 '17 at 12:54
  • @JeroenMostert, exactly, but why and how the compiler does that - no idea, tbh. Why didn't it choose the previous value? – Sinatr Sep 18 '17 at 12:55
  • What "previous value"? `2` is the nearest representible value, rounding upwards. Did you expect it to round downwards to `1.9999999999999998` instead? That would make little sense. – Jeroen Mostert Sep 18 '17 at 12:58
  • @JeroenMostert, *"rounding upwards"* - well, why upwards? Probably there are *reasons* behind that decision, but I am not able to answer on this question. – Sinatr Sep 18 '17 at 12:59
  • IEEE 754 and/or the C# language specification, something something. I can't find the relevant part of the specs that quickly, but the rules are out there... somewhere. (Not that the compiler necessarily follows them; in particular, the rules for `double` evaluation at compile time and at runtime are notoriously different, thanks to the extended 80-bit precision of the x87.) – Jeroen Mostert Sep 18 '17 at 13:05
  • It rounds using IEEE round-to-nearest; when picking between two equally near candidate values, it takes the one with a 0 in the least significant bit (the other candidate, one higher or lower in the mantissa, always has a 1 there). It's analogous to round-to-even ("bankers' rounding") for decimal numbers. – Jon Hanna Sep 18 '17 at 13:33
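A concrete way to see the rounding described in the comments is to inspect the raw IEEE 754 bits (a minimal sketch; the hex values in the comments are the bit patterns of the two doubles):

Console.WriteLine(BitConverter.DoubleToInt64Bits(1.9999999999999998d).ToString("X16"));
// 3FFFFFFFFFFFFFFF - the largest double below 2 (about 1.99999999999999978)
Console.WriteLine(BitConverter.DoubleToInt64Bits(1.9999999999999999d).ToString("X16"));
// 4000000000000000 - exactly 2.0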