
I have this C# code:

        int twelve = 12;
        int five = 5;
        float y = (float)twelve / 5;


        var e = (float)((float)twelve / (float)five) == y; // true
        var f = ((float)twelve / (float)five) == y; // false
        var g = ((float)twelve / (float)five) == 2.4; // true
        var h = ((float)twelve / (float)five) == 2.4F; // false
        var i = ((float)12 / (float)5) == 2.4F; // true

and I cannot understand why f and h are false.

Can somebody explain what exactly is happening here?

The same Java code seems more predictable:

    int twelve = 12;
    int five = 5;

    float y = (float)twelve / 5;

    boolean e = (float)((float)twelve / (float)five) == y; // true
    boolean f = ((float)twelve / (float)five) == y; // true
    boolean g = ((float)twelve / (float)five) == 2.4; // false
    boolean h = ((float)twelve / (float)five) == 2.4F; // true
    boolean i = ((float)12 / (float)5) == 2.4F; // true
Liam Kernighan
  • If `((float)twelve / (float)five)` is first assigned to a separate variable, the outcomes for `e`, `f`, `g`, `h`, `i` become more predictable. – Dandré Feb 26 '19 at 20:29
  • @elgonzo https://dotnetfiddle.net/hU3Blc – Liam Kernighan Feb 26 '19 at 20:30
  • @elgonzo C# is allowed to do floating point math at a higher precision than the declared type. In this case, it does the operation as a double with its higher precision, then compares that to a float (converted from the lower precision) in the cases where the comparison fails. – Jonathon Chase Feb 26 '19 at 20:31
  • @elgonzo: See the linked duplicate, and in particular Eric Lippert's answer: "There is no guarantee that doing that calculation *twice in the same program* will produce the same results." – Daniel Pryden Feb 26 '19 at 20:31
  • @elgonzo interesting. My 2 console applications with .Net Framework 4.7.1 and .Net Core 2.2 behave themselves the same as the fiddle. – Liam Kernighan Feb 26 '19 at 20:34
  • @elgonzo I get false/false for f/h in debug mode and true/false in release mode. There isn't any guarantee with floating points, especially when the precision might change in the middle of an expression and then be cast down at the end. – Jonathon Chase Feb 26 '19 at 20:34
  • Quoting again from the accepted answer on the linked duplicate: "The C# compiler, the jitter and the runtime all have broad latitude to give you *more accurate results* than are required by the specification, at any time, at a whim -- they are not required to choose to do so consistently and in fact they do not." – Daniel Pryden Feb 26 '19 at 20:35
  • By the way, the target of my test project was .NET 4.6.1, running in Debug mode, in x64 mode. I'll give it a shot in AnyCPU/x86 mode... –  Feb 26 '19 at 20:37
  • `that is utter and complete nonsense...` @elgonzo you may find http://blog.paranoidcoding.com/2014/12/22/redundant-cast.html and https://stackoverflow.com/questions/47189863/modulus-gives-wrong-outcome and https://www.ecma-international.org/publications/files/ECMA-ST/ECMA-335.pdf (12.1.3) of interest. – mjwills Feb 26 '19 at 20:38
  • Oi, running my same project in AnyCPU/x86 will result in `f` and `h` being false. This is quite surprising to me, I admit. Whoa, I really didn't expect floating point math to be that unreliable. –  Feb 26 '19 at 20:41
  • @elgonzo: To be clear, it's not *floating point math* that is at fault here, but rather the C# compiler and .NET JITter. The floating point hardware performs exactly the same all the time, but C# may choose to leave intermediate values in FPU registers whenever it feels like it, and on x86, an FPU register has *more precision* than a `double`. In Java, you can get completely reproducible results (at the cost of some performance) by using the `strictfp` keyword; C# has no comparable feature. – Daniel Pryden Feb 26 '19 at 20:44
  • @DanielPryden, yes, it was understood that it is not a HW thing but an effect caused by the compiler, JIT and/or CLR... ;-) –  Feb 26 '19 at 20:46
  • See https://stackoverflow.com/questions/753948/why-is-floating-point-arithmetic-in-c-sharp-imprecise – gethomast Feb 26 '19 at 21:07
  • @gethomast, oh, if it were only about the precision of floating point numbers. Please read the comment thread here, it is both enlightening and embarrassing -- the latter if you read my comments ;-P –  Feb 26 '19 at 21:14

0 Answers
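The Java results in the question can be reproduced with a minimal, runnable sketch (the class name and `main` wrapper are added here for illustration). In Java, each `float` operation is rounded to `float` precision before the comparison, so the inline expression agrees with the stored variable, while comparing against the `double` literal `2.4` fails because `(double) 2.4F` is not the same value as `2.4`:

```java
public class FloatComparison {
    public static void main(String[] args) {
        int twelve = 12;
        int five = 5;

        float y = (float) twelve / (float) five;

        // Each float operation rounds to float precision, so the
        // inline expression equals the stored variable.
        boolean f = ((float) twelve / (float) five) == y;

        // The float result widens to double for this comparison, and
        // (double) 2.4F differs from the double literal 2.4.
        boolean g = ((float) twelve / (float) five) == 2.4;

        System.out.println("f = " + f); // true
        System.out.println("g = " + g); // false
    }
}
```

As Dandré's comment suggests, assigning the quotient to a `float` variable before comparing forces the same rounding in the C# version as well, which is why `e` is true while `f` is not.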