
I have this code:

    static int test = 100;
    static int Test
    {
        get
        {
            return (int)(test * 0.01f);
        }
    }

The output is: 0

But this code returns a different result:

    static int test = 100;
    static int Test
    {
        get
        {
            var y = (test * 0.01f);
            return (int)y;
        }
    }

The output is: 1

I also have this code:

    static int test = 100;
    static int Test
    {
        get
        {
            return (int)(100 * 0.01f);
        }
    }

The output is: 1

I looked at the IL output, and I don't understand why C# does this mathematical operation at compile time and why the output is different.

What is the difference between these two pieces of code? Why does the result change when I decide to use a variable?

Cevizli
  • How do you check values? – Hamid Pourjam May 04 '16 at 09:22
  • Possible duplicate of [Is floating point math broken?](http://stackoverflow.com/questions/588004/is-floating-point-math-broken) – Liam May 04 '16 at 09:44
  • I was looking around on SO, but I can't find the exact answer. This answer, however, may clear things up a bit: http://stackoverflow.com/a/15117741/2594485 – Dennis_E May 04 '16 at 09:45
  • My question is about compile-time operations, variables and casting. Why does the compiler do this operation at compile time? Why does the result change if I choose to use a variable? – Cevizli May 04 '16 at 09:46
  • Storing a value in a variable is done at runtime. It could be converted back and forth. And since floating-point arithmetic is inaccurate, apparently it can result in a value that is slightly less than 1. If the compiler sees `100 * 0.01`, it will do the calculation at compile time. Why? Because. – Dennis_E May 04 '16 at 09:52
  • It's not "like" floating point - it is about floating point. It's about whether a calculated value is being stored with 32 bits of precision (as required when the value is *stored* in a `float` variable) vs. more precision, and is very well handled in Eric Lippert's answer, as linked to by Dennis. – Damien_The_Unbeliever May 04 '16 at 09:52
  • Let me take a look. Thanks for the reply. – Cevizli May 04 '16 at 09:54
  • It is permitted that the results be different *at any time for any reason*. Is this terrible? Yes. But which way a thing rounds depends on tiny, tiny differences in the value, and you are generating tiny differences. – Eric Lippert May 04 '16 at 13:24
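
To make the rounding described in the comments above concrete, here is a small sketch (my own illustration, not from the thread). It shows that `0.01f` is not exactly 0.01, and that rounding the product to `float` precision lands exactly on 1.0f:

    using System;

    class RoundingDemo
    {
        static void Main()
        {
            // 0.01f is not exactly 0.01; widening to double reveals the stored value.
            Console.WriteLine((double)0.01f);   // ~0.00999999977648258

            // A runtime variable defeats constant folding.
            float hundredth = 0.01f;

            // 100 * 0.00999999977648258... is just below 1, but the nearest
            // float is exactly 1.0f, so storing into a float rounds up to 1.
            float y = 100 * hundredth;
            Console.WriteLine(y == 1.0f);       // True (on a JIT using single precision)
            Console.WriteLine((int)y);          // 1
        }
    }

On a JIT that keeps the un-stored intermediate at higher precision (for example 80-bit x87 registers), the value stays just below 1 and the cast truncates it to 0, which is the 0-versus-1 difference the question observes.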

1 Answer


Because the compiler tricks you. The compiler is smart enough to do some of the math up front so it doesn't need to do that at run time, which would be pointless. The expression `100 * 0.01f` is calculated by the compiler, without the loss of float precision that trips you up at run time.
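
Roughly, this is the difference you would see in the IL for the two getters (a sketch from memory, not the questioner's actual screenshot; exact opcodes vary with compiler version and debug/release settings, and the class name `Program` is assumed):

    // Field version - the math happens at run time:
    ldsfld     int32 Program::test
    conv.r4    // int -> float
    ldc.r4     0.01
    mul
    conv.i4    // the cast to int truncates toward zero
    ret

    // Literal version - the compiler already folded (int)(100 * 0.01f):
    ldc.i4.1   // the constant 1; no math at run time
    ret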

To prove this, try making the static `test` field a `const`. You will see that the compiler is then able to do the math for you at compile time too. It has nothing to do with writing to a variable first, as in your sample; it is about run time vs. compile time.
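
For example, a minimal sketch of the `const` variant suggested above (my own illustration; the wrapping class and `Main` are assumptions, not from the original post):

    class Program
    {
        const int test = 100; // const instead of a static field

        static int Test
        {
            get
            {
                // test * 0.01f is now a constant expression, so the compiler
                // folds it (to exactly 1.0f, then 1) at compile time.
                return (int)(test * 0.01f);
            }
        }

        static void Main()
        {
            System.Console.WriteLine(Test); // prints 1, like the literal version
        }
    }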

Patrick Hofman
  • OK, I understand, but that is not my question. I get it: the compiler resolves constant mathematical operations at compile time, and that really needs to happen. But if I use variables for this operation, the result changes? I think this is wrong. Why does the result change? Variables are just placeholders, am I right? – Cevizli May 04 '16 at 12:01
  • No. There is a difference between compile time resolution of constant values and runtime representation. – Patrick Hofman May 04 '16 at 13:02