
The code below, when run in a Release configuration on .NET 4.5, produces the following output...

Without virtual: 0.333333333333333
With virtual:    0.333333343267441

(When running in debug both versions give 0.333333343267441 as the result.)
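For reference, the second value appears to be the result of doing the division in single precision and then widening to double. A quick sketch to check (the printed digits assume .NET Framework's default 15-digit double formatting; newer runtimes print more digits):

```csharp
using System;

class WhereTheValuesComeFrom
{
    static void Main()
    {
        // Nearest float to 1/3, then widened to double
        // (prints 0.333333343267441 under .NET Framework's default formatting)
        Console.WriteLine((double)(1.0f / 3.0f));

        // Division performed directly in double precision
        // (prints 0.333333333333333)
        Console.WriteLine(1.0 / 3.0);
    }
}
```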

I can see that dividing a float by a short and returning the result as a double is likely to produce garbage after a certain point.

My question is: Can anyone explain why the results are different when the property providing the short in the denominator is virtual or non-virtual?

using System;

public class ProvideThreeVirtually
{
    public virtual short Three { get { return 3; } }
}

public class GetThreeVirtually
{
    public double OneThird(ProvideThreeVirtually provideThree)
    {
        return 1.0f / provideThree.Three;
    }
}

public class ProvideThree
{
    public short Three { get { return 3; } }
}

public class GetThree
{
    public double OneThird(ProvideThree provideThree)
    {
        return 1.0f / provideThree.Three;
    }
}

class Program
{
    static void Main()
    {
        var getThree = new GetThree();
        var result = getThree.OneThird(new ProvideThree());

        Console.WriteLine("Without virtual: {0}", result);

        var getThreeVirtually = new GetThreeVirtually();
        var resultV = getThreeVirtually.OneThird(new ProvideThreeVirtually());

        Console.WriteLine("With virtual:    {0}", resultV);
    }
}
IanR

  • Related: [Float/double precision in debug/release modes](http://stackoverflow.com/questions/90751/float-double-precision-in-debug-release-modes), [Will the scope of floating point variables affect their values?](http://stackoverflow.com/questions/24321265/will-the-scope-of-floating-point-variables-affect-their-values). – CodeCaster Aug 19 '14 at 10:28
  • Out of interest do you get the same behaviour with a decimal? – Liath Aug 19 '14 at 10:28
  • Just checked that in Debug and Release and it always returns `0.333333333333333` for both of them – Mark Aug 19 '14 at 10:29
  • The only thing I could imagine is that with the non-`virtual` property, the compiler can inline the whole of `OneThird` to a constant, whilst with the `virtual` form the compiler needs to allow for an override of `Three`, so it can't inline the whole method. – Aron Aug 19 '14 at 10:39

3 Answers


I believe James's conjecture is correct and this is a JIT optimization: the JIT performs less precise division when it can, which accounts for the difference. The following code sample reproduces your results when compiled in Release mode with an x64 target and executed directly from a command prompt. I'm using Visual Studio 2008 with .NET 3.5.

    public static void Main()
    {
        double result = 1.0f / new ProvideThree().Three;
        double resultVirtual = 1.0f / new ProvideVirtualThree().Three;
        double resultConstant = 1.0f / 3;
        short parsedThree = short.Parse("3");
        double resultParsed = 1.0f / parsedThree;

        Console.WriteLine("Result of 1.0f / ProvideThree = {0}", result);
        Console.WriteLine("Result of 1.0f / ProvideVirtualThree = {0}", resultVirtual);
        Console.WriteLine("Result of 1.0f / 3 = {0}", resultConstant);
        Console.WriteLine("Result of 1.0f / parsedThree = {0}", resultParsed);

        Console.ReadLine();
    }

    public class ProvideThree
    {
        public short Three
        {
            get { return 3; }
        }
    }

    public class ProvideVirtualThree
    {
        public virtual short Three
        {
            get { return 3; }
        }
    }

The results are as follows:

Result of 1.0f / ProvideThree = 0.333333333333333
Result of 1.0f / ProvideVirtualThree = 0.333333343267441
Result of 1.0f / 3 = 0.333333333333333
Result of 1.0f / parsedThree = 0.333333343267441

The IL is fairly straightforward:

.locals init ([0] float64 result,
           [1] float64 resultVirtual,
           [2] float64 resultConstant,
           [3] int16 parsedThree,
           [4] float64 resultParsed)
IL_0000:  ldc.r4     1.    // push 1 onto stack as 32-bit float    
IL_0005:  newobj     instance void Romeo.Program/ProvideThree::.ctor()
IL_000a:  call       instance int16 Romeo.Program/ProvideThree::get_Three()
IL_000f:  conv.r4          // convert result of method to 32-bit float 
IL_0010:  div          
IL_0011:  conv.r8          // convert result of division to 64-bit float (double)
IL_0012:  stloc.0
IL_0013:  ldc.r4     1.    // push 1 onto stack as 32-bit float
IL_0018:  newobj     instance void Romeo.Program/ProvideVirtualThree::.ctor()
IL_001d:  callvirt   instance int16 Romeo.Program/ProvideVirtualThree::get_Three()
IL_0022:  conv.r4          // convert result of method to 32-bit float 
IL_0023:  div
IL_0024:  conv.r8          // convert result of division to 64-bit float (double)
IL_0025:  stloc.1
IL_0026:  ldc.r8     0.33333333333333331    // constant folding
IL_002f:  stloc.2
IL_0030:  ldstr      "3"
IL_0035:  call       int16 [mscorlib]System.Int16::Parse(string)
IL_003a:  stloc.3          // store result of parse in parsedThree
IL_003b:  ldc.r4     1.
IL_0040:  ldloc.3      
IL_0041:  conv.r4          // convert result of parse to 32-bit float
IL_0042:  div
IL_0043:  conv.r8          // convert result of division to 64-bit float (double)
IL_0044:  stloc.s    resultParsed

The first two cases are nearly identical: the IL first pushes 1 onto the stack as a 32-bit float, obtains 3 from one of the two methods, converts the 3 to a 32-bit float, performs the division, and then converts the result to a 64-bit float (double). The fact that (nearly) identical IL, where the only difference is the `callvirt` vs. the `call` instruction, produces different results points squarely at the JIT.

In the third case the compiler has already folded the division into a constant, so the `div` IL instruction is never executed.

In the final case I use a `Parse` operation to minimize the chance of the statement being optimized (I'd say "prevent", but I don't know enough about what the compiler is doing). The result for this case matches the result from the virtual call. It appears that the JIT either optimizes away the non-virtual method call or performs the division in a different manner.

Interestingly, if you eliminate the parsedThree variable and simply write `resultParsed = 1.0f / short.Parse("3")` for the fourth case, the result is the same as in the first case. Again, it appears the JIT executes the division differently when it can.
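Along the same lines, another way to keep the call from being folded away, which I haven't verified on the same setup, is to block inlining explicitly with `MethodImplOptions.NoInlining`; on x64 I'd expect it to reproduce the virtual-call result:

```csharp
using System;
using System.Runtime.CompilerServices;

class NoInliningDemo
{
    // The JIT must emit a real call here, so the division below has to
    // happen at run time in single precision, as in the virtual-call case
    [MethodImpl(MethodImplOptions.NoInlining)]
    static short GetThree()
    {
        return 3;
    }

    static void Main()
    {
        // Expected to match the virtual-call result on x64
        // (0.333333343267441 under .NET Framework's default formatting)
        double result = 1.0f / GetThree();
        Console.WriteLine(result);
    }
}
```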

Mike Cowan

I've tested your code under .NET 4.5. I always get the same results when running from within Visual Studio 2012:

0.333333333333333 when running in Release/Debug 32-bit
0.333333343267441 when running in Release/Debug 64-bit

I only get your results when running the exe directly from the prompt, without Visual Studio, and only if the code is:

  • run in 64-bit mode (I'm running as Any CPU and the code was compiled without the Prefer 32-bit option checked)
  • in Release

The Optimize code option doesn't make any difference.

The only thing I can think of is that using `virtual` forces a later evaluation to the `double` type, so the runtime performs 1/3 using floats and then promotes the result to `double`, whereas without the virtual property it promotes the operands to `double` before doing the division.

Terenzio Berni

It might be a JITter optimization rather than a compiler optimization. There isn't much here for the compiler to optimize, but the JITter could easily inline the non-virtual version and end up with `(double)1.0f / 3` instead of `(double)(1.0f / 3)`. You can't ever rely on floating-point results being exactly what you expect anyway.
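To make that distinction concrete, here is a small sketch (using `Parse` to keep the compiler from folding the constants, so the evaluation order is visible at run time):

```csharp
using System;

class InlineComparison
{
    static void Main()
    {
        // Parse defeats compile-time constant folding
        float one = float.Parse("1");
        short three = short.Parse("3");

        // Operands promoted to double before dividing:
        // prints 0.333333333333333 under .NET Framework's default formatting
        Console.WriteLine((double)one / three);

        // float division first, then the result widened to double:
        // prints 0.333333343267441 on x64
        Console.WriteLine((double)(one / three));
    }
}
```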

James