
This surprised me: the same arithmetic gives different results depending on how it's executed:

```csharp
> 0.1f + 0.2f == 0.3f
False

> var z = 0.3f;
> 0.1f + 0.2f == z
True

> 0.1f + 0.2f == (dynamic)0.3f
True
```

(Tested in LINQPad.)

What's going on?


Edit: I understand why floating point arithmetic is imprecise, but not why it would be inconsistent.

The venerable C reliably confirms that 0.1 + 0.2 == 0.3 holds for single-precision floats, but not for doubles.
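For comparison, here is the same check in C# rather than C (my own sketch, not part of the original post); the comments assume the sums really are narrowed to their declared types:

```csharp
using System;

class FloatVsDouble
{
    static void Main()
    {
        // float: 0.1f + 0.2f and 0.3f round to the same 32-bit value,
        // so the comparison holds once both sides are genuinely float.
        float fsum = 0.1f + 0.2f;
        Console.WriteLine(fsum == 0.3f);   // True

        // double: 0.1 + 0.2 is 0.30000000000000004, which is not the
        // double nearest to 0.3, so the comparison fails.
        double dsum = 0.1 + 0.2;
        Console.WriteLine(dsum == 0.3);    // False
    }
}
```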

Colonel Panic
  • -1: There are sooo many duplicates of this. Just have a look at the side bar and the links under the heading **Related** – Daniel Hilgarth Nov 08 '12 at 14:20
  • @DanielHilgarth seems to me this isn't the usual question about floating point, but rather about a difference between the calculations performed by the compiler and the calculations performed by the runtime – AakashM Nov 08 '12 at 14:36
  • @DanielHilgarth: You cannot simply write off everything with some floating-point involvement as floating-point imprecision. Some floating-point questions are opportunities to explain how floating point operates. Take note that this question shows that both `0.1f+0.2f` and `z` have type `Single`. So there is a question of why the assignment of `0.1f+0.2f` appears not to preserve the value, even though the type is not changed. This is a question of C# semantics. Can you show an exact duplicate? If you can identify an exact duplicate, then propose closing this problem as an exact duplicate. – Eric Postpischil Nov 08 '12 at 14:38
  • This question is answered by [this Eric Lippert answer](http://stackoverflow.com/a/2494724/71059), to a similar-but-not-quite-duplicate question. – AakashM Nov 08 '12 at 14:45

1 Answer


I strongly suspect you may find that you get different results running this code with and without the debugger, and in release configuration vs in debug configuration.

In the first version, you're comparing two expressions. The C# language allows those expressions to be evaluated using higher-precision arithmetic than the source types imply.

In the second version, you're assigning the addition result to a local variable. In some scenarios, that will force the result to be truncated down to 32 bits, leading to a different result. In other scenarios, the CLR or C# compiler will realize that it can optimize away the local variable.

From section 4.1.6 of the C# 4 spec:

> Floating point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating point type with greater range and precision than the double type, and implicitly perform all floating point operations with the higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating point operations with less precision. Rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating point operations. Other than delivering more precise results, this rarely has any measurable effects.
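To make that concrete, here is a minimal sketch (my illustration, not Jon's code; the exact output can vary by compiler, JIT, and build configuration) showing the shapes discussed above:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // Constant expression: may be evaluated (or folded) at higher
        // precision than float, so this can print False.
        Console.WriteLine(0.1f + 0.2f == 0.3f);

        // Storing the sum in a float local can force it to be narrowed
        // to a genuine 32-bit value first, so this can print True.
        float z = 0.3f;
        float sum = 0.1f + 0.2f;
        Console.WriteLine(sum == z);

        // An explicit cast to float is the conventional way to force
        // the narrowing without introducing a separate local.
        Console.WriteLine((float)(0.1f + 0.2f) == 0.3f);
    }
}
```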

EDIT: I haven't tried compiling this, but in the comments, Chris says the first form isn't being evaluated at execution time at all. The above can still apply (I've tweaked my wording slightly); it just shifts the evaluation of a constant expression from execution time to compile time. So long as it behaves the same way as a valid evaluation, that seems okay to me: the compiler's own constant expression evaluation can use higher-precision arithmetic too.
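One way to see the compile-time/execution-time split (again my sketch, with hypothetical variable names): make an operand non-constant so the compiler cannot fold the comparison:

```csharp
using System;

class FoldingDemo
{
    static void Main()
    {
        // Every operand is a constant, so the C# compiler evaluates
        // the whole comparison itself and just emits true or false.
        Console.WriteLine(0.1f + 0.2f == 0.3f);

        // Locals are not constant expressions, so no folding happens;
        // the addition and comparison now run at execution time under
        // the JIT's precision rules, which may differ.
        float a = 0.1f, b = 0.2f;
        Console.WriteLine(a + b == 0.3f);
    }
}
```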

Jon Skeet
  • Interesting answer but I'm not sure it is right. When I look at the IL that LINQPad generates it seems to have optimised it out completely such that `Console.WriteLine((0.1f+0.2f)==0.3f);` becomes `ldc.i4.0` followed by the call to `Console.WriteLine`. This would suggest to me that it is in fact compiler optimisation that is causing this and nothing to do with the instructions sent to the hardware. – Chris Nov 08 '12 at 14:54
  • @Chris That particular expression is based entirely on compile-time constants, so it can be (and apparently is being) evaluated at compile time, rather than at runtime. – Servy Nov 08 '12 at 15:18
  • @Servy: Indeed. That makes perfect sense. What doesn't make sense is why it evaluates to false... – Chris Nov 08 '12 at 15:26
  • @Chris: Well it can be the same reasoning, just applied earlier on - in compilation rather than at execution time. Will edit my answer. – Jon Skeet Nov 08 '12 at 15:37
  • @JonSkeet: Yeah, I came to that conclusion later but clarification in the answer can't hurt. :) I think I just originally assumed that compilers were infallible. :) – Chris Nov 08 '12 at 15:38