I am seeing different results when formatting numeric values using `ToString("F2")`.

```csharp
0.125m.ToString("F2", CultureInfo.InvariantCulture); // 0.13
0.125.ToString("F2", CultureInfo.InvariantCulture);  // 0.12
```

Why are these values rounded differently?

.NET Fiddle version of the code here.

  • [This .NET Fiddle](https://dotnetfiddle.net/DB63GG) prints `0.13` in both cases for .NET Framework. – Uwe Keim Aug 07 '20 at 11:12
  • @UweKeim for 4.7.2, not for [Core 3.1](https://dotnetfiddle.net/IYi3Ui) – Selvin Aug 07 '20 at 11:12
  • Okay, when using .NET Core 3.1, [the .NET Fiddle](https://dotnetfiddle.net/nF53LL) has the same behaviour (`0.13` and `0.12`). – Uwe Keim Aug 07 '20 at 11:13
  • Yes, the result is the same for .NET Framework 4.7.2 as well – Sh.Imran Aug 07 '20 at 11:14
  • Seems like Framework is using MidpointRounding.AwayFromZero and Core is not ... because it's [fine with 0.135](https://dotnetfiddle.net/xAzmRP) – Selvin Aug 07 '20 at 11:15
  • I think the moral is: *always do your own rounding*, to be sure of what will happen. – Peter B Aug 07 '20 at 11:15 (see the sketch after these comments)
  • Indeed, better to [control](https://stackoverflow.com/questions/977796/why-does-math-round2-5-return-2-instead-of-3) rounding. – TaW Aug 07 '20 at 11:16
  • Interesting, calling `Math.Round` manually and specifying 2 decimal places results in both being rounded the same: [.NET Fiddle](https://dotnetfiddle.net/SyJL4p) – MindSwipe Aug 07 '20 at 11:20
  • I'm definitely following @PeterB on this and rolling my own rounding function. I still can't get my head around why `decimal` and `double` would behave differently here though! – Sean Kearon Aug 07 '20 at 11:26
  • I'm pretty sure this is due to how C# stores doubles/decimals in memory, leading to a floating-point rounding error (the same reason why 0.1 + 0.2 doesn't *exactly* equal 0.3 when calculated in binary). Just haven't gotten around to testing it – MindSwipe Aug 07 '20 at 11:34
  • @TimSchmelter - I'm reading that as explaining which rounding choice is made, but not why `double` and `decimal` would round differently. Am I missing something? Edit: haha - I'm out of sync. So, okay, it's the difference in precision that makes the difference then. – Sean Kearon Aug 07 '20 at 11:34
  • I guess you have to look at the binary representation of `1.25`, it's probably *special*. Trying with `2.25` produces [same results](https://dotnetfiddle.net/tOUhQe). – Sinatr Aug 07 '20 at 11:36
  • @TimSchmelter - looks like that's right - with `0.135` both round to `0.14`. Ahhh, computers and numbers... :( – Sean Kearon Aug 07 '20 at 11:39
  • @TimSchmelter - if you'd like to add an answer sometime, I'll tick the box! – Sean Kearon Aug 07 '20 at 13:20
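
As Peter B and MindSwipe suggest in the comments above, you can sidestep the runtime difference by rounding explicitly with a named `MidpointRounding` strategy before formatting. A minimal sketch (`AwayFromZero` here is just one of the available strategies):

```csharp
using System;
using System.Globalization;

// Round explicitly so the midpoint behaviour no longer depends on the runtime.
decimal d = Math.Round(0.125m, 2, MidpointRounding.AwayFromZero); // 0.13
double  f = Math.Round(0.125,  2, MidpointRounding.AwayFromZero); // 0.13

Console.WriteLine(d.ToString("F2", CultureInfo.InvariantCulture)); // 0.13
Console.WriteLine(f.ToString("F2", CultureInfo.InvariantCulture)); // 0.13
```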

1 Answer

It's documented here:

When the precision specifier controls the number of fractional digits in the result string, the result string reflects a number that is rounded to a representable result nearest to the infinitely precise result. If there are two equally near representable results:

  • On the .NET Framework and .NET Core up to .NET Core 2.0, the runtime selects the result with the greater least significant digit (that is, using MidpointRounding.AwayFromZero).
  • On .NET Core 2.1 and later, the runtime selects the result with an even least significant digit (that is, using MidpointRounding.ToEven).
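
A minimal illustration of the two strategies, via the `Math.Round` overloads that take a `MidpointRounding` argument:

```csharp
using System;

// 0.125 is an exact midpoint between 0.12 and 0.13, so the strategy alone decides.
Console.WriteLine(Math.Round(0.125, 2, MidpointRounding.ToEven));       // 0.12 (matches .NET Core 2.1+ formatting)
Console.WriteLine(Math.Round(0.125, 2, MidpointRounding.AwayFromZero)); // 0.13 (matches .NET Framework formatting)
```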

Note that double is a binary floating-point type: the value is stored in base 2 (like 11010.00110). A value written in decimal is often only approximated when stored in a double, because not every decimal fraction has an exact representation in binary.
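
You can inspect what a double actually stores with the round-trip `"G17"` format. 0.125 happens to be exactly representable (it is 1/8, a power of two), which is why it sits precisely on the midpoint; a value like 0.1 is only approximated:

```csharp
using System;
using System.Globalization;

Console.WriteLine(0.125.ToString("G17", CultureInfo.InvariantCulture)); // 0.125 (1/8 is exact in binary)
Console.WriteLine(0.1.ToString("G17", CultureInfo.InvariantCulture));   // 0.10000000000000001 (approximation)
```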

Tim Schmelter