
I'm experiencing some rounding issues between .NET Core 3.0 and .NET Framework/.NET Core 2.x.

I've been searching the web for a while, but I couldn't find the right term to search for, so I'm posting it here.

I wrote the following sample console app to illustrate my problem:

using System;

class Program
{
    static void Main(string[] args)
    {
        const double x = 123.4567890 / 3.14159265358979;
        Console.WriteLine(x);

        const double y = 98.76543210 / 3.14159265358979;
        Console.WriteLine(y);

        const double z = 11.2233445566778899 / 3.14159265358979;
        Console.WriteLine(z);

        Console.ReadKey();
    }
}

I ran this program on different frameworks and got the following output:

  • .NET Framework 4.7.2
    • 39,2975164552063
    • 31,4380134506439
    • 3,57250152843761
  • .NET Core 2.0:
    • 39,2975164552063
    • 31,4380134506439
    • 3,57250152843761
  • .NET Core 3.0:
    • 39,2975164552063
    • 31,438013450643936
    • 3,5725015284376096

As you can see, the .NET Core 3.0 output differs from the first two and has more precision starting from the 13th digit after the decimal point.

I assume that .NET Core 3.0 is simply more precise.

But my problem is that I want to migrate from .NET Framework to .NET Core 3.0. Before migrating, I wrote tests for the .NET Framework library to make sure the calculations give the same output after migrating to .NET Core 3.0. For that, I just wrote tests like:

//Arrange
const double expectedValue = 0.1232342802302;

//Act
var result = Subject.Calculate();

//Assert
result.Should().Be(expectedValue);

If I migrate the code and run the tests that I wrote against .NET Framework, they fail with minor differences like:

Expected item[0] to be 0.4451391569556069, but found 0.44513915698437145.
Expected result to be -13.142142181869094, but found -13.142142181869062.

My question is: how do I force .NET Core 3.0 to round the same way as .NET Framework/.NET Core 2.0 does, so that I won't get these minor differences?

And could anyone explain this difference / describe the changes in rounding in .NET Core 3.1 versus .NET Framework?

NielsDePils
  • Did it happen to also involve a change from 32-bit to 64-bit? – Andrew Morton Feb 10 '20 at 10:00
  • Is there any particular reason you are not using `Math.PI`? – Jimi Feb 10 '20 at 10:03
  • @FrancescoGimignano not particularly, but this is an illustration to show the differences. And to make sure Math.PI is the same, I just picked the value from the const – NielsDePils Feb 10 '20 at 10:06
  • @AndrewMorton I haven't tried. The old version I wrote tests for is x86; .NET Core isn't working on x86, I guess. Could you give me more information to dive into? – NielsDePils Feb 10 '20 at 10:13
  • @NielsDePils Yes: [C# rounding differently depending on platform?](https://stackoverflow.com/questions/47302758/c-sharp-rounding-differently-depending-on-platform). – Andrew Morton Feb 10 '20 at 10:18
  • @NielsDePils the difference is only in formatting, it's [known, documented](https://devblogs.microsoft.com/dotnet/floating-point-parsing-and-formatting-improvements-in-net-core-3-0/), and the .NET Core 3.0 code is actually the correct one according to IEEE 754-2008. To get the old behaviour, explicitly specify the number of digits when formatting. Eg `Console.WriteLine("{0:G15}",y)` instead of `Console.WriteLine(y)` – Panagiotis Kanavos Feb 10 '20 at 14:20
  • Check the [IEEE Floating-point](https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-core-3-0#ieee-floating-point) section in [What's new in .NET Core 3.0](https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-core-3-0#ieee-floating-point) – Panagiotis Kanavos Feb 10 '20 at 14:22
  • As for `Subject.Calculate();`, post the code. Does it involve formatting and parsing? Or only calculations? Is `0.1232342802302` the correct value according to IEEE 754, or the old wrong default string? What happens if you use `"G17"` in .NET 4.7? – Panagiotis Kanavos Feb 10 '20 at 14:33
  • @PanagiotisKanavos Calculate is pseudo-code. I can't post that code because it isn't open source. Code aside, I just want to know what to google to dig into the problem and find out what's happening with rounding between the different frameworks. What formatting standard is .NET Framework using if .NET Core uses IEEE 754-2008? And is there a way to force .NET Core to use the old standard? – NielsDePils Feb 10 '20 at 19:57
  • @NielsDePils how can anyone answer a question about your code, when you don't post that code? The differences you posted in the first part of your question are purely due to formatting changes. The second part, whatever `Subject.Calculate()` produces, is impossible to answer without knowing what the calculation is. If there is a real difference, you can post a short reproducible example – Panagiotis Kanavos Feb 11 '20 at 07:59
  • @NielsDePils as for the code not being open source, math isn't owned by anyone. Courts have ruled that countless times already. Most analysis/signal processing algorithms are already so old they're out of patent protection, if they ever were. Common coding techniques aren't copyrightable either. No company can claim trade secrets for something used by everyone else already either. – Panagiotis Kanavos Feb 11 '20 at 08:04
  • @NielsDePils Whatever your code does, you can create a short example that reproduces your problem. It's quite possible that the problem has *nothing* to do with .NET differences, and is caused by floating point precision loss – Panagiotis Kanavos Feb 11 '20 at 08:05
  • @PanagiotisKanavos I created a short example which reproduces my problem. However, it differs specifically when changing the platform target from x86 to x64; try `Math.Atan(-1 / 0.81692893152872526);` in .NET Core and .NET Framework. – NielsDePils Feb 11 '20 at 12:52

2 Answers


That's bizarre... I've set up a solution with 4 projects:

  • a .NET Framework 4.7.2 project
  • a .NET Core 2.0 project
  • a .NET Core 3.1 project
  • a .NET Core 3.1 project to run all 3 projects at once.

In every project I used the Math.PI constant to see if something changed, and indeed it did, but not how I expected.

If I run the fourth project, the one that calls all 3, the values from all 3 projects are the same (screenshot omitted). But if I run the .NET Framework, .NET Core 2, and .NET Core 3 projects separately, I get different results (screenshots omitted).

So for some reason I get different results from yours in .NET Core when using the Math.PI constant, and they are the same between versions 2 and 3.1. However, I get the same result as yours with .NET Framework, which differs from the two .NET Core results. But as we saw above, if you run all 3 projects from another project made in .NET Core, you get the same results, meaning that maybe it is the calling project that determines which rounding is used. Unfortunately I can't find the exact reason why this happens, but if I remember correctly there are some minor differences in how rounding works on Windows versus Unix systems. Since .NET Core is cross-platform, I think it is using the Unix rounding rather than the Windows one that .NET Framework probably uses, leading to these differences.

EDIT: This is going beyond science now... I used the constant value 3.14159265358979 instead of Math.PI, which in theory is the same (according to the Microsoft documentation). But by using this value the results change again! If you run the test where all 3 projects are running, you still get the same results for all 3, but they are different from the previous run:

39,2975164552063
31,438013450643936
3,5725015284376096

When launching the .NET Framework project you get the same results as before, while running the .NET Core ones gives the results above. So using the constant value instead of Math.PI changes the results once again. But this makes no sense at all, since under the hood Math.PI is just a double constant with the value 3.14159265358979.
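
Whether the literal and Math.PI really end up as the same double can be checked by printing the round-trippable form and the raw bits of each. A minimal sketch of such a check (my own addition, not part of the test projects above):

using System;

class PiCheck
{
    static void Main()
    {
        const double literal = 3.14159265358979;

        // Print the full round-trippable value and the raw IEEE 754 bit pattern
        // of each; two doubles are the same value only if the bits are identical.
        Console.WriteLine($"{Math.PI:G17}  0x{BitConverter.DoubleToInt64Bits(Math.PI):X16}");
        Console.WriteLine($"{literal:G17}  0x{BitConverter.DoubleToInt64Bits(literal):X16}");
    }
}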

EDIT 2: I wrote the same program in Python:

def main():
    x = 123.4567890 / 3.14159265358979
    print(x)
    y = 98.76543210 / 3.14159265358979
    print(y)
    z = 11.2233445566778899 / 3.14159265358979
    print(z)


if __name__ == "__main__":
    main()

and the results are identical to .NET Core

39.2975164552063
31.438013450643936
3.5725015284376096

I then tried to do the same using Go

package main

import "fmt"

func main() {
    x := 123.4567890 / 3.14159265358979
    fmt.Println(x)
    y := 98.76543210 / 3.14159265358979
    fmt.Println(y)
    z := 11.2233445566778899 / 3.14159265358979
    fmt.Println(z)
}

And in this case the results are the following

39.2975164552063
31.43801345064394
3.5725015284376096

The y has been rounded to ...94, while x and z are the same as in Python and .NET Core.

As a final test I tried the same with JavaScript/Node.js:

let x = 123.456789 / 3.14159265358979;
console.log(x);
let y = 98.7654321 / 3.14159265358979;
console.log(y);
let z = 11.2233445566778899 / 3.14159265358979;
console.log(z);

But here too the results are the same as Python and .NET Core:

39.2975164552063
31.438013450643936
3.5725015284376096

Since Python, JS, .NET Core and Go (if you don't consider the y rounding) are cross-platform, I assume there is something tied to the Windows ecosystem that .NET Framework relies on. It would be interesting to try other frameworks/languages tied to Windows, but I don't know of any other than .NET Framework (maybe Visual Basic?).

Jimi
  • 1
    I didn't use the Math.PI constant because to make sure it has the same value in each framework. It could be that the const value in .Net Core differs from .Net Framework on low level. What I got for the const value of Math.PI in NetCore is 3.14159265358979 And in NetFramework: 3.14159265358979. I don't see any difference. But anyway, your answer shows indeed weird results – NielsDePils Feb 10 '20 at 11:04
  • It's weird, but I'm just curious whether there's an expert who can explain this rounding issue. There must be a different rounding strategy in each language. As a software developer, this kind of problem is far above me; it's more like computer science / compiler construction – NielsDePils Feb 10 '20 at 13:46
  • Oh, and by the way, I just came to the conclusion that I forgot to apply .ToString("R") in my example code... but that doesn't solve the problem in my actual case – NielsDePils Feb 10 '20 at 13:47
  • That would just round the printed result (not the actual value), but yes, it doesn't actually explain or "solve" the difference in rounding between frameworks/languages – Jimi Feb 10 '20 at 13:51
  • I read this.. more about the `toString` I forgot in my example code. [link](https://devblogs.microsoft.com/dotnet/floating-point-parsing-and-formatting-improvements-in-net-core-3-0/) – NielsDePils Feb 10 '20 at 14:17
  • you must print with more precision (at least 20 digits) to see the actual result – phuclv Nov 09 '20 at 02:11

This is a documented change that makes the formatter and parser compliant with IEEE 754-2008. From the IEEE Floating-Point section in the What's new in .NET Core 3.0 document:

Floating point APIs are being updated to comply with IEEE 754-2008 revision. The goal of these changes is to expose all required operations and ensure that they're behaviorally compliant with the IEEE spec. For more information about floating-point improvements, see the Floating-Point Parsing and Formatting improvements in .NET Core 3.0 blog post.

The examples in the blog post actually address what happened here with Pi (emphasis mine):

ToString(), ToString("G"), and ToString("R") will now return the shortest roundtrippable string. This ensures that users end up with something that just works by default.

An example of where it was problematic was Math.PI.ToString() where the string that was previously being returned (for ToString() and ToString("G")) was 3.14159265358979; instead, it should have returned 3.1415926535897931.

The previous result, when parsed, returned a value which was internally off by 7 ULP (units in last place) from the actual value of Math.PI. This meant that it was very easy for users to get into a scenario where they would accidentally lose some precision on a floating-point value when they needed to serialize/deserialize it.

The actual data hasn't changed. The y and z values do have greater precision, even in .NET 4.7. What did change is the formatter. Before Core 3.x, the formatter would use only 15 digits even if the values had greater precision.
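
For illustration, here is a minimal sketch of the round-trip loss the blog post describes, using the G15 and G17 specifiers it mentions (the inline output comments assume a culture with '.' as the decimal separator):

using System;

class RoundTrip
{
    static void Main()
    {
        // Format Math.PI with only 15 digits (the old default behavior),
        // parse it back, and check whether the original value survives.
        string fifteenDigits = Math.PI.ToString("G15");
        double reparsed = double.Parse(fifteenDigits);

        Console.WriteLine(fifteenDigits);                                     // 3.14159265358979
        Console.WriteLine(Math.PI == reparsed);                               // False: 15 digits don't round-trip
        Console.WriteLine(Math.PI.ToString("G17"));                           // 3.1415926535897931
        Console.WriteLine(double.Parse(Math.PI.ToString("G17")) == Math.PI);  // True: G17 round-trips
    }
}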

The blog post explains how to get the old behavior:

For ToString() and ToString("G") you can use G15 as the format specifier as this is what the previous logic would do internally.

The following code:

const double y = 98.76543210 / 3.14159265358979;
Console.WriteLine(y);
Console.WriteLine("{0:G15}",y);

will print:

31.438013450643936
31.4380134506439
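
If, after ruling out formatting, the computed values themselves still differ (for example between x86 and x64 builds, as mentioned in the comments), exact equality is too strict for the tests. A sketch of a tolerance-based assertion, assuming FluentAssertions (which the `Should()` syntax in the question suggests) and an arbitrary tolerance of 1e-9:

//Arrange
const double expectedValue = -13.142142181869094;

//Act
var result = Subject.Calculate();

//Assert: allow a small absolute tolerance instead of exact equality,
//so tiny x86/x64 or framework differences don't fail the test
result.Should().BeApproximately(expectedValue, 1e-9);
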
Panagiotis Kanavos
  • Thanks for the explanation @panagiotis. Anyway, my tests don't do anything with formatting. I'm asserting double values from comprehensive calculation functions, and those differ when I use different CPU architectures; I just found out that running as x64 gives different results compared to x86. – NielsDePils Feb 10 '20 at 20:02
  • @NielsDePils you didn't post any calculation code though. Only formatting code, where the difference is documented. Post whatever is behind `Calculate()` along with sample data – Panagiotis Kanavos Feb 11 '20 at 07:59