
If JavaScript's Number and C#'s double are specified the same (IEEE 754), why are numbers with many significant digits handled differently?

var x = (long)1234123412341234123.0; // 1234123412341234176   - C#
var x =       1234123412341234123.0; // 1234123412341234200   - JavaScript

I am not concerned with the fact that IEEE 754 cannot represent the number 1234123412341234123. I am concerned with the fact that the two implementations do not act the same for numbers that cannot be represented with full precision.

This may be because IEEE 754 is underspecified, because one or both implementations are faulty, or because they implement different variants of IEEE 754.

This problem is not related to problems with floating point output formatting in C#. I'm outputting 64-bit integers. Consider the following:

long x = 1234123412341234123;
Console.WriteLine(x); // Prints 1234123412341234123
double y = 1234123412341234123;
x = Convert.ToInt64(y);
Console.WriteLine(x); // Prints 1234123412341234176

The same variable prints different strings because the values are different.
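
To take string formatting out of the picture entirely, the raw bits of the double can be inspected as well. A minimal sketch, using BitConverter.DoubleToInt64Bits (a JavaScript engine parsing the same literal should end up with the same IEEE 754 bit pattern):

double y = 1234123412341234123.0;
long bits = BitConverter.DoubleToInt64Bits(y); // reinterpret the double's 64 bits as a long
Console.WriteLine(bits.ToString("X16"));       // the stored bit pattern, independent of any formatting
Console.WriteLine((long)y);                    // Prints 1234123412341234176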

Hans Malherbe
  • Why do you cast to long? – Alex Sikilinda May 27 '15 at 07:01
  • You are comparing a double with a float, I think. If you want to calculate with very big numbers, I suggest you use http://mikemcl.github.io/decimal.js/ and provide the numbers as strings to get the number of digits you want. This doesn't answer your question but might help a bit if you are facing an issue. Large numbers usually get saved as a formula value, and rounding errors can occur because of that; that's why floating points aren't precise. You might have stumbled into a large-number rounding imprecision in JavaScript. As to the reason, maybe look into https://code.google.com/p/v8/ – Tschallacka May 27 '15 at 07:20
  • There is another interesting problem here... The .NET normally only shows 15 digits of precision, instead of the full 17. – xanatos May 27 '15 at 07:22
  • `(1234123412341234123.0).ToString("F")` may demonstrate the issue more accurately (outputs `1234123412341230000.00`). But as [this answer](http://stackoverflow.com/a/1658420/1324033) describes, C# will round the number to 15 sf first – Sayse May 27 '15 at 07:44
  • I don't think this problem has anything to do with floating point output formatting. The long cast takes care of that. The loss of precision occurs when the double literal is parsed into a double temporary variable. Consider the output of (long)1123412341234125.0. There's no rounding here even though it is a 16 digit number that started life as a double. – Hans Malherbe May 27 '15 at 08:18
  • 1
    @Sayse That answer is partially false... Try `(1234123412341234123.0).ToString("G17")` (https://ideone.com/xdfQD6). It is the `F` formatter that is "borked" – xanatos May 27 '15 at 12:02

3 Answers


There are multiple problems here...

You are using long instead of double. You would need to write:

double x = 1234123412341234123.0;

or

var x = 1234123412341234123.0;

The other problem is that .NET rounds doubles to 15 significant digits before converting them to a string (so before, for example, printing them with Console.WriteLine()).

For example:

string str = x.ToString("f"); // 1234123412341230000.00

See for example https://stackoverflow.com/a/1658420/613130

Internally the number still has up to 17 significant digits; it is just displayed with 15.

You can see this if you use the round-trip format:

string str2 = x.ToString("r"); // 1.2341234123412342E+18
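
Round-tripping that string back to a double gives the identical value, which confirms that the default 15-digit display is only a formatting choice, not a loss of data (a quick check; it assumes str2 was produced with a '.' decimal separator, hence the invariant culture for parsing):

double roundTripped = double.Parse(str2, System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine(roundTripped == x); // True - nothing was lost by the 15-digit display
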
xanatos

The numbers are not handled differently; they are only displayed differently.

.NET displays the number with 15 significant digits, while JavaScript displays it with 17 significant digits. The representation of a double can hold 15-17 significant digits, depending on which number it contains. .NET only shows the number of digits that any double is guaranteed to support, while JavaScript shows all of the digits, so the precision limitation may show up.

.NET starts using scientific notation when the exponent is 15, while JavaScript starts using it when the exponent is 21. That means that JavaScript will display numbers with 18 to 20 digits padded with zeroes at the end.
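
On the C# side that threshold is easy to see with the default format (a small illustration; output as observed with the default double formatting of that era, newer runtimes switch to scientific notation at the same exponent):

Console.WriteLine(999999999999999.0);  // 999999999999999 - exponent 14, plain notation
Console.WriteLine(1000000000000000.0); // 1E+15           - exponent 15, scientific notation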

Converting the double to a long in your example circumvents how .NET displays doubles. The number is converted without the rounding that hides the precision limitation, so in this case you see the actual value that is in the double. The reason that there aren't just zeroes beyond the 17th digit is that the value is stored in binary form, not decimal form.
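
This also explains the specific tail digits: between 2^60 and 2^61 consecutive doubles are 256 apart (2^8, because only 52 fraction bits remain for numbers that large), so every representable value in that range is a multiple of 256, and 1234123412341234176 is the nearest such multiple to the original literal. A small check of that spacing claim:

double y = 1234123412341234123.0;
Console.WriteLine((long)y);       // 1234123412341234176
Console.WriteLine((long)y % 256); // 0 - the stored value lies exactly on a 256 boundary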

Guffa

Disclaimer: This is based on the standard's Wikipedia page.

According to the Wikipedia page for the standard, the decimal64 format has 16 digits of precision.

Past that, it is the implementor's decision what should be done with the additional digits.

So you could say that the standard is underspecified, but then these numbers aren't designed to fit within the standard's specification anyway. Both languages have ways of handling bigger numbers, so those options may be better suited for you.
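
On the C# side, for example, System.Numerics.BigInteger keeps integers of this size exact (a minimal sketch; decimal.js, linked in the comments above, plays a similar role for JavaScript):

using System.Numerics;

BigInteger big = BigInteger.Parse("1234123412341234123");
Console.WriteLine(big);     // Prints 1234123412341234123 - no precision is lost
Console.WriteLine(big + 1); // Prints 1234123412341234124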

Sayse
  • 16 digit precision being mandated seems to imply both .NET and Chrome are broken since the number 9111234123412341 gets changed to 9111234123412340 in both implementations. – Hans Malherbe May 27 '15 at 10:14
  • @HansMalherbe - But you are judging this based on their string representation rather than the actual number (In C# at least they get displayed to [15 sf](http://stackoverflow.com/a/1658420/1324033)) – Sayse May 27 '15 at 10:17