
After a Windows update, some calculated values have changed in their last digit, e.g. from -0.0776529085243926 to -0.0776529085243925. The last digit always decreases by one, and both even and odd digits are affected. This seems to be related to KB4486153, as reverting this update restores the previous values.

The change is already visible when debugging in Visual Studio and hovering over the variable. The value is later written to an output file and changes there as well (even without running the debugger).

Minimal reproducible example

using System.Collections.Generic;
using System.Diagnostics;

var output = -0.07765290852439255;
Trace.WriteLine(output); // This printout changes with the update.

var dictionary = new Dictionary<int, double>();
dictionary[0] = output; // Hover over dictionary to see the change in debug mode.

Background

The calculated value comes from

output[date] = input[date] / input[previousDate] - 1;

Disregarding the loss of precision in floating-point arithmetic, I can do the calculation in the Immediate window and get -0.07765290852439255 both before and after the upgrade.

However, when hovering over the output variable, I see
{[2011-01-12 00:00:00, -0.0776529085243926]} before the upgrade and
{[2011-01-12 00:00:00, -0.0776529085243925]} after, and this difference also propagates to the output file.

It seems like the calculated value is the same before and after the update, but its representation is rounded differently.
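One way to confirm that the underlying value is identical (a sketch I am adding for illustration, not part of the original question) is to compare the raw IEEE 754 bit patterns before and after the update with `BitConverter.DoubleToInt64Bits`:

```csharp
using System;

class BitCheck
{
    static void Main()
    {
        double output = -0.07765290852439255;

        // The binary64 bit pattern is independent of how the runtime
        // formats the value as a string, so it should match across updates.
        long bits = BitConverter.DoubleToInt64Bits(output);
        Console.WriteLine($"0x{bits:X16}"); // prints 0xBFB3E10F9E8D3218
    }
}
```

If this prints the same bits before and after the update, only the double-to-string conversion has changed.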

The input values are

{[2011-01-11 00:00:00, 0.983561000400506]} 
{[2011-01-12 00:00:00, 0.907184628008246]}
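Plugging these inputs into the formula above reproduces the value (a sketch; the variable names are mine, not from the original project):

```csharp
using System;

class Repro
{
    static void Main()
    {
        double prev = 0.983561000400506;   // input[previousDate]
        double curr = 0.907184628008246;   // input[date]

        // Same formula as in the question: a relative change between dates.
        double output = curr / prev - 1;
        Console.WriteLine(output);
    }
}
```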

The target framework is set to .NET Framework 4.6.1.

Question

Is there something I can do to get the previous behaviour while keeping the updates?

I know about loss of precision in floating-point calculations, but why does this change happen after an update and how can we guarantee that future updates don't change the representation of values?

KB4486153 is an update for Microsoft .NET Framework 4.8, see https://support.microsoft.com/en-us/help/4486153/microsoft-net-framework-4-8-on-windows-10-version-1709-windows-10-vers

Jaroslav K
  • Just based on the different output values you listed in your question, it looks like this update switched from rounding to truncation for this type of math problem. – Luke Aug 21 '19 at 14:08
  • Please show the true encoding of the value (preferably in hexadecimal), not its value converted to a string. I think you will find that the calculated value has not changed in the slightest. – Ben Voigt Aug 21 '19 at 14:21
  • @BenVoigt The calculated value has not changed and it is indeed the string representation I am concerned about as it is ultimately written to an output file. – Jaroslav K Aug 21 '19 at 14:48
  • Are you sure it is the windows version. There are differences with the Microprocessors (some have bugs) so you may get difference on a different PC. There were also patches generated by PC vendors to correct the errors. So having the wrong patches on the PC may give differences. Also compiling for Debug and Release are sometimes different. – jdweng Aug 21 '19 at 14:53
  • @jdweng I suspect a particular update, as reverting it also reverts the behaviour. – Jaroslav K Aug 21 '19 at 14:55
  • @JaroslavK: Then the conversion to string needs to be part of your minimal example, because the behavior of the code you have now doesn't change, not even a little. – Ben Voigt Aug 21 '19 at 15:27
  • I think your minimal example is `double v = -0.07765290852439255; Trace.WriteLine(v);` is it not? – Ben Voigt Aug 21 '19 at 15:28
  • Or even better, initialize the value using [`BitConverter.Int64BitsToDouble`](https://learn.microsoft.com/en-us/dotnet/api/system.bitconverter.int64bitstodouble) Then it's clear the difference is in the string conversion from the exact same numeric representation. – Ben Voigt Aug 21 '19 at 15:35
  • @BenVoigt Thank you! I have updated the example, and verified it by installing the update once again. – Jaroslav K Aug 21 '19 at 16:43
  • The best way to achieve stability is to let the binary to decimal conversion be correctly rounded. Every other conversion is arbitrary and subject to arbitrary changes. – aka.nice Aug 22 '19 at 07:01

1 Answer


OP is encountering one of the common trade-offs in floating-point math: with new software, does one want consistent answers or the best answer? (Here, the upgrade gives the better one.)


Some info to help advance the issue.

var output = -0.07765290852439255;

With common binary64 encoding¹, due to the binary nature of binary64, output takes on the exact value of

-0.077652908524392 54987160666132695041596889495849609375

The table below shows the prior and next representable doubles as well, in both hexadecimal and decimal floating point.

                      -0.077652908524392 5
-0x1.3e10f9e8d3217p-4 -0.077652908524392 53599381885351249366067349910736083984375
-0x1.3e10f9e8d3218p-4 -0.077652908524392 54987160666132695041596889495849609375
                      -0.077652908524392 55
-0x1.3e10f9e8d3219p-4 -0.077652908524392 56374939446914140717126429080963134765625
                      -0.077652908524392 6

The best rounded-to-nearest value of -0.077652908524392 55 (which is encoded exactly as -0.077652908524392 5498...) to one less digit is then -0.077652908524392 5. After the upgrade, code is printing the better answer - at least in this singular case.
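The adjacent values in the table above can be generated programmatically (a sketch I am adding, assuming the bit pattern listed above) by stepping the raw bits with `BitConverter.Int64BitsToDouble`:

```csharp
using System;

class Neighbors
{
    static void Main()
    {
        double mid = -0.07765290852439255;
        long bits = BitConverter.DoubleToInt64Bits(mid);

        // For a negative double, decrementing the raw bits moves toward
        // zero, incrementing moves toward negative infinity.
        double towardZero   = BitConverter.Int64BitsToDouble(bits - 1);
        double towardNegInf = BitConverter.Int64BitsToDouble(bits + 1);

        Console.WriteLine($"{towardZero:G17}");
        Console.WriteLine($"{mid:G17}");
        Console.WriteLine($"{towardNegInf:G17}");
    }
}
```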

I do not see this as a rounding change as much as an improved conversion to text.

Is there something I can do to get the previous behaviour while keeping the updates?

Perhaps, yet it looks like the update presents a better result.

how can we guarantee that future updates don't change the representation of values

Using hexadecimal floating-point output (as with "%a" in C) is one approach that guarantees the representation never changes, though non-decimal output is unfamiliar to most readers.
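In C#, two ways to pin the textual output (an illustration I am adding, not from the original answer) are the "G17" round-trip format, which always emits enough digits to recover the exact double, and printing the raw bits yourself, which sidesteps decimal conversion entirely:

```csharp
using System;
using System.Globalization;

class StableOutput
{
    static void Main()
    {
        double value = -0.07765290852439255;

        // "G17" always produces enough decimal digits to round-trip a
        // binary64 value exactly, independent of default formatting.
        string text = value.ToString("G17", CultureInfo.InvariantCulture);
        Console.WriteLine(text);

        // Raw bits: the C# analogue of C's "%a", immune to any change
        // in the runtime's binary-to-decimal conversion.
        Console.WriteLine($"0x{BitConverter.DoubleToInt64Bits(value):X16}");
    }
}
```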


¹ With other encodings, the exact value may have been closer to -0.077652908524392 6 than -0.077652908524392 5.

chux - Reinstate Monica
  • IMO, those debuggers should print the minimal number of decimals that will round to same float, like any decent REPL nowadays. Every two different float should have a different printed signature, or the output is not completely useful... – aka.nice Aug 22 '19 at 08:08
  • @aka.nice Agree about sufficient decimal digits. [Printf width specifier to maintain precision of floating-point value](https://stackoverflow.com/q/16839658/2410359) may be useful. – chux - Reinstate Monica Aug 22 '19 at 12:07
  • Thanks! The "solution" for me here is to accept the new behavior. – Jaroslav K Aug 23 '19 at 16:26