
I have a proprietary library in a DLL (I don't have the code) that has been used for years from within VB6. I'm trying to upgrade the VB6 code to C#, and hope to make the C# code exactly replicate the VB6 behavior. I'm having trouble making the double precision results of some calculations done in the DLL match exactly when called from each environment.

In VB6 I have something like this (note: the file reading and writing is there to ensure the exact same values are used and generated):

Dim a As Double, b As Double, c As Double, d As Double
Open "C:\input.txt" For Binary As #1
Get #1, , a
Get #1, , b
Get #1, , c
Get #1, , d
Close #1
Dim t As New ProprietaryLib.Transform
t.FindLine a, b, c, d
Open "C:\output.txt" For Binary As #1
Put #1, , t.Slope
Put #1, , t.Intercept
Close #1

In C# I have something like this:

System.IO.BinaryReader br = new System.IO.BinaryReader(System.IO.File.Open(@"C:\input.txt", System.IO.FileMode.Open));
double a, b, c, d;
a = br.ReadDouble();
b = br.ReadDouble();
c = br.ReadDouble();
d = br.ReadDouble();
br.Close();
ProprietaryLib.Transform t = new ProprietaryLib.Transform();
t.FindLine(a, b, c, d);
System.IO.BinaryWriter bw = new System.IO.BinaryWriter(System.IO.File.Open(@"C:\output2.txt", System.IO.FileMode.Create));
bw.Write(t.Slope);
bw.Write(t.Intercept);
bw.Close();

I have verified that the input is being read identically (confirmed by re-writing the binary values to files), so identical double-precision numbers are being fed to the DLL. The output values are very similar but not identical: they are sometimes off in the least significant digits, out in the noise of the 15th-17th decimal place, and writing them out in binary confirms that they are different bit patterns. Does anyone have any advice on why these values might not be calculated quite identically, or on how I might fix or debug this?

Dennis Traub
user12861

1 Answer


This probably happens because of the different representations used for double-precision arithmetic:

  • VB6's native-code compiler keeps intermediate results in the coprocessor's 80-bit registers by default, a performance optimization of that era, so intermediates are not rounded to 64 bits after every operation.
  • .NET complies with the IEEE 754 standard for binary floating-point arithmetic.

You can compile with the /Op option (a Visual C++ compiler switch; see the comments below regarding its applicability to VB6) to improve floating-point consistency.

By default, the compiler uses the coprocessor’s 80-bit registers to hold the intermediate results of floating-point calculations. This increases program speed and decreases program size. However, because the calculation involves floating-point data types that are represented in memory by less than 80 bits, carrying the extra bits of precision (80 bits minus the number of bits in a smaller floating-point type) through a lengthy calculation can produce inconsistent results. (Source: MSDN)

Dennis Traub
  • Do you have any references for this "MS double" standard (even to discuss its existence)? And why is it that the binary representations of the inputs exactly match while the outputs are off by only a few bits? Are the representations extremely similar or something? – user12861 Apr 13 '12 at 20:00
  • @user12861 edited my answer, made it more precise and added a reference. – Dennis Traub Apr 13 '12 at 20:09
  • 3
    +1. Although I disagree with a couple of details. The default VB6 internal representation is *more* precise than a double, not less precise! 80 > 64. Furthermore compiler optimisations like this can still be very relevant nowadays in computationally intensive programs. – MarkJ Apr 13 '12 at 20:22
  • 2
    VB6 has no /OP switch, the reference is to a VC article. – Bob77 Apr 13 '12 at 21:30
  • I'm going to do some more investigating, but this does seem likely. I guess this means that there is no way to match the default vb6 behavior in c#. Disappointing. – user12861 Apr 13 '12 at 22:52