
I'm trying to rewrite part of an old system as a C# program. The old programs were written in C. Both programs read blob files into a byte array and fill an object/struct with the data.

In the original C code this is done with fread():

fread(&myStruct, sizeof(MYSTRUCT), 1, data);
fseek(data, 256, SEEK_SET);  /* 0 == SEEK_SET */
fread(&nextStruct, sizeof(NEXTSTRUCT), 1, data);

In C#, a BinaryReader is used:

using (BinaryReader reader = new BinaryReader(stream)) {

  double1 = reader.ReadDouble();
  double2 = reader.ReadDouble();

  reader.BaseStream.Position = 256;

  short1 = reader.ReadInt16();
   ... and so on ...
}

When running the programs, the results are usually the same, but sometimes there are small deviations, and for some blobs the deviations are huge.

While debugging the C code with Insight, I saw that the values extracted from the blob are not the same.

Examples:

  • For a double value: C# gives 212256608402.688, C gives 212256608402.68799.
  • For a short value: C# gives 2.337, C gives 2.3370000000000001.

What's the reason for this discrepancy, and is it fixable?
Some methods sum up all entries (up to a million) and calculate values from them; could this lead to an error of 5% or more? Are there other pitfalls to watch for that could cause faulty results?

Sam
  • http://stackoverflow.com/questions/21895756/why-are-floating-point-numbers-inaccurate – Steve Jun 13 '14 at 13:51
  • It's not different. Print more digits after the decimal point and you'll see it's the same. Binary floating point can't store most decimal fractions exactly. The difference is just where (at which digit) the printing function decides to stop – phuclv Jun 13 '14 at 14:10
  • In theory there may even be differences between two runs of the same C# program on the same machine. – CodesInChaos Jun 13 '14 at 14:28

1 Answer


2.3370000000000001 == 2.337 and 212256608402.688 == 212256608402.68799: these strings parse to bit-for-bit identical doubles. A double doesn't have enough precision to distinguish those real numbers, so both are rounded to the same value. There is no difference in precision, only a difference in the number of digits printed.

  • To expand, the difference is only in how the numbers are converted to strings. – Kendall Frey Jun 13 '14 at 13:59
  • So if I use these double values for a calculation the result should be the same? – user3070194 Jun 16 '14 at 07:17
  • @user3070194 The calculations can't very well reach back in time and see where the 64 bits representing the double came from, so yes, it should be the same. However, it's worth mentioning that your C and C# programs *might* not perform the exact same calculations (w.r.t. rounding and accuracy) even when the source code is superficially the same. –  Jun 16 '14 at 07:21