The facts that:
- this function returns different output in C++ versus C# given normal program input, and
- this function returns identical output in C++ versus C# given controlled identical input
suggest that the normal program inputs to this function differ between C++ and C#.
Regarding the latter, in a comment the OP states “I also created a sample test application in C++ and C# and hard coded the input. By hard coding the input to doubleToInt function, I am getting same results.” This suggests that, given identical inputs, the C++ and C# versions of the function return identical outputs. We would deduce from this that the cause of different outputs is different inputs.
The OP also states, “While debugging, to compare the results, if I see the output of C++ and C#, it is different for the same set of values.” However, this is inconclusive, because debuggers and print statements used for debugging often do not print the complete, exact value of floating-point objects. Quite often, they round to six significant digits. For example, a simple `std::cout << x` displays both 10000.875 and 10000.9375 as “10000.9”, but they are different numbers and would yield different outputs in `doubleToInt`.
In conclusion, the problem may be that earlier work in the program, before `doubleToInt` is called, experiences floating-point rounding or other errors and passes different values to `doubleToInt` in the C++ and C# versions. To test for this, print the exact inputs to `doubleToInt` and see if they differ in the two versions.
Printing the inputs exactly might be done in several ways:
- Use the `%a` format if your implementation supports it. (This is a C feature for printing floating-point values in hexadecimal floating-point notation. Some C++ libraries support it when `printf` is used.)
- Set the precision very high and print, as with `std::cout.precision(100)`. Some C++ implementations may still not print the exact value (which is a quality issue), but they should print enough digits to distinguish the exact value from neighboring `double` values.
- Print the bytes of the representation of the value (by converting a pointer to the floating-point object to a pointer to `unsigned char` and printing the individual `unsigned char` objects).
Based on the code presented, the problem is unlikely to be floating-point issues in `doubleToInt`. The language definitions permit some slack in floating-point evaluation, so it is theoretically possible that `d+.1` is evaluated with excess precision, instead of normal `double` precision, and then converted to `int` or `short`. However, this would result in different results only in very rare cases, where `d+.1` evaluated in `double` precision rounds up to an integer but `d+.1` evaluated in excess precision remains just below the integer. This requires that about 38 bits (53 bits in the `double` significand minus 16 bits in the integer portion plus one bit for rounding) have specific values, so we would expect it to occur only about 1 in 275 billion times by chance (assuming a uniform distribution is a suitable model).
In fact, the adding of .1 suggests to me that somebody was trying to correct for floating-point errors in a result they expected to be an integer. If somebody had a “natural” value they were trying to convert to an integer, the usual way to do it would be to round to the nearest value (as with `std::round`) or, sometimes, to truncate. Adding .1 suggests they were trying to calculate something they expected to be an integer but were getting results like 3.999 or 4.001, due to floating-point errors, so they “corrected” it by adding .1 and truncating. Thus, I suspect floating-point errors exist earlier in the program. Perhaps they are exacerbated in C#.