Are these two C# methods completely deterministic, i.e. do they produce the same result across all platforms?
`Fix64` is a struct that has a `rawValue` field of type `long`. `ONE` is a constant defined like this: `const long ONE = 1L << 32;`
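For context, here is a minimal sketch of the relevant parts of `Fix64`; only the `long` field, the constant and the constructor matter here, and the `readonly` modifier and access levels are incidental:

```csharp
public struct Fix64 {
    // Fixed-point value stored in a 64-bit integer; since ONE is 1L << 32,
    // the low 32 bits hold the fractional part.
    readonly long rawValue;

    // The fixed-point representation of 1.0.
    const long ONE = 1L << 32;

    // The long constructor just stores the raw value.
    Fix64(long rawValue) {
        this.rawValue = rawValue;
    }
}
```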
Function 1:
public static explicit operator Fix64(double value) {
return new Fix64((long)(value * ONE));
}
The `Fix64` constructor that takes a `long` just assigns it to the `rawValue` field. The operation in question here is the multiplication: `ONE` is converted to `double`, and the two `double` values are then multiplied. According to the C# specification, this can happen at higher than `double` precision. The result is then truncated by the `long` cast. Is there any chance for the least significant bit of the resulting `long` value to differ if different precisions are used for the multiplication on different platforms? Or is this method completely deterministic?
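If an extra cast does pin the intermediate result to standard `double` precision (the same question I ask about Function 2 below), the defensive variant of Function 1 I have in mind would look like the following sketch; the added `(double)` cast is the only change from the code above:

```csharp
public static explicit operator Fix64(double value) {
    // The extra (double) cast is intended to force the product, which the
    // compiler may keep at extended precision, back down to a 64-bit double
    // before it is truncated to long. Whether it is needed is part of the question.
    return new Fix64((long)(double)(value * ONE));
}
```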
Function 2:
public static explicit operator double(Fix64 value) {
return (double)value.rawValue / ONE;
}
This is similar to the 1st example, except that here the operation is a division between `double`s and the result is returned as a `double`. Is it possible that, if we compare the result of this method with another `double`, the compiler leaves the resulting `double` at higher precision during that comparison? Would another cast ensure that the comparison is always deterministic?
(double)((double)value.rawValue / ONE)
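Written out as a full method, the version with the extra cast would be:

```csharp
public static explicit operator double(Fix64 value) {
    // The outer (double) cast is meant to force the quotient down to
    // standard double precision before the value leaves the method.
    return (double)((double)value.rawValue / ONE);
}
```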
EDIT: These two functions convert between the FixedPoint64 type and the double type. The argument here is that, because each function performs only a single floating-point operation, no extended-precision intermediate value is fed into a further operation; by immediately truncating the result to standard precision, the calculation is supposed to be deterministic. Or are there flaws in that logic?
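To make "deterministic" concrete, this is roughly the kind of cross-platform check I have in mind; the sample values are arbitrary, and comparing raw bit patterns sidesteps any question about the precision used in an `==` comparison:

```csharp
using System;

static class DeterminismCheck {
    static void Main() {
        double[] samples = { 0.1, 1.0 / 3.0, 12345.6789 };
        foreach (double d in samples) {
            // Round-trip through the two conversion operators in question.
            Fix64 fixedValue = (Fix64)d;
            double roundTripped = (double)fixedValue;
            // If both conversions are deterministic, the printed bit pattern
            // must be identical on every platform.
            Console.WriteLine(BitConverter.DoubleToInt64Bits(roundTripped).ToString("X16"));
        }
    }
}
```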