I refactored the code in this class into a form that is more friendly to my use cases. One issue I noticed during testing is that I cannot convert this particular equation to use `long` inputs, because the assignments to the `a` and `m` variables overflow on the multiplication/subtraction steps. Everything is just peachy with `int` inputs because they can be cast to `long` to prevent overflow. Is there anything one can do to get the proper behavior when the inputs are `long`?*
```csharp
public static Func<int, int> Scale(int inputX, int inputY, int outputX, int outputY) {
    // Normalize both ranges so that X is the lower bound.
    if (inputX > inputY) {
        var z = inputX;
        inputX = inputY;
        inputY = z;
    }
    if (outputX > outputY) {
        var z = outputX;
        outputX = outputY;
        outputY = z;
    }
    // Casting to double/long keeps the intermediate math from overflowing for int inputs.
    var a = ((((double)inputX) * (outputX - outputY)) / ((long)inputY - inputX)) + outputX;
    var m = ((double)(outputY - outputX)) / ((long)inputY - inputX);
    return (value) => ((int)((value * m) + a));
}
```
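
For reference, the `int` version behaves as expected; for example (the values here are chosen purely for illustration):

```csharp
// Maps the input range [0, 100] onto the output range [-5, 5].
Func<int, int> scaler = Scale(0, 100, -5, 5);
Console.WriteLine(scaler(0));    // -5
Console.WriteLine(scaler(50));   // 0
Console.WriteLine(scaler(100));  // 5
```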
For example, if I replace every instance of `int` in the function above with `long`, then `result` gets an incorrect value in the following code:

```csharp
Func<long, long> scaler = Scale(long.MinValue, long.MaxValue, -5, 5);
var result = scaler(long.MaxValue - 3);
```
The expected result is 4, but the actual result of -9223372036854775808 is not only wrong, it ends up far outside the defined range of [-5, 5].
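
As far as I can tell, the root cause is the denominator: with these bounds the true span long.MaxValue - long.MinValue is 2^64 - 1, which cannot be represented in a `long`, so the subtraction silently wraps:

```csharp
long inputX = long.MinValue;
long inputY = long.MaxValue;
// The true span is 2^64 - 1, which does not fit in a long;
// under unchecked arithmetic the subtraction wraps to -1.
Console.WriteLine(unchecked(inputY - inputX)); // -1
```

With a denominator of -1, `m` and `a` come out wildly wrong, and the final cast from `double` back to `long` overflows as well, which is where -9223372036854775808 (long.MinValue) comes from.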
*Other than straight-up using `BigInteger` or implementing 64-bit multiplication and division in software: I am already implementing those operations as a workaround and am looking for alternative solutions that I haven't yet come across.
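
To make that footnote concrete, here is a minimal sketch of the kind of `BigInteger` workaround I mean (not my actual code; the class name and layout are just for illustration), where all intermediate math is exact and only the final result is narrowed back to `long`:

```csharp
using System;
using System.Numerics;

public static class LongScaler {
    // Sketch: keep the spans as exact BigInteger values so neither the
    // subtraction (a span of up to 2^64 - 1) nor the multiplication can overflow.
    public static Func<long, long> Scale(long inputX, long inputY, long outputX, long outputY) {
        if (inputX > inputY) { var z = inputX; inputX = inputY; inputY = z; }
        if (outputX > outputY) { var z = outputX; outputX = outputY; outputY = z; }
        BigInteger inSpan = (BigInteger)inputY - inputX;
        BigInteger outSpan = (BigInteger)outputY - outputX;
        return value => (long)(((BigInteger)value - inputX) * outSpan / inSpan + outputX);
    }
}
```

With this version, `LongScaler.Scale(long.MinValue, long.MaxValue, -5, 5)(long.MaxValue - 3)` does return 4; the question is whether the same result is achievable without dropping into arbitrary-precision arithmetic.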