I'm creating a simple math function to compare two numbers using .NET Framework 4.7.2.
The original function is this one:
public static bool AreNumbersEquals(float number, float originalNumber, float threshold) =>
    (number >= originalNumber - threshold) && (number <= originalNumber + threshold);
But to my surprise, when I test it using this statement
var result = AreNumbersEquals(4.14f, 4.15f, 0.01f);
the returned value is false.
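A quick way to see the rounded values involved (this printing code is mine, just for diagnosis) is to dump the operands with the round-trippable "G9" format; the extra digits should show that none of the literals is stored exactly as written:

// Sketch: inspect the actual float values behind the literals.
Console.WriteLine(4.14f.ToString("G9"));
Console.WriteLine(4.15f.ToString("G9"));
Console.WriteLine(0.01f.ToString("G9"));
Console.WriteLine((4.15f - 0.01f).ToString("G9"));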
So I split the function using this code:
namespace ConsoleApp1_netFramework
{
    internal class Program
    {
        static void Main(string[] args)
        {
            var qq = AreNumbersEquals(4.14f, 4.15f, 0.01f);
        }

        public static bool AreNumbersEquals(float number, float originalNumber, float threshold)
        {
            var min = originalNumber - threshold;
            var max = originalNumber + threshold;
            var minComparison = number >= min;
            var maxComparison = number <= max;

            // result1 is true (as expected)
            var result1 = minComparison && maxComparison;

            // result2 is false (why?)
            var result2 = number >= originalNumber - threshold && number <= originalNumber + threshold;

            return result2;
        }
    }
}
Now result1 is true as expected, but result2 is false.
Can anyone explain this?
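As a side note, the same check can also be written with a single subtraction. This is only a sketch I include for comparison (the method name is just for illustration), and I assume the intermediate subtraction can be widened by the JIT in the same way:

public static bool AreNumbersEqualsAbs(float number, float originalNumber, float threshold)
{
    // Sketch: equivalent tolerance check with a single intermediate result.
    // Math.Abs(float) is the System.Math overload available in .NET Framework.
    return Math.Abs(number - originalNumber) <= threshold;
}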
Update 1: I understand how floating point numbers and floating point arithmetic work at the CPU level. I'm interested in this particular case because at a high level the computations are the same, so I expected the same result from both ways of writing the comparison.
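For reference, this is a variant I would expect to behave identically in both shapes, based on my understanding that an explicit (float) cast forces an intermediate result down to float precision. It is a sketch for discussion, not code from the project, and the method name is mine:

public static bool AreNumbersEqualsCast(float number, float originalNumber, float threshold)
{
    // Sketch: the explicit (float) casts are intended to round the intermediate
    // results to float precision even if the JIT evaluates them in a wider register.
    return number >= (float)(originalNumber - threshold)
        && number <= (float)(originalNumber + threshold);
}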
The current project I'm working on is a game, so double and decimal are avoided as much as possible due to the performance penalty involved in arithmetic computations.
Update 2: When compiled for the 64-bit architecture the condition returns true, but when compiled for the 32-bit architecture it returns false.
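For anyone reproducing this, the bitness the process actually runs under can be confirmed with something along these lines (Environment.Is64BitProcess is the standard property; whether an AnyCPU build runs as 32-bit or 64-bit also depends on the "Prefer 32-bit" project setting):

// Sketch: print the process bitness next to the comparison result.
Console.WriteLine(Environment.Is64BitProcess ? "64-bit process" : "32-bit process");
Console.WriteLine(AreNumbersEquals(4.14f, 4.15f, 0.01f));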