
I'm creating a simple math function to compare two numbers using .NET Framework 4.7.2.

The original function is this one:

public static bool AreNumbersEquals(float number, float originalNumber, float threshold) => 
(number >= originalNumber - threshold) && (number <= originalNumber + threshold);

But to my surprise, when I test it using this statement

var result = AreNumbersEquals(4.14f, 4.15f, 0.01f);

the returned value is false

so I split the function using this code:

namespace ConsoleApp1_netFramework
{
    internal class Program
    {
        static void Main(string[] args)
        {
            var qq = AreNumbersEquals(4.14f, 4.15f, 0.01f);
        }

        public static bool AreNumbersEquals(float number, float originalNumber, float threshold)
        {
            var min = originalNumber - threshold;
            var max = originalNumber + threshold;
            var minComparison = number >= min;
            var maxComparison = number <= max;

            // result1 is true (as expected)
            var result1 = minComparison && maxComparison;

            // result2 is false (why?)
            var result2 = number >= originalNumber - threshold && number <= originalNumber + threshold;

            return result2;
        }
    }
}

Now result1 is true as expected, but result2 is false.

Can anyone explain this?

Update 1: I understand how floating-point numbers and floating-point arithmetic work at the CPU level. I'm interested in this particular case because at a high level the computations are the same, so I expected the same result from both ways of writing the comparison.

The current project I'm working on is a game, so double and decimal are avoided as much as possible due to the performance penalty involved in arithmetic computations.

Update 2: When compiled for a 64-bit architecture the condition returns true, but when compiled for a 32-bit architecture it returns false.
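For reference, one way to check at runtime which flavor of process the repro is actually running as (the "Prefer 32-bit" flag on AnyCPU projects can override the platform target) is a one-liner like this - not part of the original repro, just a sanity check:

// Prints whether the current process is running as 32-bit or 64-bit, which is
// what actually determines the JIT being used (and therefore the result seen here).
Console.WriteLine(Environment.Is64BitProcess ? "64-bit process" : "32-bit process");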

vcRobe
    Since you didn't mention that you already know that floating-point numbers cannot represent (most) decimal numbers accurately, let me recommend the following introductory related question: https://stackoverflow.com/q/588004/87698 – Heinzi Apr 21 '22 at 14:24
  • Out of curiosity what happens if you compile into 32-bit? – Charlieface Apr 21 '22 at 15:57
  • There is a `float.Epsilon` that you can use as delta for your calculation. What is the result, if your method looks like this `public static bool AreNumbersEquals(float number, float originalNumber, float threshold) => (number >= originalNumber - threshold - float.Epsilon) && (number <= originalNumber + threshold + float.Epsilon);` – Demetrius Axenowski Apr 21 '22 at 16:15
  • @Charlieface the unexpected result is when compiled for a 32-bit architecture – vcRobe Apr 21 '22 at 18:47
  • @DemetriusAxenowski that epsilon is too tiny and the results are the same – vcRobe Apr 21 '22 at 18:48

1 Answer


Can anyone explain this?

Yes. For result1, you're assigning intermediate results to float variables, which forces them back to 32 bits - potentially truncating the results. (As these are local variables, it's possible the results wouldn't actually be truncated; the spec is tricky on this point.)

For result2, you're performing the comparisons "inline", which allows all the arithmetic - and the comparison - to be done at a higher precision, potentially changing the results.
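As an aside (a sketch based on that explanation, not something from the original answer): adding explicit casts back to float on each intermediate makes the inline form behave like the split form, because an explicit cast to float tells the compiler to round the value back to 32-bit precision:

public static bool AreNumbersEqualsRounded(float number, float originalNumber, float threshold) =>
    // The explicit (float) casts force each intermediate result back to 32 bits,
    // so the comparison can no longer be carried out at a higher precision.
    number >= (float)(originalNumber - threshold) &&
    number <= (float)(originalNumber + threshold);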

Fundamentally, 4.14f, 4.15f and 0.01f are not precisely 4.14, 4.15 and 0.01... so anything that assumes they will be is likely to have some subtle problems. The precise values of those floating point literals are:

  • 4.139999866485595703125
  • 4.150000095367431640625
  • 0.00999999977648258209228515625

As you can see, if you did the arithmetic by hand using those values, you would indeed find that the number is beyond the threshold. It's the loss of precision in intermediate values that makes the difference in your first test.
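To make that concrete, here is a small repro (variable names are mine) that redoes the arithmetic in double, where the extra precision is explicit instead of depending on the JIT - it can be dropped into Main, assuming using System;:

double number         = 4.14f;   // exactly 4.139999866485595703125
double originalNumber = 4.15f;   // exactly 4.150000095367431640625
double threshold      = 0.01f;   // exactly 0.00999999977648258209228515625

double minWide  = originalNumber - threshold; // 4.14000009559094905853271484375, just above number
float  minFloat = (float)minWide;             // rounds back down to 4.14f

Console.WriteLine(number >= minWide);   // False: at full precision the number is below the minimum
Console.WriteLine(4.14f >= minFloat);   // True: rounding the minimum back to float makes them equal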

Jon Skeet
  • I created the same function using double instead of float for the parameter types, and in this case min = 4.1400000000000006 (on both the x86 and x64 platform targets), so the only possible explanation here is what you said about how the comparison is performed for result2 (it's being done using double instead of float). I think this is not consistent - the compiler should perform the comparison using floats so that both results are equal. Thank you for your help! – vcRobe Apr 22 '22 at 13:49
  • @vcRobe: You could observe it even when using `double`, because within an expression it can use 80-bit precision. See https://github.com/dotnet/csharpstandard/blob/draft-v7/standard/types.md#837-floating-point-types. As for "the compiler should perform the comparison [...]" - dependency on the truncation is *usually* a red flag in terms of correctness, and it would come with a performance penalty. I certainly don't think that valid code should become slower in order to accommodate these expectations. – Jon Skeet Apr 22 '22 at 14:03
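Following up on that last comment: if consistent behaviour across x86 and x64 matters more than matching the truncated result, one option (my own sketch, not from the answer, and assuming using System; for Math) is to round the difference back to float explicitly before comparing. Note that for the values in the question this returns false, which is what exact arithmetic on the underlying float values gives:

public static bool AreApproximatelyEqual(float number, float originalNumber, float threshold)
{
    // The explicit cast discards any extra precision the runtime used for the
    // subtraction, so the comparison is always done on a genuine 32-bit value.
    float difference = (float)(number - originalNumber);
    return Math.Abs(difference) <= threshold;
}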