Unexpected result in a simple expression


I'm writing a simple math function to compare two numbers using .NET Framework 4.7.2.

The original function is this one:

public static bool AreNumbersEquals(float number, float originalNumber, float threshold) => 
(number >= originalNumber - threshold) && (number <= originalNumber + threshold);

But to my surprise, when I test it using this statement

var result = AreNumbersEquals(4.14f, 4.15f, 0.01f);

the returned value is false

so I split the function using this code:

namespace ConsoleApp1_netFramework
{
    internal class Program
    {
        static void Main(string[] args)
        {
            var qq = AreNumbersEquals(4.14f, 4.15f, 0.01f);
        }

        public static bool AreNumbersEquals(float number, float originalNumber, float threshold)
        {
            var min = originalNumber - threshold;
            var max = originalNumber + threshold;
            var minComparison = number >= min;
            var maxComparison = number <= max;

            // result1 is true (as expected)
            var result1 = minComparison && maxComparison;

            // result2 is false (why?)
            var result2 = number >= originalNumber - threshold && number <= originalNumber + threshold;

            return result2;
        }
    }
}

Now result1 is true, as expected, but result2 is false.

Can anyone explain this?

Update 1: I understand how floating-point numbers and floating-point arithmetic work at the CPU level. I'm interested in this particular case because, at a high level, the computations are the same, so I expected the same result from both ways of writing the comparison.

The current project I'm working on is a game, so double and decimal are avoided as much as possible due to the performance penalty involved in arithmetic computations.

Update 2: When compiled for the 64-bit architecture the condition returns true, but when compiled for the 32-bit architecture it returns false.

CodePudding user response:

Can anyone explain this?

Yes. For result1, you're assigning intermediate results to float variables, which forces them back to 32 bits, potentially truncating the result. (Since these are local variables, it's possible the results wouldn't actually be truncated; the specification is tricky on this point.)

For result2, you're performing the comparisons "inline" which allows all the arithmetic - and the comparison - to be done at a higher precision, potentially changing the results.

Fundamentally, 4.14f, 4.15f and 0.01f are not precisely 4.14, 4.15 and 0.01... so anything that assumes they are is likely to have some subtle problems. The precise values of those floating-point literals are:

  • 4.139999866485595703125
  • 4.150000095367431640625
  • 0.00999999977648258209228515625

As you can see, if you did the arithmetic by hand using those values, you would indeed find that the number is beyond the threshold. It's the loss of precision in intermediate values that makes the difference in your first test.
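You can reproduce both the exact literal values and the two comparison results in Python (a sketch: the hypothetical `f32` helper uses `struct` to round to IEEE-754 single precision, and Python's native doubles stand in for the higher-precision intermediate the inline evaluation is allowed to use):

```python
import struct
from decimal import Decimal

def f32(x):
    """Round a Python float (a double) to the nearest IEEE-754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

number = f32(4.14)
original = f32(4.15)
threshold = f32(0.01)
print(Decimal(number))     # 4.139999866485595703125 - the exact value of 4.14f

# Intermediate forced back to single precision (like the float local):
min_single = f32(original - threshold)
# Intermediate kept at higher precision (like the inline evaluation):
min_double = original - threshold

print(number >= min_single)  # True  - rounding snaps the minimum back down to 4.14f
print(number >= min_double)  # False - 4.14f is just below the exact minimum
```

The single-precision rounding of `original - threshold` happens to land exactly on the value of `4.14f`, which is why the version with float locals returns true while the higher-precision version returns false.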

CodePudding user response:

originalNumber - threshold will perform a floating-point calculation. In your case that means 4.15f - 0.01f, which is likely not exactly 4.14f, but something like 4.13999996 (I didn't check that number; it's just a rough guess).

To fix this, just make your threshold a tiny bit bigger:

threshold += 0.000001f;
(number >= originalNumber - threshold) && (number <= originalNumber + threshold);
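A sketch of that fix in Python (the hypothetical `f32` helper rounds to IEEE-754 single precision to mimic `float`): widening the threshold by a small epsilon absorbs the rounding error in the intermediate subtraction and addition, so the comparison no longer flips.

```python
import struct

def f32(x):
    """Round a Python float (a double) to the nearest IEEE-754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

def are_numbers_equal(number, original, threshold):
    # Widen the threshold slightly so rounding error in the
    # intermediate arithmetic cannot push the bound past `number`.
    threshold += 1e-6
    return original - threshold <= number <= original + threshold

print(are_numbers_equal(f32(4.14), f32(4.15), f32(0.01)))  # True
```

Note the epsilon must be chosen relative to the magnitude of the numbers involved; a fixed 1e-6 is only reasonable for values around this size.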