Simple example for floating point accuracy


I wanted to understand the accuracy issues that come up when storing 'currency' as a float. I understand the theory (to a good extent), but I wanted a concrete example that I can demonstrate to my colleagues.

I tried the following examples:

1) A C# port of an example from Medium

static void Main(string[] args)
{
    double total = 0.2;
    for (int i = 0; i < 100; i++)
    {
        total += 0.2;
    }
    Console.WriteLine("total = " + total); // Output is exactly 20.2 in both debug and run (release config) mode
    Console.ReadLine();
}

2) Jon Skeet's example from C# in Depth

using System;

class Test
{
    static float f;

    static void Main(string[] args)
    {
        f = Sum (0.1f, 0.2f);
        float g = Sum (0.1f, 0.2f);
        Console.WriteLine (f==g); // Output is always true in both debug and run (release config) mode
    }

    static float Sum (float f1, float f2)
    {
        return f1 + f2;
    }
}

The examples were run on .NET Framework 4.7.2 on Windows 11. But as you can see in the comments next to the Console.WriteLine calls, I couldn't reproduce the issues with the float datatype. What am I missing here?

Can I get some concrete examples to prove the theory in .NET?

CodePudding user response:

Here is an example:

Mac_3.2.57$cat floatFail.c
#include <stdio.h>

int main(void){
    float a = 0.1;
    float b = 1000000;

    printf("b=%0.100f\n", b);
    printf("a=%0.100f\n", a);
    printf("a+b=%0.100f\n", (b+a));
    printf("(b+a)*100=%0.100f\n", (b+a)*100);
    printf("b*100=%0.100f\n", b*100);
    printf("100*a=%0.100f\n", 100*a);
    printf("(b+a)*100 - b*100=%0.100f\n", (b+a)*100 - b*100);

    return(0);
}
Mac_3.2.57$cc floatFail.c
Mac_3.2.57$./a.out 
b=1000000.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
a=0.1000000014901161193847656250000000000000000000000000000000000000000000000000000000000000000000000000
a+b=1000000.1250000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
(b+a)*100=100000016.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
b*100=100000000.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
100*a=10.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
(b+a)*100 - b*100=16.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Mac_3.2.57$
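
Since the question asks for .NET, here is a rough C# port of the same demonstration. It is a sketch only: the explicit (float) casts force each intermediate back to single precision (the JIT is otherwise allowed to keep intermediates wider), and the exact digits printed depend on the runtime's formatting.

using System;

class FloatFail
{
    static void Main()
    {
        float a = 0.1f;                        // 0.1 has no exact binary float representation
        float b = 1000000f;

        float sum      = (float)(a + b);       // stored as 1000000.125f
        float sumTimes = (float)(sum * 100f);  // 100000012.5 rounds to 100000016f
        float bTimes   = (float)(b * 100f);    // exactly 100000000f

        Console.WriteLine("a                 = {0:F10}", (double)a);        // ~0.1000000015
        Console.WriteLine("a + b             = {0:F10}", (double)sum);      // 1000000.1250000000
        Console.WriteLine("(a+b)*100         = {0:F10}", (double)sumTimes); // 100000016
        Console.WriteLine("b*100             = {0:F10}", (double)bTimes);   // 100000000
        Console.WriteLine("(a+b)*100 - b*100 = {0:F10}", (double)(sumTimes - bTimes)); // 16, not 10
    }
}

With real numbers the last line would be 10; single-precision float gives 16, because the representation error in 0.1f and the rounding of the intermediate product accumulate.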

CodePudding user response:

As for an example, try adding up 1.0/N N times, where N is 3, 7, 11, 13, etc. (some prime other than 2 or 5). Print the sum with enough precision to distinguish its value: use hexadecimal output, or typically 17 significant decimal digits.
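
A rough C# sketch of that suggestion (the chosen values of N and the formatting are just one way to do it; "G17" asks for enough digits to round-trip a double):

using System;

class SumFractions
{
    static void Main()
    {
        foreach (int n in new[] { 3, 7, 11, 13 })
        {
            double sum = 0.0;
            for (int i = 0; i < n; i++)
            {
                sum += 1.0 / n;              // 1/n has no exact binary representation
            }
            // Print enough digits to distinguish the value; the default
            // format can hide the error by rounding it away.
            Console.WriteLine("n = {0,2}: sum = {1:G17}, sum == 1.0 is {2}",
                              n, sum, sum == 1.0);
        }
    }
}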


The issue you want to demonstrate is not special to money. The same issue applies to all floating-point math. Used incorrectly, FP fails for money just as it fails for other applications. Used correctly, FP works for money just as it works for other applications.

With FP and money, use 1.0 not to represent a major unit of currency, but the smallest unit (or even a decimal fraction of it). float lacks the precision for this scaling; use double.

Example: 1.0 may represent 1¢ instead of $1, or perhaps 1/100 of a cent, depending on the coding requirements of the application.
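
A minimal sketch of that scaling idea in C# (treating one cent as the smallest unit is an assumption about the application):

using System;

class ScaledMoney
{
    static void Main()
    {
        // Hold whole cents in a double rather than fractional dollars.
        // Integers up to 2^53 are exactly representable in a double,
        // so sums of whole cents stay exact at this scale.
        double priceInCents = 1999;          // $19.99
        double totalInCents = 0;

        for (int i = 0; i < 100; i++)
        {
            totalInCents += priceInCents;
        }

        Console.WriteLine(totalInCents);        // 199900, exact
        Console.WriteLine(totalInCents / 100);  // 1999 dollars, for display
    }
}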

Instead of comparing with f == g, code typically needs to round to the nearest unit first and then compare.
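
For instance, a small sketch of rounding to the nearest cent before comparing (rounding to two decimals is an assumption; use whatever the smallest unit is):

using System;

class RoundThenCompare
{
    static void Main()
    {
        double f = 0.1 + 0.2;   // 0.30000000000000004 in double
        double g = 0.3;

        Console.WriteLine(f == g);                                // False
        Console.WriteLine(Math.Round(f, 2) == Math.Round(g, 2));  // True after rounding to cents
    }
}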

If the language supports decimal floating point, this is easier than with binary floating point, but either works. Binary floating point simply takes more care to meet the exacting decimal requirements of money.
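
In .NET specifically, the decimal type is a base-10 floating-point type, so it makes the contrast easy to demonstrate (a sketch, not a claim that decimal is always the right choice):

using System;

class DecimalVsDouble
{
    static void Main()
    {
        double d = 0.1 + 0.2;     // binary floating point
        decimal m = 0.1m + 0.2m;  // decimal floating point

        Console.WriteLine(d == 0.3);            // False
        Console.WriteLine(d.ToString("G17"));   // 0.30000000000000004
        Console.WriteLine(m == 0.3m);           // True: 0.1m and 0.2m are exact
        Console.WriteLine(m);                   // 0.3
    }
}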

GTG
