Is storing currency in Java double (floating point), without any math, always accurate?


Of course no math should be done on them, because the outcome would not be accurate. Floating-point values are not suitable for that.

But what about just storing values? Personally, I'd go for String or Long, but it looks like I might sometimes be forced to interact with systems that insist on floating point types.

It looks like values from 0.00 to 2.00 are 100% accurate - see code below. But is this so? And why? Shouldn't there be problems already when I simply do double v = 0.01?

public static void main(final String[] args) {

    final DecimalFormat df = new DecimalFormat("0.0000000000000000000000000", DecimalFormatSymbols.getInstance(Locale.US));
    final BigDecimal aHundred = new BigDecimal("100");
    final BigDecimal oneHundredth = BigDecimal.ONE.divide(aHundred);
    for (int i = 0; i < 200; i++) {
        BigDecimal dec = oneHundredth;

        for (int ii = 0; ii < i; ii++) {
            dec = dec.add(oneHundredth);
        }

        final double v = dec.doubleValue();

        System.err.println(v);
        System.err.println(df.format(v));
    }
    System.exit(0);
}

Output:

0.01
0.0100000000000000000000000
0.02
0.0200000000000000000000000
0.03
0.0300000000000000000000000
...
1.38
1.3800000000000000000000000
1.39
1.3900000000000000000000000
1.4
1.4000000000000000000000000
1.41
1.4100000000000000000000000
...
1.99
1.9900000000000000000000000
2.0
2.0000000000000000000000000

CodePudding user response:

Is storing currency in Java double (floating point), without any math, always accurate?

If you represent the currency values as a multiple of the smallest unit of currency (for instance, cents), then you have effectively 53 bits of precision to work with ... which works out at about 9.0 x 10^15 cents, or 9.0 x 10^13 dollars.

(For scale, the US national debt is currently around 2.8 x 10^13 dollars.)
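
To illustrate that 53-bit limit, here is a small sketch of my own (not part of the original answer): whole numbers, such as a count of cents, are exact in a double only up to 2^53.

public class FiftyThreeBits {
    public static void main(final String[] args) {
        // 2^53 = 9,007,199,254,740,992: every whole number up to this point
        // (e.g. a count of cents) has an exact double representation.
        final double maxExact = 9007199254740992.0; // 2^53

        System.out.println(maxExact - 1.0);              // 9.007199254740991E15 (still exact)
        System.out.println(maxExact + 1.0 == maxExact);  // true: 2^53 + 1 is not representable
    }
}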

And if you try to represent currency values in (say) floating point dollars (using double), then most cent values simply cannot be represented precisely. Only multiples of 25 cents have a precise representation in binary floating point.
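
A quick way to see which cent values a double actually holds (my own sketch, not from the answer) is the BigDecimal(double) constructor, which exposes the exact binary value stored:

import java.math.BigDecimal;

public class ExactCents {
    public static void main(final String[] args) {
        // Multiples of a quarter are dyadic fractions, so they are exact...
        System.out.println(new BigDecimal(0.25)); // 0.25
        System.out.println(new BigDecimal(0.50)); // 0.5
        // ...but most other cent values are not:
        System.out.println(new BigDecimal(0.10));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}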

In short, it is potentially imprecise even if you are not performing arithmetic on the values.

CodePudding user response:

Converting from decimal to binary-based floating-point or vice-versa is math. It is an operation that rounds the result to the nearest representable value.

When you convert .01 to double, the result is exactly 0.01000000000000000020816681711721685132943093776702880859375. Java’s default formatting for displaying this may show it as “0.01”, but the actual value is 0.01000000000000000020816681711721685132943093776702880859375.
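
A sketch of mine (not the answerer's) that makes the difference visible: Java's default formatting prints the shortest decimal that maps back to the same double, while new BigDecimal(double) shows the value that is really stored.

import java.math.BigDecimal;

public class OneCent {
    public static void main(final String[] args) {
        final double v = 0.01;
        System.out.println(v);                 // 0.01 (shortest round-tripping decimal)
        System.out.println(new BigDecimal(v)); // the exact stored value:
        // 0.01000000000000000020816681711721685132943093776702880859375
    }
}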

The precision of Java’s double format is such that if any decimal numeral with at most 15 significant decimal digits is rounded to the nearest representable double and then that double is rounded to the nearest decimal numeral with 15 significant digits or fewer, the result will be the original number.

Therefore, you can use a double to store any decimal numeral with at most 15 significant digits (within the exponent range) and can recover the original numeral by converting it back to decimal. Beyond 15 digits, some numbers will be changed by the round trip.
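
A sketch along those lines (the values and helpers are my own assumptions): round a decimal string through a double and back using BigDecimal and MathContext. Fifteen significant digits survive the trip, while a 16-digit value such as 2^53 + 1 does not.

import java.math.BigDecimal;
import java.math.MathContext;

public class RoundTrip {
    public static void main(final String[] args) {
        // 15 significant digits: decimal -> double -> decimal returns the original.
        final double d = Double.parseDouble("12345.6789012345"); // 15 significant digits
        System.out.println(new BigDecimal(d).round(new MathContext(15)).toPlainString());
        // 12345.6789012345

        // 16 significant digits can already fail: 2^53 + 1 is not representable,
        // so it comes back as 2^53.
        final double d2 = Double.parseDouble("9007199254740993"); // 2^53 + 1
        System.out.println(new BigDecimal(d2).toPlainString());
        // 9007199254740992
    }
}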
