Consider a simple code snippet:
let a = 0.1;
let b = 0.2;
let c = 0.3;
let d = a + b;
console.log(c.toString()); //0.3
console.log(d.toString()); //0.30000000000000004
The explanation I could find was that 0.3 cannot be exactly represented as a double-precision floating-point number. But if that is true, how can c hold the value 0.3 without losing precision in the case of direct assignment?
CodePudding user response:
Neither value can be represented precisely. Use an IEEE 754 converter to see what's going on.
JavaScript numbers are represented as double-precision 64-bit numbers (IEEE 754).
0.3 alone gets interpreted as 0x3FD3333333333333, which is, if you calculate it by hand:
.299999999999999988897769753748...
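You don't have to take a converter's word for it - a short sketch like the following (assuming a modern engine with DataView and BigInt support; bitsOf is just a hypothetical helper name) dumps the raw IEEE 754 bit pattern of a number:
function bitsOf(x) {
  // Write the number into an 8-byte buffer, then read the same bytes
  // back as an unsigned 64-bit integer and format it in hex.
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  return view.getBigUint64(0).toString(16);
}
console.log(bitsOf(0.3)); // "3fd3333333333333"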
When displayed as a decimal, this gets rounded off to 0.3. It's not that the stored value is precisely equal to 0.3 (which can be shown if you do more calculations with it), but that, when displayed, it's shown as 0.3 - the decimal 0.3 is closer to this stored value than to the next representable double down, which is at 0.29999999999999993...
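If you want to see the stored value directly, ask for more digits than the default shortest-form printing gives you, e.g. with toPrecision:
console.log((0.3).toPrecision(21)); // 0.299999999999999988898
console.log(0.3);                   // 0.3 (shortest string that round-trips)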
0.1 + 0.2 is off by a bit - the last digit (in hex) is 4, not 3. 0x3FD3333333333334 is:
.300000000000000044408920985006...
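That one-bit difference is exactly the spacing between adjacent doubles at this magnitude (2^-54), which is why the direct comparison fails:
console.log(0.1 + 0.2 === 0.3); // false
console.log((0.1 + 0.2) - 0.3); // 5.551115123125783e-17
console.log(2 ** -54);          // 5.551115123125783e-17 (same value)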
Similarly, you can't use 0.30000000000000002 with direct assignment or direct logging, because that value falls between those two representable doubles, and the interpreter must choose one or the other:
console.log(0.30000000000000002);
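Run that and it should print 0.30000000000000004: the literal 0.30000000000000002 is rounded to the nearer of the two patterns above (0x3FD3333333333334), and the shortest decimal string that round-trips back to that double is 0.30000000000000004.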