How can I understand another floating-point paradox: 0.1 represented as a double appears more accurate than 0.1 represented as a long double?
In [134]: np.double(0.1)
Out[134]: 0.1
In [135]: np.longdouble(0.1)
Out[135]: 0.10000000000000000555
Answer:
It's not more accurate. The longdouble repr is just showing you more of the inaccuracy that was already present.
0.1 is a Python float, which has the same precision as numpy.double. It does not represent the exact decimal value 0.1, because binary floating point cannot represent that value in a finite number of bits. 0.1 actually represents this value:
>>> import decimal
>>> decimal.Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
which is the closest value to 0.1 that can be represented within the limits of the type's precision.
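A quick check, sketched below assuming a standard CPython build of NumPy (where numpy.double is a subclass of Python's float, so decimal.Decimal accepts it directly), confirms that numpy.double(0.1) carries exactly the same value as the Python float:
>>> import numpy as np
>>> import decimal
>>> # np.double(0.1) and the Python float 0.1 hold the identical binary value
>>> decimal.Decimal(np.double(0.1)) == decimal.Decimal(0.1)
True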
When you construct a numpy.double or a numpy.longdouble from 0.1, this is the value you get. For numpy.longdouble, this is not the best approximation of 0.1 the type could store: the literal was already rounded to double precision before the conversion to long double ever happened.
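You can observe this double rounding directly. The sketch below assumes a platform where numpy.longdouble is wider than double (for example, 80-bit x87 extended precision on x86 Linux); on platforms where long double and double are the same type, the comparison would instead be True:
>>> import numpy as np
>>> # the float literal is rounded to double precision first,
>>> # while the string is parsed at full long double precision
>>> np.longdouble(0.1) == np.longdouble('0.1')
False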
The repr of both numpy.double and numpy.longdouble shows the minimum number of decimal digits needed to produce an output that will reproduce the original value if converted back to the original type. For numpy.double, that's just "0.1", because 0.1 was already the closest double-precision floating-point value to the decimal 0.1. For numpy.longdouble, it requires more digits, because numpy.longdouble has more precision, so it can represent values closer to 0.1 than the Python float 0.1 can.
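This round-trip property is easy to verify; again, a sketch assuming a long double wider than double:
>>> import numpy as np
>>> x = np.longdouble(0.1)
>>> # the shortest repr parses back to exactly the same value
>>> np.longdouble(repr(x)) == x
True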
If you want the best long double approximation of 0.1, you should pass a string instead of a Python float:
numpy.longdouble('0.1')
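With the string form, the extra digits never appear, because the best long double approximation of 0.1 round-trips as just "0.1". Continuing the session from the question (output shown in the same style; exact formatting may vary by NumPy version and platform):
In [136]: np.longdouble('0.1')
Out[136]: 0.1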