I was writing code to calculate the number of digits in a given whole number.
I was initially using
math.log(num, 10)
but found that it gave an incorrect (approximate) value at num = 1000:
math.log(1000,10)
>2.9999999999999996
I understand that the above might be due to how floating-point arithmetic is done on computers, but the same computation works flawlessly using math.log10:
math.log10(1000)
>3.0
Is it correct to assume that log10 is more accurate than log, and to use it wherever a base-10 logarithm is involved instead of the more generalized log function?
CodePudding user response:
Python's math documentation specifically says:
math.log10(x)
Return the base-10 logarithm of x. This is usually more accurate than log(x, 10).
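To see that claim in practice, one can compare the two calls over exact powers of ten; a quick check along these lines (my own sketch, not part of the answer):

```python
import math

# For exact powers of ten, math.log10 returns an exact integer-valued
# float, while the two-argument math.log(x, 10) can drift slightly.
for exp in range(1, 16):
    x = 10 ** exp
    two_arg = math.log(x, 10)
    one_arg = math.log10(x)
    if two_arg != one_arg:
        print(f"10**{exp}: log(x, 10) = {two_arg!r}, log10(x) = {one_arg!r}")
```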
CodePudding user response:
According to the Python Math module documentation:
math.log(x,[base])
With one argument, return the natural logarithm of x (to base e). With two arguments, return the logarithm of x to the given base, calculated as log(x)/log(base).
Whereas in the math.log10
section:
math.log10(x)
Return the base-10 logarithm of x. This is usually more accurate than log(x, 10).
The discrepancy comes from the inaccuracy of floating-point arithmetic. If I take the two-argument method, which computes log(1000)/log(10), I get:
>>> log(1000)
6.907755278982137
>>> log(10)
2.302585092994046
>>> 6.907755278982137/2.302585092994046
2.9999999999999996
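The practical consequence for digit counting: if the count is derived by truncating the logarithm (the question does not show the exact code, so this is an assumed pattern), the two-argument form drops a digit at exact powers of ten while log10 does not:

```python
import math

x = 1000
# 2.9999999999999996 truncates to 2, giving an off-by-one digit count.
print(int(math.log(x, 10)) + 1)  # 3 (wrong: 1000 has 4 digits)
# 3.0 truncates to 3, giving the correct count.
print(int(math.log10(x)) + 1)    # 4 (correct)
```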