Numerical accuracy of an integer solution to an exponential equation


I have an algorithm that relies on integer inputs x, y and s.
For input checking and raising an exception on invalid arguments, I have to make sure that there is a natural number n such that x*s^n = y.
Or in words: how often do I have to chain-multiply x with s until I arrive at y?
And more importantly: do I arrive at y exactly?

This problem can be further abstracted by dividing by x: x*s^n = y => s^n = y/x => s^n = z, with z = y/x.
In general z is not an integer, but integer chain multiplication can only arrive at y if y is divisible by x. So this property can easily be tested first, after which z is guaranteed to be an integer and the problem reduces to solving s^n = z.
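This reduction can be sketched as follows (a minimal sketch; the helper name is mine, not from any library):

```python
def reduce_to_z(x, y):
    """Reduce the check x*s^n == y to s^n == z.

    Returns z = y // x, or None if y is not divisible by x,
    in which case no integer chain multiplication can reach y.
    """
    if x == 0 or y % x != 0:
        return None
    return y // x
```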

There is already a question related to that.

There are lots of solutions. Some are iterative, and some solve the equation with a logarithm and then either truncate, round, or compare with an epsilon. I am particularly interested in the solutions with logarithms. The general idea is:

from math import log

def check(z, s):
    n = log(z) / log(s)
    return n == int(n)

Comparing floating point numbers for equality does seem pretty sketchy, though. Under normal circumstances I would not count that as a general and exact solution to the problem. Answers that suggest this approach don't mention the precision issue, and answers that compare with an epsilon just pick an arbitrarily small number.

I wonder how robust this method (with straight equality) really is, because it seems to work pretty well and I couldn't break it by trial and error. And if it does break down at some point, how small or large does the epsilon have to be?

So basically my question is: can the logarithm approach be guaranteed to be exact under specific circumstances, e.g. a limited integer input range?

I have thought about this for a long time now, and I think it is possible that this solution is exact and robust, at least under some circumstances. But I don't have a proof.

My line of thinking was:
Can I find a combination of x,y,s so that the chain-multiply just barely misses y, which means that n will be very close to an integer but not quite?
The answer is no. Because x, y and s are integers, the multiplication will also be an integer. So if the result just barely misses y, it has to miss by at least 1.
Well, that is as far as I've gotten. My theory is that choosing only integers makes the calculation very precise. I would consider it a method with good numerical stability, and with very specific behaviour regarding that stability. So I believe it is possible that this calculation is precise enough to simply truncate all decimals. It would be amazing if someone could prove or disprove that.

If a guarantee for correctness can be given for a specific value range, I am interested in the general approach, but a fairly applicable range of values would be the positive part of int32 for the integers and double floating point precision.

Testing with an epsilon is also an option, but then the question is how small that epsilon has to be. This is probably related to the "miss by at least 1" logic.
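For comparison, the iterative approach sidesteps floating point entirely; a minimal sketch (the function name is mine), assuming positive inputs and s >= 2:

```python
def reaches_exactly(x, y, s):
    # Chain-multiply x by s in exact integer arithmetic until we
    # reach or overshoot y. By the "miss by at least 1" argument,
    # an overshoot in integers is never a near miss: it is off by
    # at least 1, so the final equality test is exact.
    if x <= 0 or y <= 0 or s < 2:
        return False
    v = x
    while v < y:
        v *= s
    return v == y
```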

CodePudding user response:

You’re right to be skeptical of floating point. Math libraries typically don’t provide correctly rounded transcendental functions (The Table Maker’s Dilemma), so the exact test is suspect. Indeed, it’s not difficult to find counterexamples (see the Python below).

Since the input z is an integer, however, we can do an error analysis to determine an appropriate epsilon. Using calculus, one can prove a bound

log(z + 1) − log(z) = log(1 + 1/z) ≥ 1/z − 1/(2z^2).

If log(z)/log(s) is not an integer, then z must be at least one away from a power of s, putting this bound in play. If 2 ≤ z, s < 2^31 (so both have representations as signed 32-bit integers), then log(z)/log(s) is at least (1/2^31 − 1/2^63)/log(2^31) away from an integer. An epsilon of 1.0e-12 is comfortably less than this, yet large enough that if we lose a couple of ulps (1 ulp is on the order of 3.6e-15 in the worst case here) to rounding, we don't get a false negative, even with a rather poor quality implementation of log.

import math
import random

# Search for an x where the exact-equality log test misclassifies
# a true power: x**2 is exactly a power of x with n = 2, yet the
# floating point quotient log(x**2)/log(x) comes out != 2.
while True:
    x = random.randrange(2, 2**15)
    if math.log(x**2) / math.log(x) != 2:
        print("#", x)
        break
# 19143
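Putting the error analysis together, the epsilon version can be sketched like this (the threshold 1.0e-12 comes from the bound above; the function name and guards are mine):

```python
import math

def is_power(z, s, eps=1.0e-12):
    # True iff z == s**n for some natural n, intended for 2 <= z, s < 2**31.
    # Round the float quotient to the nearest integer n and accept if it
    # lies within eps; per the analysis above, a non-power lands at least
    # about 2e-11 away from any integer, far outside eps, while rounding
    # error on a true power is only a few ulps, far inside it.
    if z < 1 or s < 2:
        return False
    nf = math.log(z) / math.log(s)
    n = round(nf)
    return n >= 0 and abs(nf - n) < eps
```

In particular, this accepts the counterexample found above (z = 19143**2, s = 19143) that the straight equality test can reject.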