In "Floating point linear interpolation", one user proposed this implementation of lerp:
```c
float lerp(float a, float b, float f)
{
    return (a * (1.0 - f)) + (b * f);
}
```
While another user proposed this implementation of lerp:
```c
float lerp(float a, float b, float f)
{
    return a + f * (b - a);
}
```
Apparently the latter implementation is inferior due to floating point precision loss.
I then looked on Wikipedia and it said for the former implementation:
// Precise method, which guarantees v = v1 when t = 1. This method is monotonic only when v0 * v1 < 0.
// Lerping between same values might not produce the same value
and for the latter:
// Imprecise method, which does not guarantee v = v1 when t = 1, due to floating-point arithmetic error.
// This method is monotonic. This form may be used when the hardware has a native fused multiply-add instruction.
This gives me a few questions:
Wikipedia states that "Lerping between same values might not produce the same value". I thought that, except for floating point accuracy, the functions are identical. Isn't
(a * (1.0 - f)) + (b * f) = a + f * (b - a)
a mathematical identity?
If not, what values would give different results in the two functions?
What does Wikipedia mean by monotonic? Why is one implementation monotonic while the other is not?
Are there other common implementations of lerp?
Edit: If the latter implementation suffers from floating point imprecision issues while not being really faster, why does it even exist?
CodePudding user response:
Wikipedia states that "Lerping between same values might not produce the same value". I thought that, except for floating point accuracy, the functions are identical. Isn't
(a * (1.0 - f)) + (b * f) = a + f * (b - a)
a mathematical identity?
If not, what values would give different results in the two functions?
a • (1.0−f) + (b • f) = a + f • (b−a) is a mathematical identity, but the result of computing a*(1.0-f) + b*f is not always equal to the result of computing a + f*(b-a).
For example, consider a floating-point format with a decimal base and three digits in the significand. Let a be 123, b be 223, and f be .124. Then 1.0-f is .876. Then a * .876 would have the real-number-arithmetic result 107.748, but 108 is produced since the result must be rounded to three significand digits. For b * f, we would have 27.652, but 27.7 is produced. Then 108 + 27.7 would produce 135.7 but produces 136.
On the other side, b-a produces 100. Then f*100 produces 12.4. Then a + 12.4 would produce 135.4 but produces 135.
So the results of the computations of the left and right sides, 136 and 135, are not equal.
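The same disagreement is easy to find in binary floating point by brute force. Here is a minimal C sketch of my own (not part of the original answer; the endpoint range, seed, and iteration count are arbitrary choices, and IEEE-754 single precision is assumed) that searches for inputs where the two expressions, evaluated entirely in float, produce different results:

```c
#include <stdio.h>
#include <stdlib.h>

/* The two formulations from the question, kept in float arithmetic. */
static float lerp1(float a, float b, float f) { return (a * (1.0f - f)) + (b * f); }
static float lerp2(float a, float b, float f) { return a + f * (b - a); }

int main(void)
{
    srand(12345);   /* fixed seed so runs are repeatable */
    for (int i = 0; i < 1000000; ++i)
    {
        /* Small integer endpoints and a random f in [0, 1]. */
        float a = (float) (rand() % 2001 - 1000);
        float b = (float) (rand() % 2001 - 1000);
        float f = (float) rand() / (float) RAND_MAX;
        float r1 = lerp1(a, b, f), r2 = lerp2(a, b, f);
        if (r1 != r2)
        {
            printf("a=%g b=%g f=%.9g: %.9g vs %.9g\n", a, b, f, r1, r2);
            return 0;
        }
    }
    printf("no disagreement found in this sample\n");
    return 0;
}
```

A disagreement typically turns up within a handful of iterations, mirroring the decimal example above.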
- What does Wikipedia mean by monotonic?
A function f(x) is strictly ascending if greater arguments produce greater results: x0 < x1 implies f(x0) < f(x1). It is weakly ascending if x0 < x1 implies f(x0) ≤ f(x1). It is strictly or weakly descending if x0 < x1 implies f(x0) > f(x1) or f(x0) ≥ f(x1), respectively. A function is strictly monotonic if it is strictly ascending or strictly descending. It is weakly monotonic if it is weakly ascending or weakly descending.
When a function is said to be monotonic, the author means it is strictly monotonic or weakly monotonic, but it is not clear which without context or explicit statement. In the context of floating-point arithmetic, weakly monotonic is usually meant, as floating-point rounding commonly breaks strong monotonicity.
In this use, Wikipedia means that, when considered as a function of t, v0 + t * (v1 - v0) is monotonic and (1 - t) * v0 + t * v1 is not.
Why is one implementation monotonic while the other is not?
To see why v0 + t * (v1-v0) is monotonic, consider that v1-v0 is fixed as t changes. Then t * (v1-v0) is t * c for some constant c. This is monotonic in t due to the nature of floating-point rounding: If t increases, the real-number-arithmetic result of t * c increases (for positive c; there is a symmetric argument for negative c), and the number it must round to either stays the same or increases. For example, if we were rounding to integers and considering 3, 3.1, 3.2, 3.3, 3.4, and so on, those would all round to 3. Then 3.5 rounds to 4 (using round-to-nearest, ties-to-even), 3.6 rounds to 4, and so on. The rounded result never decreases as its argument increases; rounding is weakly monotonic. So floating-point multiplication is monotonic.
Similarly, floating-point addition is monotonic; v0 + d never decreases as d increases. So v0 + t * (v1-v0) is monotonic.
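To check the multiplication claim empirically, here is a minimal C sketch of my own (not part of the original answer; it assumes IEEE-754 binary32 with round-to-nearest, and the constant c = 100 and the range [0.5, 1) are arbitrary choices). It steps t through every consecutive float in the range with nextafterf and verifies the rounded product never decreases:

```c
#include <math.h>   /* nextafterf; link with -lm on some platforms */
#include <stdio.h>

int main(void)
{
    const float c = 100.0f;   /* arbitrary positive constant */
    float prev = 0.5f * c;
    /* nextafterf(t, 2.0f) yields the very next representable float above t. */
    for (float t = 0.5f; t < 1.0f; t = nextafterf(t, 2.0f))
    {
        float r = t * c;
        if (r < prev)
        {
            printf("t * c decreased at t = %.9g\n", t);
            return 1;
        }
        prev = r;
    }
    printf("t * c never decreased over [0.5, 1)\n");
    return 0;
}
```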
In (1-t) * v0 + t * v1, 1-t is monotonic, but it is descending. So now we are adding a descending function, (1-t) * v0, to an ascending function, t * v1. This opens the door for opportunities where the descending function jumps to a new value but the ascending function does not, or not by as much. With our three-digit format, an example occurs with v0 = 123, v1 = 223, and t = .126 or .127:
|  | t = .126 | t = .127 |
| --- | --- | --- |
| 1-t | .874 | .873 |
| (1-t) * v0 | 107.502 → 108 | 107.379 → 107 |
| t * v1 | 28.098 → 28.1 | 28.321 → 28.3 |
| (1-t) * v0 + t * v1 | 136.1 → 136 | 135.3 → 135 |

So increasing t from .126 to .127 decreases the computed result from 136 to 135: the computation is not monotonic.
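The same kind of decrease can be hunted for mechanically in binary floating point. Below is a small C sketch of my own (not from the original answer), assuming IEEE-754 binary32 with round-to-nearest; it steps t through every representable float in [0.25, 1] and reports any spot where the two-multiplication form decreases. The endpoints mirror the decimal example, and whether a violation actually fires depends on the values and the platform:

```c
#include <math.h>   /* nextafterf; link with -lm on some platforms */
#include <stdio.h>

static float lerp_two_mul(float v0, float v1, float t)
{
    return (1.0f - t) * v0 + t * v1;
}

int main(void)
{
    const float v0 = 123.0f, v1 = 223.0f;   /* endpoints from the decimal example */
    float prev = lerp_two_mul(v0, v1, 0.25f);
    int violations = 0;
    for (float t = 0.25f; t <= 1.0f && violations < 5; t = nextafterf(t, 2.0f))
    {
        float r = lerp_two_mul(v0, v1, t);
        if (r < prev)   /* output went down while t went up */
        {
            printf("decrease at t = %.9g: %.9g -> %.9g\n", t, prev, r);
            ++violations;
        }
        prev = r;
    }
    if (violations == 0)
        printf("no decrease found for these endpoints\n");
    return 0;
}
```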
- Are there other common implementations of lerp?
I am unable to comment.
If the latter implementation suffers from floating point imprecision issues while not being really faster, why does it even exist?
Both methods experience floating-point rounding issues. v0 + t * (v1 - v0) has the problem that it may not equal v1 when t is 1, since a rounding error may occur in v1 - v0, so that v0 + (v1 - v0) does not restore v1. The other has the problem that it may not be monotonic, as shown above.
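As a concrete illustration of the endpoint failure (my own example, assuming IEEE-754 binary32): choose a so large that the rounding of b - a swallows b entirely, and the result misses b even when f is exactly 1.

```c
#include <stdio.h>

static float lerp(float a, float b, float f) { return a + f * (b - a); }

int main(void)
{
    /* Floats with magnitude in [2^23, 2^24) are spaced 1 apart, so
       b - a rounds to exactly -16777216.0f and the 0.1 is lost. */
    float a = 16777216.0f;   /* 2^24 */
    float b = 0.1f;
    /* Prints 0 for the lerp result, not 0.100000001. */
    printf("lerp(a, b, 1.0f) = %.9g, but b = %.9g\n", lerp(a, b, 1.0f), b);
    return 0;
}
```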