6

In Floating point linear interpolation, one user proposed this implementation of lerp:

float lerp(float a, float b, float f) 
{
    return (a * (1.0 - f)) + (b * f);
}

While another user proposed this implementation of lerp:

float lerp(float a, float b, float f)
{
    return a + f * (b - a);
}

Apparently the latter implementation is inferior due to floating point precision loss.

I then looked on Wikipedia and it said for the former implementation:

// Precise method, which guarantees v = v1 when t = 1. This method is monotonic only when v0 * v1 < 0.
// Lerping between same values might not produce the same value

and for the latter:

// Imprecise method, which does not guarantee v = v1 when t = 1, due to floating-point arithmetic error.
// This method is monotonic. This form may be used when the hardware has a native fused multiply-add instruction.

This gives me a few questions:

  1. Wikipedia states that "Lerping between same values might not produce the same value". I thought that, except for floating point accuracy, the functions are identical. Isn't

    (a * (1.0 - f)) + (b * f) = a + f * (b - a)
    

    a mathematical identity?

    If not, what values would give different results in the two functions?

  2. What does Wikipedia mean by monotonic? Why is one implementation monotonic while the other is not?

  3. Are there other common implementations of lerp?

Edit: If the latter implementation suffers from floating point imprecision issues while not being really faster, why does it even exist?

tempdev nova
  • Even if both expressions are mathematically equivalent, they are not for floating point math. Every operation can result in rounding errors (cf https://stackoverflow.com/questions/588004/is-floating-point-math-broken). Only some operations are always exact, e.g. `x*1.0 == x`, `x+0.0==x` , and for finite `x`: `x*0.0 == 0.0` and `x-x == 0.0` (I'm ignoring the difference between `+0.0` and `-0.0`). – chtz Jun 18 '22 at 09:35
    Yes, there are other common lerping variants. See also this [question on accurate linear interpolation](https://math.stackexchange.com/questions/907327/accurate-floating-point-linear-interpolation) on companion site Mathematics Stack Exchange. For a definition of monotonicity see, for example, [Math World](https://mathworld.wolfram.com/MonotonicFunction.html) – njuffa Jun 18 '22 at 09:35
  • My first question was: are they an identity? Because Wikipedia makes it sound like they aren't. – tempdev nova Jun 18 '22 at 09:36
  • Floating point representation becomes thicker as value increases, so when you subtract values sufficiently high and close to each other, the error on the result can be surprisingly high – MatG Jun 18 '22 at 09:44
  • Ok, I think I made the incorrect assumption that floating point arithmetic and math are always interchangeable. I always thought that floating point inaccuracy was just a small thing to look out for with no real impact, but it seems I was massively wrong. – tempdev nova Jun 18 '22 at 10:15

1 Answer

6
  1. Wikipedia states that "Lerping between same values might not produce the same value". I thought that, except for floating point accuracy, the functions are identical. Isn't

    (a * (1.0 - f)) + (b * f) = a + f * (b - a)
    

    a mathematical identity?

If not, what values would give different results in the two functions?

a • (1.0 − f) + (b • f) = a + f • (b − a) is a mathematical identity, but the result of computing a*(1.0-f) + b*f is not always equal to the result of computing a + f*(b-a).

For example, consider a floating-point format with a decimal base and three digits in the significand. Let a be 123, b be 223, and f be .124.

Then 1.0-f is .876. The real-number-arithmetic result of a * .876 would be 107.748, but 108 is produced, since the result must be rounded to three significand digits. For b * f, the real-number result would be 27.652, but 27.7 is produced. Then 108 + 27.7 would be 135.7, but 136 is produced.

For the other expression, b-a produces 100. Then f*100 produces 12.4. Then a + 12.4 would be 135.4, but 135 is produced.

So the results of the computations of the left and right sides, 136 and 135, are not equal.
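The same kind of disagreement occurs with binary floating-point. As an illustrative sketch (the endpoints and step count are arbitrary, and which factors disagree, if any, depends on the platform's float format and rounding), a small C program can scan interpolation factors and report where the two formulations produce different results:

#include <stdio.h>

/* The two formulations from the question. */
static float lerp_precise(float a, float b, float f) { return a * (1.0f - f) + b * f; }
static float lerp_short(float a, float b, float f)   { return a + f * (b - a); }

int main(void)
{
    float a = 123.0f, b = 223.0f;            /* arbitrary endpoints */
    for (int i = 0; i <= 1000; ++i)
    {
        float f = (float) i / 1000.0f;
        float p = lerp_precise(a, b, f);
        float s = lerp_short(a, b, f);
        if (p != s)                          /* the two roundings differ here */
            printf("f = %.9g: %.9g vs %.9g\n", f, p, s);
    }
    return 0;
}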

  2. What does Wikipedia mean by monotonic?

A function f(x) is strictly ascending if greater arguments produce greater results: x0 < x1 implies f(x0) < f(x1). It is weakly ascending if x0 < x1 implies f(x0) ≤ f(x1). It is strictly or weakly descending if x0 < x1 implies f(x0) > f(x1) or f(x0) ≥ f(x1), respectively. A function is strictly monotonic if it is strictly ascending or strictly descending. It is weakly monotonic if it is weakly ascending or weakly descending.

When a function is said to be monotonic, the author means it is strictly monotonic or weakly monotonic, but it is not clear which without context or an explicit statement. In the context of floating-point arithmetic, weakly monotonic is usually meant, as rounding commonly breaks strict monotonicity.

In this use, Wikipedia means that, when considered as a function of t, v0 + t * (v1 - v0) is monotonic and (1 - t) * v0 + t * v1 is not.

Why is one implementation monotonic while the other is not?

To see why v0 + t * (v1-v0) is monotonic, consider that v1-v0 is fixed as t changes. Then t * (v1-v0) is t * c for some constant c. This is monotonic in t due to the nature of floating-point rounding: If t increases, the real-number-arithmetic result of t * c increases (for positive c; there is a symmetric argument for negative c), and the number it must be rounded to either stays the same or increases. For example, if we were rounding to integers and considering 3, 3.1, 3.2, 3.3, 3.4, and so on, those would all round to 3. Then 3.5 rounds to 4 (using round-to-nearest, ties-to-even), 3.6 rounds to 4, and so on. The result of rounding never decreases as its argument increases; it is weakly monotonic. So floating-point multiplication by a fixed factor is monotonic.

Similarly, floating-point addition is monotonic: v0 + d never decreases as d increases. So v0 + t * (v1-v0) is monotonic.

In (1-t) * v0 + t * v1, 1-t is monotonic, but it is descending. So now we are adding a descending function, (1-t) * v0, to an ascending function, t * v1. This opens the door to cases where rounding makes the descending term drop by more than the ascending term gains, so the sum decreases even though t increased. With our three-digit format, an example occurs with v0 = 123, v1 = 223, and t = .126 or .127:

                         t = .126        t = .127
    1-t                  .874            .873
    (1-t) * v0           107.502 → 108   107.379 → 107
    t * v1               28.098 → 28.1   28.321 → 28.3
    (1-t) * v0 + t * v1  136.1 → 136     135.3 → 135
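The same effect can be probed in ordinary binary floats. The sketch below (arbitrary endpoints, coarse scan; any counts it reports depend on the platform's float format and rounding) counts how often each form's output decreases even though t increased, which for v1 > v0 is a monotonicity violation:

#include <stdio.h>

static float lerp_precise(float v0, float v1, float t) { return (1.0f - t) * v0 + t * v1; }
static float lerp_short(float v0, float v1, float t)   { return v0 + t * (v1 - v0); }

int main(void)
{
    const float v0 = 123.0f, v1 = 223.0f;   /* arbitrary, with v1 > v0 */
    const int steps = 1000000;
    int bad_precise = 0, bad_short = 0;
    float prev_p = lerp_precise(v0, v1, 0.0f);
    float prev_s = lerp_short(v0, v1, 0.0f);
    for (int i = 1; i <= steps; ++i)
    {
        float t = (float) i / (float) steps;
        float p = lerp_precise(v0, v1, t);
        float s = lerp_short(v0, v1, t);
        if (p < prev_p) ++bad_precise;      /* output went down while t went up */
        if (s < prev_s) ++bad_short;
        prev_p = p;
        prev_s = s;
    }
    printf("decreases: precise form %d, short form %d\n", bad_precise, bad_short);
    return 0;
}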
  3. Are there other common implementations of lerp?

As noted by njuffa in a comment, some implementations may use a fused multiply-add, which computes a•b + c with only one rounding error, in contrast to one rounding error for the multiply and another for the add. Such an operation is provided by the standard C library routine fma, although it may be slow on platforms without hardware support for it. So the linear interpolation may be computed as fma(t, v1, fma(-t, v0, v0)), which nominally computes t*v1 + (-t*v0 + v0).

Algebraically, that would be equivalent to t*v1 + (1-t)*v0, but I do not have at hand any comments about the mathematical properties resulting from this fma computation.
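As a minimal sketch, that fma form can be written with fmaf from the C standard library's <math.h> (this is just a transcription of the expression above, not a vetted library implementation):

#include <math.h>

/* Sketch of the fma-based lerp: t*v1 + (-t*v0 + v0),
   with each multiply-add rounded only once by fmaf. */
static float lerp_fma(float v0, float v1, float t)
{
    return fmaf(t, v1, fmaf(-t, v0, v0));
}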

If the latter implementation suffers from floating point imprecision issues while not being really faster, why does it even exist?

Both methods experience floating-point rounding issues. v0 + t * (v1 - v0) has the problem that it may not equal v1 when t is 1, since a rounding error may occur in v1 - v0, so that v0 + (v1 - v0) does not restore v1. The other has the problem that it may not be monotonic, as shown above.
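As an illustration of the first problem, here is a sketch with endpoints chosen so that, on a typical IEEE-754 binary32 implementation with round-to-nearest, v1 - v0 rounds to -v0 and the t = 1 result misses v1 entirely (behaviour may differ on other platforms):

#include <stdio.h>

int main(void)
{
    float v0 = 1e20f, v1 = 1.0f;
    /* v1 - v0 rounds to -v0, so v0 + 1 * (v1 - v0) yields 0 rather than v1. */
    float r = v0 + 1.0f * (v1 - v0);
    printf("v1 = %g, lerp at t = 1 = %g, equal: %d\n", v1, r, r == v1);
    return 0;
}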

Eric Postpischil
  • By observation (not mathematical proof), the simple `fma()`-based lerp delivers results that are faithfully rounded, hit either endpoint for t in {0,1}, but are *not* monotonic. The performance of this variant is a major benefit on platforms with hardware support for FMA. – njuffa Jun 23 '22 at 18:19