6

Given two segment endpoints A and B (in two dimensions), I would like to perform linear interpolation based on a value t, i.e.:

C = A + t(B-A)

In the ideal world, A, B and C should be collinear. However, we are operating with limited floating-point here, so there will be small deviations. To work around numerical issues with other operations I am using robust adaptive routines originally created by Jonathan Shewchuk. In particular, Shewchuk implements an orientation function orient2d that uses adaptive precision to exactly test the orientation of three points.

Here is my question: is there a known procedure for computing the interpolation in floating-point math so that the result lies exactly on the line between A and B? I care less about the accuracy of the interpolation itself and more about the resulting collinearity. In other words, it's OK if C is shifted around a bit as long as collinearity is satisfied.

MrMobster
  • 2
    if collinearity is really more important than accuracy, let C = A. Otherwise, drop the idea. –  Oct 01 '16 at 14:29
  • I would assume that you need more precision for *C* to make this work. If the coordinates of *A* and *B* are native doubles, one can probably represent the coordinates of a suitable *C* as a pair of doubles per coordinate. But that would mean your space requirements will grow exponentially as you use such points as the input of another interpolation step. – MvG Oct 01 '16 at 20:01
  • 1
    why not compute `C=A+t.(B-A)` and then search the region around `C`, selecting the best `C` for which `dot(C-A,B-A)/(|C-A|.|B-A|)` is closest to one. You can also try to make `cross(C-A,B-A)` minimal (the area of the triangle is minimal). For this computation you can use 2 doubles per value to enhance precision without the need of having those for all the points ... – Spektre Oct 02 '16 at 07:46
  • @Spektre, that was my idea, to use nextafter() to search around C until orient(A, C', B) ==0. But it will probably be very slow, plus I am sure that there are a number of surprise edge cases. I am afraid that Yves's post might be spot on. I was just wondering whether there are some papers that deal with this issue (because most I have read just dismiss such questions from the onset). – MrMobster Oct 02 '16 at 19:56
  • @MrMobster I don't think it would be too slow. You do not have to search a big area just a circle/square around C with size of few `ulp` of the coordinates. To boost precision you can also use relative coordinates so point `(0,0,0)` is `A,B` or `(A+B)/2` you would be surprised how much it can do see [ray and ellipsoid intersection accuracy improvement](http://stackoverflow.com/q/25470493/2521214) I do not see any edge cases at all but yes you can not expect full match only the best fit – Spektre Oct 03 '16 at 07:19
  • @MrMobster btw you can try to do a modified integer DDA on fixed point `floats` (without changing their exponent) to achieve the `ulp` precision. it uses just `+,-,<` but that will need `for` loop so it would be slower `O(|AC|)` – Spektre Oct 04 '16 at 07:15
  • @MrMobster did you manage to find a good solution to this problem? :) – AMA Sep 04 '19 at 22:38
  • @AMA Kind of. I decided to simply split AB into AC and BC, which approximate the original AB. The main issue with this is that it does change the angles a little bit, and so if you have a lot of segments in a cramped area, the newly created segments may end up intersecting their neighbours. I deal with this by imposing a minimal limit on the angles of segments that share an endpoint. Works quite well and is still very fast. – MrMobster Sep 05 '19 at 09:19

1 Answer

5

The bad news

The request can't be satisfied. There are values of A and B for which NO value of t other than 0 and 1 makes the exact value of lerp(A, B, t) representable as a float.

A trivial example in single precision is x1 = 12345678.f and x2 = 12345679.f. Regardless of the values of y1 and y2, the required result must have an x component between 12345678.f and 12345679.f, and there's no single-precision float between these two.

The (sorta) good news

The exact interpolated value, however, can be represented as the sum of 5 floating-point values (vectors in the case of 2D): one for the formula's result, one for the rounding error of each of its three operations [1], and one for the error of multiplying the first error by t. I'm not sure if that will be useful to you. Here's a 1D C version of the algorithm in single precision that uses fused multiply-add to calculate the product error, for simplicity:

#include <math.h>

float exact_sum(float a, float b, float *err)
{
    /* Knuth's TwoSum: sum is the rounded result, *err the exact
       rounding error, so that a + b == sum + *err exactly. */
    float sum = a + b;
    float z = sum - a;
    *err = a - (sum - z) + (b - z);
    return sum;
}

float exact_mul(float a, float b, float *err)
{
    /* TwoProduct via FMA: a * b == prod + *err exactly. */
    float prod = a * b;
    *err = fmaf(a, b, -prod);
    return prod;
}

float exact_lerp(float A, float B, float t,
                 float *err1, float *err2, float *err3, float *err4)
{
    float diff = exact_sum(B, -A, err1);    /* B - A, error in err1   */
    float prod = exact_mul(diff, t, err2);  /* t*(B - A), error err2  */
    *err1 = exact_mul(*err1, t, err4);      /* scale err1 by t too    */
    return exact_sum(A, prod, err3);        /* A + t*(B - A)          */
}

In order for this algorithm to work, operations need to conform to IEEE-754 semantics in round-to-nearest mode. That's not guaranteed by the C standard, but the GNU gcc compiler can be instructed to do so, at least in processors supporting SSE2 [2][3].
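For example, with gcc on 32-bit x86 one can request SSE arithmetic explicitly (the flags below are from the gcc manual; the file name is hypothetical, and on x86-64 SSE math is already the default):

```shell
# Force IEEE-754-conforming SSE arithmetic instead of x87 extended precision.
gcc -std=c99 -O2 -msse2 -mfpmath=sse exact_lerp.c -o exact_lerp -lm
```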

It is guaranteed that the arithmetic addition of (result + err1 + err2 + err3 + err4) will be equal to the desired result; however, there is no guarantee that the floating-point addition of these quantities will be exact.

To use the above example, exact_lerp(12345678.f, 12345679.f, 0.300000011920928955078125f, &err1, &err2, &err3, &err4) returns a result of 12345678.f and err1, err2, err3 and err4 are 0.0f, 0.0f, 0.300000011920928955078125f and 0.0f respectively. Indeed, the correct result is 12345678.300000011920928955078125 which can't be represented as a single-precision float.

A more convoluted example: exact_lerp(0.23456789553165435791015625f, 7.345678806304931640625f, 0.300000011920928955078125f, &err1, &err2, &err3, &err4) returns 2.3679010868072509765625f and the errors are 6.7055225372314453125e-08f, 8.4771045294473879039287567138671875e-08f, 1.490116119384765625e-08f and 2.66453525910037569701671600341796875e-15f. These numbers add up to the exact result, which is 2.36790125353468550173374751466326415538787841796875 and can't be exactly stored in a single-precision float.

All numbers in the examples above are written using their exact values, rather than a number that approximates to them. For example, 0.3 can't be represented exactly as a single-precision float; the closest one has an exact value of 0.300000011920928955078125 which is the one I've used.

It might be possible that if you calculate err1 + err2 + err3 + err4 + result (in that order), you get an approximation that is considered collinear in your use case. Perhaps worth a try.

References

Pedro Gimeno