5

I have a large world, about 5,000,000 x 1,000,000 units. The camera can be near some object or far enough away to see the whole world.
I get the mouse position in world coordinates by unprojecting (Z comes from the depth buffer). The problem is that this involves a matrix inverse. When big and small numbers are used at the same time (e.g. translating far from the origin while scaling to see more of the world), the calculations become unstable.

To gauge the accuracy of this inverse matrix I look at the determinant. Ideally it will never be zero, due to the nature of transformation matrices. I know that a small 'det' means nothing on its own; it can simply be due to small values in the matrix. But it can also be a sign that the numbers are going wrong.

I also know I can compute the inverse by inverting each transformation and multiplying the results. Does that provide more accuracy?

How can I tell if my matrix is becoming degenerate or suffering numerical issues?

Ripi2
  • How do you set the far and near clipping planes? – Malcolm McLean Feb 17 '17 at 00:13
  • @Malcolm `near = distance(camera, centerOfWorld) - radiusOfWorld` and `far = distance(camera, centerOfWorld) + radiusOfWorld`, both when outside the bounding box. When inside, `near = nearMin` (say = 1 unit, to see detail) and `far = 2*radiusOfWorld`. I don't bother with Z-fighting in this case. – Ripi2 Feb 17 '17 at 00:30
  • http://wiki.unity3d.com/index.php?title=Floating_Origin – genpfault Feb 17 '17 at 14:02
  • You can look at the condition number, which is the ratio of the max to min eigenvalue of the matrix. Matrices with high condition numbers will perform poorly. https://en.wikipedia.org/wiki/Condition_number – duffymo Feb 17 '17 at 15:46
  • @duffymo I like the idea of derivatives... Will play with it. – Ripi2 Feb 17 '17 at 16:44
  • Derivatives? No, eigenvalues. Use an iterative method that only calculates one at a time, not all. Get only the min and max. Why waste cycles calculating eigenvalues that you won't use? Even better - find a library that exposes a method to calculate condition number for you. Next step: What will you do with the new information? – duffymo Feb 17 '17 at 16:47
  • http://math.stackexchange.com/questions/271864/power-iteration-smallest-eigenvalue – duffymo Feb 17 '17 at 16:53
  • @duffymo In the link they also speak about derivatives as a way to tell how much change is expected in the result for a small change in the input data. So, how is a small variation in NDC coordinates reflected in world space? – Ripi2 Feb 17 '17 at 16:53
  • Yes, I know. I don't know how to quantify the effect on your coordinates. Like I said, what will you do with that new information? – duffymo Feb 17 '17 at 16:54
  • @duffymo: I don't know. I suppose I would compare those eigenvalues with normal, well-conditioned ones obtained from "good" previous experiences. – Ripi2 Feb 17 '17 at 16:57

2 Answers

7

For starters, see Understanding 4x4 homogeneous transform matrices.

  1. Improving accuracy for cumulative matrices (Normalization)

    To avoid degeneration of a transform matrix, select one axis as the main one. I usually choose Z, as it is usually the view or forward direction in my apps. Then exploit the cross product to recompute/normalize the remaining axes (which should be perpendicular to each other and, unless scaling is used, also of unit length). This can be done only for orthonormal matrices, so no skew or projections ... Orthogonal matrices must be scaled to orthonormal first, then inverted, and then scaled back to make this usable.

    You do not need to do this after every operation; just keep a counter of operations done on each matrix, and once some threshold is crossed, normalize it and reset the counter.

    To detect degeneration of such matrices you can test for orthogonality with the dot product between any two axes (it should be zero or very near it). For orthonormal matrices you can also test that the axis direction vectors are of unit length (a small check along these lines is sketched after the normalization code below).

    Here is what my transform matrix normalization looks like (for orthonormal matrices) in C++:

    double reper::rep[16]; // this is my transform matrix stored as member in `reper` class
    //---------------------------------------------------------------------------
    void reper::orto(int test) // test is for overriding the operation counter
    {
        double   x[3],y[3],z[3]; // space for axis direction vectors
        if ((cnt>=_reper_max_cnt)||(test)) // if operation count reached or override forced
        {
            axisx_get(x);      // obtain axis direction vectors from matrix
            axisy_get(y);
            axisz_get(z);
            vector_one(z,z);   // Z = Z / |z|
            vector_mul(x,y,z); // X = Y x Z  ... perpendicular to y,z
            vector_one(x,x);   // X = X / |X|
            vector_mul(y,z,x); // Y = Z x X  ... perpendicular to z,x
            vector_one(y,y);   // Y = Y / |Y|
            axisx_set(x);      // copy new axis vectors into matrix
            axisy_set(y);
            axisz_set(z);
            cnt=0;             // reset operation counter
        }
    }
    
    //---------------------------------------------------------------------------
    void reper::axisx_get(double *p)
    {
        p[0]=rep[0];
        p[1]=rep[1];
        p[2]=rep[2];
    }
    //---------------------------------------------------------------------------
    void reper::axisx_set(double *p)
    {
        rep[0]=p[0];
        rep[1]=p[1];
        rep[2]=p[2];
        cnt=_reper_max_cnt; // mark normalization as pending for the next operation that needs it
    }
    //---------------------------------------------------------------------------
    void reper::axisy_get(double *p)
    {
        p[0]=rep[4];
        p[1]=rep[5];
        p[2]=rep[6];
    }
    //---------------------------------------------------------------------------
    void reper::axisy_set(double *p)
    {
        rep[4]=p[0];
        rep[5]=p[1];
        rep[6]=p[2];
        cnt=_reper_max_cnt; // mark normalization as pending for the next operation that needs it
    }
    //---------------------------------------------------------------------------
    void reper::axisz_get(double *p)
    {
        p[0]=rep[ 8];
        p[1]=rep[ 9];
        p[2]=rep[10];
    }
    //---------------------------------------------------------------------------
    void reper::axisz_set(double *p)
    {
        rep[ 8]=p[0];
        rep[ 9]=p[1];
        rep[10]=p[2];
        cnt=_reper_max_cnt; // mark normalization as pending for the next operation that needs it
    }
    //---------------------------------------------------------------------------
    

    The vector operations look like this:

    void  vector_one(double *c,double *a) // c = a / |a|
    {
        // divide() is my division helper (not shown here); a plain '/' works if |a| is never zero
        double l=divide(1.0,sqrt((a[0]*a[0])+(a[1]*a[1])+(a[2]*a[2])));
        c[0]=a[0]*l;
        c[1]=a[1]*l;
        c[2]=a[2]*l;
    }
    
    void  vector_mul(double *c,double *a,double *b)
    {
        double   q[3];
        q[0]=(a[1]*b[2])-(a[2]*b[1]);
        q[1]=(a[2]*b[0])-(a[0]*b[2]);
        q[2]=(a[0]*b[1])-(a[1]*b[0]);
        for(int i=0;i<3;i++) c[i]=q[i];
    }
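
    Here is a minimal sketch of the degeneration test described above, assuming the same column-major layout as rep[] (X axis in m[0..2], Y in m[4..6], Z in m[8..10]); the tolerance eps is arbitrary:

    #include <math.h>

    // returns true while the upper 3x3 part of a column-major 4x4 matrix m[16]
    // is still orthonormal within tolerance eps (axes perpendicular and unit length)
    bool is_orthonormal(const double *m,double eps=1e-6)
    {
        const double *x=m+0,*y=m+4,*z=m+8;          // axis direction vectors
        auto dot=[](const double *a,const double *b)
            { return (a[0]*b[0])+(a[1]*b[1])+(a[2]*b[2]); };
        if (fabs(dot(x,y))>eps) return false;       // axes must be (nearly) perpendicular
        if (fabs(dot(y,z))>eps) return false;
        if (fabs(dot(z,x))>eps) return false;
        if (fabs(dot(x,x)-1.0)>eps) return false;   // and (nearly) unit length
        if (fabs(dot(y,y)-1.0)>eps) return false;
        if (fabs(dot(z,z)-1.0)>eps) return false;
        return true;
    }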
    
  2. Improving accuracy for non-cumulative matrices

    Your only choice is to use at least double precision for your matrices. The safest option is to use GLM or your own matrix math based on at least the double data type (like my reper class).

    A cheap alternative is to use the double-precision functions like

    glTranslated
    glRotated
    glScaled
    ...
    

    which helps in some cases but is not safe, as the OpenGL implementation can truncate the values to float. Also, there are no 64-bit HW interpolators yet, so all interpolated results between pipeline stages are truncated to floats.

    Sometimes a relative reference frame helps (it keeps the operations on values of similar magnitude; a small sketch of the idea follows below). For example see:

    ray and ellipsoid intersection accuracy improvement
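
    Here is a minimal sketch of that relative-reference-frame idea, assuming you keep world positions in double on the CPU; the names (Vec3d, Vec3f, camera_relative) are just illustrative. The big subtraction (object position minus camera position) is done in double, so only small, similar-magnitude numbers are cast to float for the GPU:

    struct Vec3d { double x,y,z; };     // world-space position kept in double
    struct Vec3f { float  x,y,z; };     // what actually goes into the float matrices

    Vec3f camera_relative(const Vec3d &objPos,const Vec3d &camPos)
    {
        // the subtraction uses the ~15-16 significant digits of double;
        // the result is just the (small) offset from the camera,
        // so the cast to float loses very little
        return Vec3f{ float(objPos.x-camPos.x),
                      float(objPos.y-camPos.y),
                      float(objPos.z-camPos.z) };
    }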

    Also, in case you are using your own matrix math functions, you have to consider the order of operations so that you always lose the smallest possible amount of accuracy.

  3. Pseudo inverse matrix

    In some cases you can avoid computing the inverse matrix via determinants, the Horner scheme, or Gauss elimination, because you can exploit the fact that the transpose of an orthonormal rotation matrix is also its inverse. Here is how it is done:

    void matrix_inv(GLfloat *a,GLfloat *b) // a[16] = Inverse(b[16])
    {
        GLfloat x,y,z;
        // transpose of rotation matrix
        a[ 0]=b[ 0];
        a[ 5]=b[ 5];
        a[10]=b[10];
        x=b[1]; a[1]=b[4]; a[4]=x;
        x=b[2]; a[2]=b[8]; a[8]=x;
        x=b[6]; a[6]=b[9]; a[9]=x;
        // copy projection part
        a[ 3]=b[ 3];
        a[ 7]=b[ 7];
        a[11]=b[11];
        a[15]=b[15];
        // convert origin: new_pos = - new_rotation_matrix * old_pos
        x=(a[ 0]*b[12])+(a[ 4]*b[13])+(a[ 8]*b[14]);
        y=(a[ 1]*b[12])+(a[ 5]*b[13])+(a[ 9]*b[14]);
        z=(a[ 2]*b[12])+(a[ 6]*b[13])+(a[10]*b[14]);
        a[12]=-x;
        a[13]=-y;
        a[14]=-z;
    }
    

    So the rotational part of the matrix is transposed, the projection part stays as it was, and the origin position is recomputed so that A*inverse(A) = unit_matrix. This function is written so it can also be used in place, so calling

    GLfloat a[16]={ values,... };
    matrix_inv(a,a);
    

    leads to valid results too. This way of computing the inverse is quicker and numerically safer, as it requires far fewer operations (no recursion, no reductions, no divisions). Of course this works only for orthonormal homogeneous 4x4 matrices!

  4. Detection of wrong inverse

    So if you have a matrix A and its inverse B, then:

    A*B = C = ~unit_matrix
    

    So multiply both matrices and check for the unit matrix (a small check is sketched after the list below) ...

    • the sum of the absolute values of all non-diagonal elements of C should be close to 0.0
    • all diagonal elements of C should be close to +1.0
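
    Here is a minimal sketch of that check for column-major 4x4 matrices a[16], b[16] (the 1e-4 tolerance is just an example); it multiplies them and returns the largest deviation from the unit matrix:

    #include <math.h>

    double inverse_error(const double *a,const double *b) // max |(A*B) - I| element
    {
        double err=0.0;
        for (int i=0;i<4;i++)                   // row of C = A*B
            for (int j=0;j<4;j++)               // column of C
            {
                double c=0.0;                   // C[i][j] = sum_k A[i][k]*B[k][j]
                for (int k=0;k<4;k++) c+=a[(k*4)+i]*b[(j*4)+k];   // column-major indexing
                double d=fabs(c-((i==j)?1.0:0.0));
                if (d>err) err=d;
            }
        return err;                             // e.g. err<1e-4 => the inverse looks OK
    }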
Spektre
  • Your answer is very good and complete, but it doesn't solve my issue, because I do build the matrices from a basis (camera, scale, etc.). So the degeneration is not due to stacking but due to big translations with small scale. Perhaps "unstable matrix" expresses my concern better than "degenerate". – Ripi2 Feb 17 '17 at 11:47
  • @Ripi2 I have re-edited my answer and added the stuff from the comments in a more comprehensible way – Spektre Feb 17 '17 at 14:07
0

After some experiments I see that (speaking of transformation matrices, not arbitrary matrices) the diagonal of the matrix m (i.e. the scaling factors, before inverting) is mainly responsible for the determinant value.

So I compare the product p = m[0] · m[5] · m[10] · m[15] (if all of them are != 0) with the determinant. If they are similar (0.1 < p/det < 10), I can somewhat "trust" the inverse matrix; a small sketch of this check is below. Otherwise I have numerical issues that advise changing the rendering strategy.
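
Here is a minimal sketch of that heuristic, assuming a column-major 4x4 matrix m[16] and a determinant det computed elsewhere (the 0.1 .. 10 bounds are the ones above):

    #include <math.h>

    bool inverse_looks_trustworthy(const double *m,double det)
    {
        double p=m[0]*m[5]*m[10]*m[15];         // product of the diagonal (scaling factors)
        if ((p==0.0)||(det==0.0)) return false; // heuristic does not apply
        double r=fabs(p/det);
        return (r>0.1)&&(r<10.0);               // same order of magnitude => "trust" the inverse
    }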

Ripi2
  • You know, if you multiply the direct and inverse matrices you should obtain the unit matrix, so you can check for that. But if you use skew or any non-linear transform or projections, then your matrix can become non-invertible – Spektre Feb 17 '17 at 12:41