
I am trying to implement the following transformation.

My original world-space coordinates (2D) are x=1586266800 and y=11812.

I want:

  1. the bottom left corner of the OpenGL image to represent coordinates (1586266800, 11800)
  2. the top right corner of the OpenGL image to represent coordinates (1586267400, 11900)

In order to do that I plan to combine three transformation matrices:

  1. Translate so that the point x=1586266800, y=11800 becomes the origin
  2. Scale so that a width of 600 and a height of 100 map to a 2×2 range
  3. Translate again by -1.0f in both axes so the [0, 2] range becomes OpenGL's [-1, 1], with the bottom-left corner at (-1, -1)
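The three steps above can be checked numerically before touching any shader code. This is an illustrative pure-Python sketch (the `matmul`, `translate` and `scale` helpers are mine, not from the original post):

```python
# Verify the plan: translate to the origin, scale to [0, 2], shift to [-1, 1].

def matmul(a, b):
    """Multiply a 4x4 matrix by a 4x4 matrix or a length-4 vector."""
    if isinstance(b[0], list):  # matrix * matrix
        return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]
    return [sum(a[i][k] * b[k] for k in range(4)) for i in range(4)]  # matrix * vector

def translate(tx, ty):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, 0], [0, 0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

translation1 = translate(-1586266800, -11800)  # step 1: move view origin to (0, 0)
scale_m      = scale(2 / 600, 2 / 100)         # step 2: 600 wide, 100 tall -> [0, 2]
translation2 = translate(-1, -1)               # step 3: [0, 2] -> [-1, 1]

candle = [1586266800, 11812, 0, 1]
ndc = matmul(translation2, matmul(scale_m, matmul(translation1, candle)))
print(ndc[:2])  # [-1.0, -0.76]
```

Note that applying `translation1` to the vector first keeps the big offset out of the later arithmetic.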

I use the following transformation matrices:

Translation Matrix:

| 1 0 0 tx |
| 0 1 0 ty |
| 0 0 1 tz |
| 0 0 0 1  |

Scale Matrix:

| sx  0  0  0 |
|  0 sy  0  0 |
|  0  0 sz  0 |
|  0  0  0  1 |

In Octave I can implement the transformation as follows, multiplying three matrices:

>> candle
candle =
  1586266800
       11812
           0
           1
>> translation1
translation1 =
    1   0   0   -1586266800
    0   1   0        -11800
    0   0   1             0
    0   0   0             1

>> scale
scale =
   0.00333333333333333      0   0   0
                     0   0.02   0   0
                     0      0   1   0
                     0      0   0   1

(where `0.00333333333333333 = 2/600` and `0.02 = 2/100`)

>> translation2
translation2 =
    1   0   0   -1
    0   1   0   -1
    0   0   1    0
    0   0   0    1

>> translation2*scale*translation1*candle
ans =

                    -1
    -0.759999999999991
                     0
                     1

That places the point in the right spot on a [-1.0f, 1.0f] OpenGL screen.
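Since all three matrices only translate and scale along the axes, the whole chain also collapses to a one-line affine map per axis. A hedged Python sketch (the `to_ndc` helper is illustrative, not from the post):

```python
# translation2 * scale * translation1 reduces, per axis, to:
#   ndc = 2 * (world - origin) / extent - 1

def to_ndc(world, origin, extent):
    # Subtracting the big offset FIRST keeps the intermediate values small.
    return 2.0 * (world - origin) / extent - 1.0

x = to_ndc(1586266800, 1586266800, 600)  # -> -1.0
y = to_ndc(11812, 11800, 100)            # -> -0.76
print(x, y)
```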

Now I am trying to replicate that in my geometry shader, which receives the original world-space coordinates from the vertex shader.

I tried this:

#version 330 core

layout (points) in;
layout (line_strip, max_vertices = 12) out;

in uint gs_in_y[];
in uint gs_in_x[];

uniform uint xOrigin;
uniform uint xScaleWidth;
uniform uint yOrigin;
uniform uint yScaleWidth;

void main()
{

    // TRANSLATION MATRIX
    // [ 1 0 0 tx ]
    // [ 0 1 0 ty ]
    // [ 0 0 1 tz ]
    // [ 0 0 0 1  ]
    // mat3 m = mat3(
    //  1.1, 2.1, 3.1, // first column (not row!)
    //  1.2, 2.2, 3.2, // second column
    //  1.3, 2.3, 3.3  // third column
    //  );
    mat4 translation = mat4(
        1.0f, 0, 0, -xOrigin,
        0, 1.0f, 0, -yOrigin,
        0, 0, 1.0f, 0,
        0, 0, 0, 1.0f
    );


    // SCALE MATRIX
    // [ sx  0  0  0 ]
    // [  0 sy  0  0 ]
    // [  0  0 sz  0 ]
    // [  0  0  0  1 ]
    mat4 scale = mat4(
        2.0f/xScaleWidth, 0, 0, 0,
        0, 2.0f/yScaleWidth, 0, 0,
        0, 0, 1.0f, 0,
        0, 0, 0, 1.0f
    );

    // FINAL TRANSLATION
    mat4 translationGl = mat4(
        1.0f,      0,    0,   -1.0f,
           0,   1.0f,    0,   -1.0f,
           0,      0, 1.0f,   0,
           0,      0,    0,   1.0f
    );

    gl_Position = translationGl * scale * translation * vec4(gs_in_x[0], gs_in_y[0], 0.0, 1.0);
    EmitVertex();    
    gl_Position = translationGl * scale * translation * vec4(gs_in_x[0]+30, gs_in_y[0], 0.0, 1.0); 
    EmitVertex();    
    EndPrimitive();

}
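One caveat worth flagging for the shader above: GLSL `mat4` constructors consume their scalar arguments in column-major order (exactly as the commented-out `mat3` note says), so writing the values out row by row builds the transpose of the intended matrix. A small Python sketch of that pitfall (the `glsl_mat4` helper only mimics the constructor's fill order):

```python
# GLSL mat4(...) fills columns first. Feeding the ROWS of
#   | 1 0 0 tx |
#   | 0 1 0 ty |
#   | 0 0 1 0  |
#   | 0 0 0 1  |
# into the constructor therefore lands tx, ty in the bottom row,
# i.e. it builds the transpose of the intended translation matrix.

def glsl_mat4(*vals):
    """Mimic GLSL: 16 scalars are consumed column by column."""
    cols = [vals[i * 4:(i + 1) * 4] for i in range(4)]
    # Convert to row-major nested lists for easy indexing: m[row][col].
    return [[cols[c][r] for c in range(4)] for r in range(4)]

tx, ty = -1586266800.0, -11800.0
m = glsl_mat4(
    1.0, 0, 0, tx,   # intended as the first ROW...
    0, 1.0, 0, ty,
    0, 0, 1.0, 0,
    0, 0, 0, 1.0,
)
print(m[3][0], m[3][1])  # tx and ty ended up in the last row, not the last column
```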
  • Why do these calculations in the shader based purely on uniforms `xOrigin`, `yOrigin` etc.? Wouldn't it be easier to calculate the transforms on the cpu using [`glm`](https://glm.g-truc.net/0.9.9/index.html) and pass the resulting `mat4`'s as uniforms? – G.M. Apr 10 '20 at 10:53
  • I totally understand and agree, but as I have never passed a `mat4` as a uniform I am doing it step by step using what I know works. By replicating the same code I have in Octave (which works) I expect to understand how the transformation works. So far it seems I am not fully understanding how to apply these three transformations in GLSL. The next step would be to pass a single `mat4` with the transformation as a uniform and use just one step in the shader, as you say. – M.E. Apr 10 '20 at 10:58
  • @M.E. you might be hitting precision problems as `1586266800` is quite big for a float... maybe using `double` (`dmat?`, `dvec?`) will help. Btw these [GLSL debug prints](https://stackoverflow.com/a/44797902/2521214) might help you debug your stuff a lot more – Spektre Apr 10 '20 at 13:15
  • Thanks for the comment @Spektre; as far as I know the maximum float value is 2,147,483,647 and I am dealing with the value 1,586,266,800, so floats should fit here. I will check your links as I definitely miss having some option to debug. – M.E. Apr 10 '20 at 13:31
  • @M.E. you are dismissing the relative difference in magnitudes... you have `1586266800` as an offset but you are dealing with smaller values up to 600, so when using matrix transforms you will be adding those together, and as a float has only 23 bits of mantissa (and the GPU version could have even less) and `1586266800/600` is pretty close to 2^23, you will lose a lot of information due to rounding depending on the order of operations... it will lead to jitter in the position and orientation of objects and the view. – Spektre Apr 10 '20 at 13:38
  • @Spektre thanks for highlighting and clarifying this. I was totally unaware of this issue. I can easily reduce the magnitude by 60 (the number represents epoch timestamps in seconds, and I need to deal with epoch timestamps in minutes, so dividing the data by 60 before moving it to the GPU is totally doable). Hence my number would be 26437780 instead of 1586266800. In your experience, would that be enough, or shall I reduce it further? As an additional measure I can also apply an offset so that zero is minutes since the year 2000, which would further reduce the number. – M.E. Apr 10 '20 at 13:52
  • @M.E. that sounds more like it would be better not to use matrices for the transforms... The idea is to do the operations in the correct order. `+,-` is the most problematic operation, so you should add/remove the offset at the start or at the end (depending on what you do), and then you are OK with floats even without reduction... – Spektre Apr 10 '20 at 13:55
  • Good to know. Thanks – M.E. Apr 10 '20 at 13:57
  • “maximum float value is 2,147,483,647”. This is not true. The maximum float value is on the order of 10^38, but a float only has 7 decimal digits of precision. It can exactly store integers only up to 2^24 = 16,777,216. – Cris Luengo Apr 10 '20 at 14:00
  • Thanks for pointing that out. I read about floats again at https://www.khronos.org/opengl/wiki/OpenGL_Type and understood that GLfloat is an IEEE-754 floating-point value which has just 7 decimal digits of precision. It is even clearer now that the offset needs to be subtracted from the number first (which leads to very small numbers). – M.E. Apr 10 '20 at 14:10
  • @CrisLuengo no one is doubting float can hold big numbers; the problem is adding small and big numbers together, which happens during transformations with big offsets. If the binary exponent difference is near or bigger than the mantissa bits, rounding occurs. We need to mind `|b/a| < 2^23` where `|a| <= |b|`. Otherwise special tweaks are needed, like in here [Is it possible to make realistic n-body solar system simulation in matter of size and mass?](https://stackoverflow.com/a/28020934/2521214) or [ray and ellipsoid intersection accuracy improvement](https://stackoverflow.com/q/25470493/2521214) – Spektre Apr 10 '20 at 14:59
  • @Spektre: I'm not arguing that. I was just correcting OP, who thought that a 32-bit float can represent integers like a 32-bit signed integer can. – Cris Luengo Apr 10 '20 at 15:09
  • Just as a reference in case it might help others: on top of Spektre's and @CrisLuengo's comments, I was using `uint`, which was messing up all the operations (`3329U - 3330U` does not return -1). I have replaced everything with `int`s and I am performing the `+` and `-` operations directly in the vector definition, and it seems I am headed in the right direction now. – M.E. Apr 10 '20 at 17:35
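The precision problem Spektre describes in the comments above can be reproduced on the CPU by round-tripping values through 32-bit floats with the standard `struct` module (the `f32` helper is illustrative, not from the thread):

```python
import struct

def f32(x):
    """Round a Python float (double) to the nearest IEEE-754 single,
    which is what a GLSL float holds."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Near 1.58e9 a 32-bit float's ulp (spacing between representable
# values) is 128, so small offsets get quantized away.
print(f32(1586266800.0))                      # 1586266752.0 — already off by 48
print(f32(1586266830.0) - f32(1586266800.0))  # the +30 offset becomes 128
print(f32(11812.0))                           # 11812.0 — small values stay exact
```

This is why subtracting the large offset before any float math (or on the CPU in double precision) avoids the jitter.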
