
I have a simple camera-based 'modern' OpenGL 3D graphics display for rendering relatively simple objects constructed from collections of specified points, lines and curves (e.g. a cube, cylinder, etc.). Three differently colored, fixed-length lines are drawn that intersect each other at the center of world space, to visually represent the XYZ Cartesian axes. The user can, via mouse control, pan the view, rotate the view AND zoom in/out (via mouse wheel movement).

If I zoom in a long way on the origin of the axes, far enough to visually separate some rendered points that lie very close together near the origin (say, 0.000001 length units apart), I get problems with rendering accuracy:

(1) the three axis lines no longer ALL intersect each other at the same point (the origin). Two of the axes intersect, while the third axis line crosses each of those two lines separately, a small distance away from the origin; the amount of separation of the third axis varies slightly with viewing rotation.

AND

(2) Points that, for example, are intended to lie exactly on one of the axes are no longer rendered as such, and instead appear slightly off the axis line (again, the amount of separation of the points from the axis varies a little with viewing rotation).

[screenshot: zoomed-in view of the origin showing the misaligned axis lines]

To increase accuracy I have changed from the default GLfloat to GLdouble and modified all model-geometry-related code, such as specifying vertex positions, distances, etc., to use double precision (i.e. dvec3 instead of the default vec3, etc.). But this makes no difference. [NOTE: The only items that continue to use GLfloat instead of GLdouble are things like the RGB values for the colors of rendered points and lines.]

How do I maintain accuracy of rendering with extreme zooming in to very small scales?

DavidH
    "The only items that continue to use GLfloat ..." also gl_Position -- you cannot set it to dvec4 from what I know, and that's the one that actually matters. – Yakov Galka May 26 '23 at 04:50

2 Answers


You must be rendering your lines with just two points that are far from the origin, like (-1,0,0) and (1,0,0).

When those are projected onto the zoomed-in viewport (say, at scale S), their floating-point coordinates become very large (on the order of S). The rasterizer then needs to clip them when rendering onto the screen, where the coordinates within the viewport are comparatively small (<1). This results in a loss of precision: since the endpoints of the lines are imprecise (rounding error of ε·S), the interpolated result within the viewport is going to be just as imprecise (i.e. up to ε·S off the theoretical value).

I expect the precision of the rasterizer to be about ε = 2^−23 ≈ 0.0000001 if it uses 32-bit floating point numbers internally, which agrees with the scale at which you start observing the effect. For example, at a zoom scale of S ≈ 10^6 (enough to separate points 0.000001 units apart), the error is ε·S ≈ 2^−23 · 10^6 ≈ 0.1 in viewport units, which is easily visible. Note that this is not affected by the precision of the attributes; it is internal to the rasterizer, and could very well be hardwired in the circuitry.

The solution is actually rather simple. All you need to do is split your lines so they pass through the origin explicitly, i.e. render each axis as two segments: (-1,0,0) to (0,0,0), and (0,0,0) to (1,0,0). This way the origin will be projected to its exact location, and the rounding errors of the endpoints outside the viewport will have only a minimal influence on the rest of the drawn line.
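
For illustration, a minimal sketch of the split axis geometry (a hypothetical layout of mine, not the asker's actual code; plain GLfloat shown, drawn with GL_LINES):

    // Each axis is two segments that meet exactly at the origin,
    // rather than one segment spanning from -1 to +1.
    const GLfloat axisVerts[] = {
        // X axis, split at the origin
        -1.f, 0.f, 0.f,   0.f, 0.f, 0.f,
         0.f, 0.f, 0.f,   1.f, 0.f, 0.f,
        // Y axis, split at the origin
         0.f,-1.f, 0.f,   0.f, 0.f, 0.f,
         0.f, 0.f, 0.f,   0.f, 1.f, 0.f,
        // Z axis, split at the origin
         0.f, 0.f,-1.f,   0.f, 0.f, 0.f,
         0.f, 0.f, 0.f,   0.f, 0.f, 1.f,
    };
    // ... upload to a VBO, then: glDrawArrays(GL_LINES, 0, 12);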

Yakov Galka
  • Many thanks for this detailed advice and suggestion. Each of the X, Y and Z axis lines was indeed being rendered as a single line spanning much of the display. I have now split each of the axis lines into two lines, each starting at the origin. I can now zoom in on the origin to the limit imposed, and the axes render perfectly joined AND the points are also rendered sitting on the x-axis as they should appear. – DavidH May 27 '23 at 06:31

There is no way (that I know of) to directly pass 64-bit doubles to the interpolators, as the vertex shader truncates gl_Position to 32-bit floats (or less).

Still, there are ways to improve precision:

  1. split values into several floats based on magnitude

    this can be done if you know the value ranges involved; a minimal sketch follows
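
    a sketch of one such split (my own illustration, assuming the integer part fits exactly in a float, i.e. |x| < 2^24):

    double x    = 123456.000001;       // full precision on the CPU
    float  xInt = float(floor(x));     // integer part, exact in a float (floor from <cmath>)
    float  xFrc = float(x - floor(x)); // fractional part keeps its own ~7 significant digits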

  2. emulate passing a double with 3 floats

    so you dissect the double's mantissa (1+52 bits) and store it in 3 floats (1+23 bits each) on the CPU side, then pass those to the vertex shader:

    double x;       //    64-bit input
    float x0,x1,x2; // 3x 32-bit output
    x0=float(x); x-=x0; // top ~24 bits of the mantissa
    x1=float(x); x-=x1; // next ~24 bits from the remainder
    x2=float(x);        // whatever is left
    

    then on the fragment side you reconstruct it back:

    double x;       //    64-bit output
    float x0,x1,x2; // 3x 32-bit input
    x =x0;          // sum the parts from largest to smallest
    x+=x1;
    x+=x2;
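
    for completeness, a hedged GLSL sketch of that reconstruction (my own, not this answer's exact code; it assumes fp64 support in the shader via GLSL 4.00+ or the GL_ARB_gpu_shader_fp64 extension):

    #version 400 core
    in float x0;    // high part, passed down from the vertex shader
    in float x1;    // middle part
    in float x2;    // low part
    out vec4 fragColor;
    void main()
    {
        // sum from largest to smallest on the fp64 ALU
        double x = double(x0) + double(x1) + double(x2);
        fragColor = vec4(vec3(float(x)), 1.0); // use x as needed
    }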
    
  3. use relative coordinates

    so if you know your view is centered around some point p0, you just subtract that value from all coordinates (and from the camera position) before you apply zoom (or render). This will significantly lower the number of mantissa bits needed and overcome the precision problems (up to a degree); a sketch follows after this list

    this helped me, for example, with a similar high-zoom problem

    Also, this way does not require shaders (in case you're stuck with old GL)
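
A minimal CPU-side sketch of the relative-coordinates idea (my own illustration using GLM names; p0 and the rebase function are hypothetical, not this answer's actual code):

    #include <glm/glm.hpp>

    // Rebase a world-space position around the view center p0 before
    // narrowing to float for the GPU: the small difference fits in a
    // float's mantissa even when the absolute coordinates do not.
    glm::vec3 rebase(const glm::dvec3 &world, const glm::dvec3 &p0)
    {
        return glm::vec3(world - p0); // subtract in double, then narrow
    }
    // The camera position gets the same treatment, so it ends up
    // sitting near the origin of the rebased space.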

Note that neither #1 nor #2, nor using the built-in geometry rendering, ever really helped me (probably the shader compiler's doing on my long-outdated gfx cards). In such a case you would need to bypass the primitive rasterizer completely and render the primitives on your own:

so render the BBOX QUAD of your primitive and, inside the shader, use an SDF to discard the fragments outside it. Simple primitives like lines, triangles, etc. can be decided in O(1) with a simple equation; more complex shapes need correspondingly more work.
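
A hedged GLSL sketch of the simplest such case, a line segment (my own minimal version; the uniform names are illustrative): render a quad covering the segment's bounding box and discard fragments farther than half the line width from it:

    #version 330 core
    uniform vec2  a, b;      // segment endpoints in window space
    uniform float halfWidth; // half the desired line width, in pixels
    out vec4 fragColor;
    void main()
    {
        vec2  p  = gl_FragCoord.xy;
        vec2  ab = b - a;
        float t  = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
        float d  = length(p - (a + t * ab)); // distance to the segment
        if (d > halfWidth) discard;          // the O(1) SDF test
        fragColor = vec4(1.0);
    }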

Spektre