I have a simple camera-based 'modern' OpenGL 3D graphics display for rendering relatively simple objects constructed from collections of specified points, lines and curves (e.g. a cube, cylinder, etc.). Three fixed-length lines in different colors are drawn intersecting at the center of world space, to visually represent the XYZ Cartesian axes. The user can pan, rotate AND zoom the view with the mouse (zooming via the mouse wheel).
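For context, a minimal sketch of roughly how the axis lines are set up and drawn (simplified and originally all GLfloat; `shaderProgram`, `mvpLocation` and `mvp` stand in for my actual shader and camera-matrix handling):

```cpp
#include <glad/glad.h>           // or whichever GL loader is in use
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Three axis lines of fixed length L, crossing at the world origin.
// Each vertex: position (3 floats) followed by color (3 floats).
const GLfloat L = 10.0f;
const GLfloat axisVerts[] = {
    -L, 0, 0,  1, 0, 0,   L, 0, 0,  1, 0, 0,   // X axis, red
    0, -L, 0,  0, 1, 0,   0, L, 0,  0, 1, 0,   // Y axis, green
    0, 0, -L,  0, 0, 1,   0, 0, L,  0, 0, 1,   // Z axis, blue
};

// One-time setup of the axis VAO/VBO.
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(axisVerts), axisVerts, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat),
                      (void*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);

// Per frame: upload the model-view-projection matrix and draw the 3 lines.
glUseProgram(shaderProgram);
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
glBindVertexArray(vao);
glDrawArrays(GL_LINES, 0, 6);
```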
If I zoom in hard on the origin of the axes, far enough to visually separate rendered points that lie very close together near the origin (say, 0.000001 length units apart), I run into rendering accuracy problems:
(1) The three axis lines no longer all intersect at the same point (the origin). Two of the axes still intersect, while the third crosses each of those two lines separately, a small distance away from the origin, and the amount of separation varies slightly with the viewing rotation.
AND
(2) Points that are intended to lie exactly on one of the axes are no longer rendered as such; instead they appear to sit slightly off the axis line (again, the amount of separation between the points and the axis varies a little with the viewing rotation).
To increase accuracy I have changed from the default GLfloat to GLdouble and modified all model-geometry related code (specifying vertex positions, distances, etc.) to use double precision (i.e. dvec3 instead of the default vec3, and so on), but this makes no difference; a sketch of the kind of change I made follows below. [NOTE: the only items that continue to use GLfloat instead of GLdouble are things like the RGB values for the colors of rendered points and lines.]
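Roughly, the double-precision change looks like this (an illustrative, simplified sketch assuming a GL 4.x context so that double-precision attributes and uniforms are available; `buildModelGeometry`, `computeView`, `computeProjection`, `vbo` and `mvpLocation` are placeholders for my real code):

```cpp
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <vector>

// Geometry is now built from GLdouble / glm::dvec3 instead of GLfloat / glm::vec3.
std::vector<glm::dvec3> positions = buildModelGeometry();   // placeholder

// Upload the double-precision positions.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER,
             positions.size() * sizeof(glm::dvec3),
             positions.data(), GL_STATIC_DRAW);

// The 'L' variant keeps the attribute as a true double in the shader;
// plain glVertexAttribPointer with GL_DOUBLE would convert it to float.
glVertexAttribLPointer(0, 3, GL_DOUBLE, sizeof(glm::dvec3), (void*)0);
glEnableVertexAttribArray(0);

// Camera / model matrices are kept in double precision on the CPU side too.
glm::dmat4 model(1.0);
glm::dmat4 view = computeView();          // placeholder
glm::dmat4 proj = computeProjection();    // placeholder
glm::dmat4 mvp  = proj * view * model;

// Double-precision uniform upload; the vertex shader correspondingly
// declares 'in dvec3 aPos;' and 'uniform dmat4 uMVP;' (#version 410 or later).
glUniformMatrix4dv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
```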
How do I maintain rendering accuracy when zooming in this far, down to very small scales?