Is it possible to "look at" an OpenGL scene which was rendered at e.g. 60 degrees vertical FoV through a frustum/corridor/hole that has a smaller FoV - and have that fill the resulting projection plane?
I think that's not the same thing as "zooming in".
Background:
I'm struggling with the OpenGL transform pipeline for an optical see-through AR display here.
It could be that my understanding of what the OpenGL transform pipeline of my setup really needs is mixed up...
I'm creating graphics that are meant to appear properly located in the real world when being overlaid through AR glasses. The glasses are properly tracked in 3D space.
For rendering the graphics, I'm using OpenGL's legacy fixed-function pipeline. Results are good, but I keep struggling with registration errors that seem to have their root in my combination of glFrustum() plus gluLookAt() not recreating the "perspective impression" correctly.
These AR displays usually don't fill the entire field of view of the wearer but the display area appears like a smaller "window" floating in space, usually ~3-6 feet in front of the user, pinned to head movement.
In OpenGL, I use a layout very similar to Bourke's where (I hope I summarize it correctly) the display's aspect ratio (e.g. 4:3), given by windowwidth and windowheight, defines the vertical field of view. So the FoV is rigidly linked to the window dimensions and the frustum used by OpenGL - while I think I need to combine two frustums (?):
My understanding is that the OpenGL scene must be rendered with parameters equivalent to those of the human eye in order to match up - as the AR glasses allow the user to look through. Let's assume the focal length of the human eye is 22mm (Clark, R.N. Notes on the Resolution and Other Details of the Human Eye. 2007.) and the eye's "sensor size" is 16mm w x 13mm h (my estimate). The calculated vertical FoV is then ~33 degrees - which we feed into the OpenGL pipeline.
The output of such a pipeline would be that either the application window is filled with this "view", or I get a scaled-down version of it, depending on my glViewport settings.
But as the AR glasses need input for only a sub-section, a "smaller window", of the wearer's whole field of view, I think I need a way to "look at" a smaller sub-area of the whole rendered scene - as if I was looking through a tiny hole onto the scene. These glasses, with their "display window", provide a vertical field of view of just under 20 degrees - but feeding that into the OpenGL pipeline as if it were the eye's FoV would be wrong. So, how can I combine these conflicting FoVs? ...or am I on the wrong track here?