
Background

I have a program for developing car control systems that uses a combination of the Tcl scripting language and OpenGL to render the behaviour of the car driving on a road.

What I'm doing is implementing Oculus Rift support for it. So far I have managed to solve the sensor part, and by right-clicking the window where the car is driving I can send it to the Oculus Rift. So far it looks like this: Youtube-Video

The visual portion of the program already has plenty of features, including options like adjusting the FOV, fisheye lenses, camera orientation, inside-car view, etc.

After looking around the software folder I found a Tcl file with the following lines:

# ... cut here

gl pushmatrix
gl loadidentity
gl translate $x $y $z
gl rotate $xrot 1 0 0
gl rotate $yrot 0 1 0
gl rotate $zrot 0 0 1
gl translate $offx $offy $offz
lassign [lrange [OGL get modelviewmatrix] 12 14] tx ty tz
gl popmatrix


dict set ::View($view) CameraMode FixedToBodyA
dict set ::View($view) xrot $xrot
dict set ::View($view) yrot $yrot
dict set ::View($view) zrot [expr {$zrot + $zrot_eyeAngle}]
dict set ::View($view) dist $conf(dist)
dict set ::View($view) VPtOffset_x $tx
dict set ::View($view) VPtOffset_y $ty
dict set ::View($view) VPtOffset_z $tz
dict set ::View($view) FieldOfView $conf(fov)
dict set ::View($view) AspectRatio $conf(AspectRatio)

return;
}

I did not find any line that handles the rendering itself, but the "gl" commands were enough to make me understand that this Tcl file runs directly attached to the rendering process. So I extended the above with this:

# load a self-written DLL that wraps OVR functionality (GitHub link below)

# now run tclovrGetData from the loaded DLL to get sensor data
set data [tclovrGetData]
# data = x y z xrot yrot zrot
lassign $data x y z xrot yrot zrot

gl pushmatrix
gl loadidentity
gl translate $x $y $z
gl rotate $xrot 1 0 0
...

GitHub link to the DLL

As you can see in the video, it looks good on the monitor, but as we know, the lenses of the Rift introduce distortion (which was only slight for me) and quite a lot of chromatic aberration.

My Idea

What I want is to grab that "image/texture/frame" on the screen in the video every 10-20 ms (I'm still new to the terminology) and do some color filtering on it (I want to see if I can reduce the chromatic aberration). Then, perhaps (to make everything independent), create a new window with the modified image and send that one to the Rift. Basically, get the image in the required "format" to be able to perform computations on it.

My idea was (since we are attached to the rendering process) to add some extra lines to the above Tcl file that call additional functions in my DLL, which could do the following:

# Begin Oculus rendering loop
# 1. get the frame that is currently being rendered (in this process)
# 2. copy this frame to memory
# 3. perform some operations (color filtering) on this frame in memory
# 4. send this frame to a new window
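To make step 3 concrete, here is a minimal CPU-side sketch of the kind of color filtering I mean (in Python rather than Tcl, just to illustrate; the per-channel scale factors are made-up values and would need tuning to the actual lens):

```python
# Chromatic aberration is, to first order, the R and B channels being
# magnified slightly differently than G. This counters it by sampling
# R and B at radially scaled coordinates around the image center.
# The scale factors (1.01 / 0.99) are placeholders, not lens-measured values.

def correct_chromatic_aberration(pixels, width, height,
                                 r_scale=1.01, b_scale=0.99):
    """pixels: flat row-major list of (r, g, b) tuples. Returns a new list."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    out = []
    for y in range(height):
        for x in range(width):
            g = pixels[y * width + x][1]  # green is the reference channel
            # sample red and blue at coordinates scaled about the center,
            # clamped to the image bounds
            rx = min(max(int(round(cx + (x - cx) * r_scale)), 0), width - 1)
            ry = min(max(int(round(cy + (y - cy) * r_scale)), 0), height - 1)
            bx = min(max(int(round(cx + (x - cx) * b_scale)), 0), width - 1)
            by = min(max(int(round(cy + (y - cy) * b_scale)), 0), height - 1)
            out.append((pixels[ry * width + rx][0], g,
                        pixels[by * width + bx][2]))
    return out
```

The center pixel is a fixed point of the radial scaling, so it comes out unchanged, and the green channel is never touched.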

Now my question:

  • Would something like this be possible?
  • Could someone point me in the right direction, i.e. some GL functions I could use?
  • Maybe Direct3D works as well?

I did read This post on Stack Overflow but failed to understand it :/

  • Shouldn't the display hardware handle that sort of thing for you? Having to do in software the sorts of things that hardware ought to handle is never going to be efficient or sensible… – Donal Fellows Nov 05 '14 at 16:33
  • Yes, it should. Well, the Oculus Rift is still in a development state. I also don't know if, when rendering using the Oculus software, some sort of post-processing is done to minimize this. But at least I don't see chromatic aberration correction coming from the lenses. My first plan actually was to use the Oculus rendering functions, but I really got overwhelmed, especially because I have limited access to the simulator software and therefore don't know exactly how to get the texture/frame info to reroute it to the Oculus rendering engine. – Skavee Nov 06 '14 at 10:27

1 Answer


Would something like this be possible?

Yes, it would. But it might not be the best solution, since copying the data back to CPU memory and processing it there will most probably not be fast enough for a realtime application.

Could someone point me into the right direction? I.e. some gl functions I could use?

If you really want to do this, you first have to render your image to a framebuffer (or, more precisely, to a texture attached to a framebuffer object). This texture can then be copied to CPU memory with the function you already found in the related post (glGetTexImage). There are also other options like pixel buffer objects, which might be a bit faster (see here).
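A minimal sketch of the readback step (shown in Python with PyOpenGL for brevity; it assumes an active GL context, and `tex`/`fbo` are hypothetical ids of your texture and framebuffer object, so it is not runnable on its own):

```python
# Sketch only: requires a live OpenGL context created elsewhere.
from OpenGL.GL import (glBindTexture, glGetTexImage, glBindFramebuffer,
                       glReadPixels, GL_TEXTURE_2D, GL_RGB, GL_UNSIGNED_BYTE,
                       GL_FRAMEBUFFER)

def read_back_texture(tex):
    # Route 1: read the texture the scene was rendered into
    # (glGetTexImage, as in the linked post)
    glBindTexture(GL_TEXTURE_2D, tex)
    return glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE)

def read_back_framebuffer(fbo, width, height):
    # Route 2: bind the FBO and read its pixels with glReadPixels
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    return glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
```

Either route stalls the pipeline while the copy happens, which is exactly the synchronization cost discussed below; pixel buffer objects make the transfer asynchronous.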

The faster solution

All of these approaches require you to copy data from GPU memory to CPU memory, process it there, and copy everything back. This requires a lot of synchronization between GPU and CPU, and in addition, processing the image on the CPU will be slow.

Depending on the available OpenGL features, there are methods to perform the whole image processing on the GPU. The easiest one might be to use shaders:

  1. Render the scene to a framebuffer.
  2. Draw a textured quad using the texture from (1), and do all the necessary operations in the fragment shader.

With this method there is also no need for a second window, since the post-processed quad can be displayed in the first window. For more about post-processing on the GPU you might have a look here.
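The fragment shader in step 2 could look roughly like this (a sketch in old-style GLSL to match legacy GL; `sceneTex` and the per-channel scale constants are placeholders that would need tuning to the lens):

```glsl
// Post-processing fragment shader sketch: samples R and B at slightly
// scaled texture coordinates around the center to counter the lens'
// per-channel magnification. Scale factors here are made-up values.
uniform sampler2D sceneTex;   // texture the scene was rendered into (step 1)

void main() {
    vec2 center = vec2(0.5, 0.5);
    vec2 d = gl_TexCoord[0].st - center;
    float r = texture2D(sceneTex, center + d * 1.01).r; // red, placeholder
    float g = texture2D(sceneTex, center + d).g;        // green, reference
    float b = texture2D(sceneTex, center + d * 0.99).b; // blue, placeholder
    gl_FragColor = vec4(r, g, b, 1.0);
}
```

This keeps the whole filter on the GPU: no readback, no second copy, and it runs once per output pixel.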

BDL
  • Thanks for the reply. If I did read [this](http://en.wikibooks.org/wiki/OpenGL_Programming/Post-Processing#Drawing) correctly, I basically have to bind our own FB as the render target, do the processing, and then bind the screen buffer (0) again. One thing is still a bit unclear. I assume that there can only be one destination FB per process, and therefore `glBindFramebuffer(GL_FRAMEBUFFER, fbo);` will bind the correct image (in a window) to our FB, assuming I'm doing that call from within the same process. – Skavee Nov 05 '14 at 13:52
  • An application can only have one backbuffer, but multiple framebuffer objects. – BDL Nov 05 '14 at 17:48
  • Okay, I think I'm starting to understand. Basically, if I do the following: `glBindFramebuffer(GL_FRAMEBUFFER, fbo);` `// now do operations on fbo` `glBindFramebuffer(GL_FRAMEBUFFER, 0);` I'd effectively be selecting `fbo` as the target to render to, then I'd process what is inside `fbo`, and once I break the binding in the third line, what is in `fbo` will be rendered to the screen? – Skavee Nov 05 '14 at 19:16
  • Yes. FBOs allow you to render to textures or to renderbuffers. When you bind 0 to GL_FRAMEBUFFER, the window's backbuffer is bound. – BDL Nov 05 '14 at 19:37