I'm designing an application that has an OpenGL processing pipeline (a collection of shaders) and simultaneously requires the end user to see the unprocessed camera preview.
For the sake of example, suppose you want to show the user the camera preview and at the same time count the number of red objects in the scene the camera sees, but the output of any shaders you use for the counting, such as hue filtering, should not be visible to the user.
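For concreteness, one stage of such a chain could be a fragment shader that thresholds "red enough" pixels so a later stage can count them. A purely illustrative sketch (the uniform names and threshold values are invented):

```java
// One made-up stage of the detection chain: map "red enough" pixels to white
// and everything else to black, so a later pass can count the white pixels.
private static final String RED_THRESHOLD_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n"
        + "precision mediump float;\n"
        + "uniform samplerExternalOES uCameraTexture;\n" // camera frames arrive as an external texture
        + "varying vec2 vTexCoord;\n"
        + "void main() {\n"
        + "  vec3 c = texture2D(uCameraTexture, vTexCoord).rgb;\n"
        + "  float isRed = step(0.5, c.r) * step(c.g, 0.3) * step(c.b, 0.3);\n"
        + "  gl_FragColor = vec4(vec3(isRed), 1.0);\n"
        + "}\n";
```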
How would I go about setting this up properly?
I know I can set up a camera preview, receive the camera frame data in YUV format in the preview callback, dump that into an OpenGL texture, and process the frame that way. However, that approach has performance problems: I have to round-trip the data from the camera hardware to the VM and then pass it back to GPU memory. To avoid this, I'm using a SurfaceTexture to get the data from the camera directly in an OpenGL-understandable format and pass it to my shaders.
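A minimal sketch of that hookup (the class and method names are just for illustration; it must run on the thread that owns the GL context):

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import java.io.IOException;

class CameraGlHelper {
    // Bind the camera's preview to an external OES texture; the camera then
    // writes each frame straight into GPU memory, with no YUV round trip.
    static SurfaceTexture attachCameraToGlTexture(Camera camera) throws IOException {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        SurfaceTexture cameraTexture = new SurfaceTexture(tex[0]);
        camera.setPreviewTexture(cameraTexture); // frames now land in tex[0]
        camera.startPreview();
        return cameraTexture;
    }
}
```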
I thought I'd be able to show that same unprocessed SurfaceTexture to the end user, but TextureView has no constructor or setter that accepts the SurfaceTexture I want it to render. It always creates its own.
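As far as I can tell, the only way to get at TextureView's SurfaceTexture is through its listener, which hands out the view's own instance (sketch; textureView comes from the layout):

```java
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        // "surface" is TextureView's own SurfaceTexture; I can render into it
        // (e.g. via eglCreateWindowSurface), but I can't hand the view mine.
    }
    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {}
    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) { return true; }
    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {}
});
```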
This is an overview of my current setup:
- GLRenderThread: this class extends Thread, sets up the OpenGL context, display, etc., and uses a SurfaceTexture as the surface (the 3rd parameter of eglCreateWindowSurface); see the sketch after this list.
- GLFilterChain: A collection of shaders that perform detection on the input texture.
- Camera: Uses a separate SurfaceTexture, which serves as the input of GLFilterChain and receives the camera's preview frames.
- Finally, a TextureView that displays the GLRenderThread's SurfaceTexture.
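For the GLRenderThread item above, the setup is along these lines (condensed sketch using EGL14, error checking omitted; surfaceTexture is the SurfaceTexture the thread renders into):

```java
EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] version = new int[2];
EGL14.eglInitialize(display, version, 0, version, 1);

// Pick an RGB888, GLES2-capable config.
int[] configAttribs = {
        EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_NONE };
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext context = EGL14.eglCreateContext(
        display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

// The SurfaceTexture is the 3rd parameter mentioned above.
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
        display, configs[0], surfaceTexture, new int[]{ EGL14.EGL_NONE }, 0);
EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);
```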
Obviously, with this setup, I'm showing the processed frames to the user, which is not what I want. Further, the processing is not real-time: I run the input from the Camera through the chain once and, once all filters are done, call updateTexImage to grab the next frame from the Camera. My processing runs at around 10 frames per second on a Nexus 4.
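Concretely, the loop inside GLRenderThread.run() looks roughly like this (simplified; frameAvailable is set by the camera SurfaceTexture's OnFrameAvailableListener):

```java
while (running) {
    synchronized (frameLock) {
        while (!frameAvailable) {
            try {
                frameLock.wait(); // block until the camera produces a frame
            } catch (InterruptedException e) {
                return;
            }
        }
        frameAvailable = false;
    }
    cameraTexture.updateTexImage();            // latch the next camera frame
    filterChain.render();                      // run every detection filter
    EGL14.eglSwapBuffers(display, eglSurface); // present the processed result
}
```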
I feel that I probably need to use two GL contexts, one for the real-time preview and one for the processing, but I'm not certain. I'm hoping someone can push me in the right direction.
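In case it helps frame the question, this is the kind of two-context setup I'm imagining, with the second context sharing textures with the first so both can sample the camera frame (untested; display and configs are from the setup above):

```java
int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
// Context that draws the untouched preview to the screen.
EGLContext previewContext = EGL14.eglCreateContext(
        display, configs[0], EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);
// Processing context sharing textures with the preview context, so the
// filter chain can sample the same camera texture off-screen.
EGLContext processingContext = EGL14.eglCreateContext(
        display, configs[0], previewContext, ctxAttribs, 0);
```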