I'm currently writing an Android app in which I need to cache video frames so that I can scrub back and forth with little to no delay.
Right now I'm letting Android decode the video frames for me by providing a Surface to the MediaCodec's configure() call and then calling releaseOutputBuffer() with the render flag set to true.
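Roughly, my decoder setup looks like this (a simplified sketch; error handling is omitted and names like `oesTextureId` and `TIMEOUT_US` are illustrative):

```java
// Create a SurfaceTexture backed by a GL_TEXTURE_EXTERNAL_OES texture,
// and wrap it in a Surface the decoder can render into.
SurfaceTexture surfaceTexture = new SurfaceTexture(oesTextureId);
Surface outputSurface = new Surface(surfaceTexture);

MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(format, outputSurface, null, 0); // decode to the Surface
decoder.start();

// ... feed input buffers as usual ...

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIndex = decoder.dequeueOutputBuffer(info, TIMEOUT_US);
if (outIndex >= 0) {
    // render == true: the frame is sent to the Surface/SurfaceTexture
    decoder.releaseOutputBuffer(outIndex, true);
}
```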
The only way I've found to access the decoded surface data (besides parsing the returned ByteBuffer, whose format appears to be device-dependent) is to call updateTexImage() on the SurfaceTexture linked to the Surface, which attaches the frame to the GL_TEXTURE_EXTERNAL_OES target, and then render that into a GL_TEXTURE_2D target texture I created myself in order to cache it.
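The caching step is essentially a texture-to-texture copy via an FBO, something like this (again a sketch; `drawFullscreenQuad` is a hypothetical helper that draws with a shader using `samplerExternalOES`):

```java
// Latch the latest decoded frame into the external OES texture.
surfaceTexture.updateTexImage();

// Bind an FBO whose color attachment is my cache texture (GL_TEXTURE_2D).
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, cacheFbo);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
        GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, cacheTextureId, 0);

// Render a fullscreen quad that samples the external texture,
// effectively copying the frame into the cache texture as RGB.
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTextureId);
drawFullscreenQuad(copyShaderProgram); // hypothetical helper

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
```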
I would like to optimize this caching process and decode the frames on a different thread. With my current method, this means I would have to create another EGL context for the video decoder, share the contexts, and so on.
My question is: is it possible to access the EGLImage or native buffer data associated with the Surface without calling updateTexImage()?
That way I could cache the EGLImage itself (which, according to EGL_ANDROID_image_native_buffer, does not require an EGL context). This would also cache the data in YUV format, which would be much more storage-efficient than the raw RGB textures I'm caching now.