It's been demonstrated how to feed `MediaCodec` with `Surface` input (e.g., the camera preview), but are there practical ways of buffering this input before submission to `MediaCodec`?
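For reference, the Surface-input path I mean looks roughly like this (condensed from CameraToMpegTest.java; the resolution, bitrate, and frame-rate values are just placeholders):

```java
import java.io.IOException;

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

// Condensed from CameraToMpegTest.java: the encoder hands back an input
// Surface; camera frames rendered onto it (via an EGL window surface that
// wraps it) go straight into the codec with no ByteBuffer copies.
static MediaCodec createSurfaceEncoder() throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);

    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface inputSurface = encoder.createInputSurface(); // API 18+
    encoder.start();
    // ...wrap inputSurface in an EGL surface and render SurfaceTexture frames...
    return encoder;
}
```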
In my experiments, a Galaxy Nexus exhibits unacceptable hiccups when producing audio/video streams with the direct, synchronous encoding method in CameraToMpegTest.java.
When using `MediaCodec` with `byte[]` or `ByteBuffer` input, we can submit unencoded data to an `ExecutorService` or similar queue for processing, ensuring no frames are dropped even if the device experiences CPU-usage spikes outside our application's control (see the sketch below). However, because of the color-format conversion required between Android's `Camera` and `MediaCodec`, this approach is unrealistic for high-resolution, live video.
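Here's a rough sketch of the `ByteBuffer`-input buffering I mean; the class name, queue capacity, and timeout are my own placeholders, and the expensive color conversion is deliberately left as a stub:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import android.hardware.Camera;
import android.media.MediaCodec;

// Hypothetical buffering layer: the camera callback hands frames to a
// bounded queue, and a single worker thread drains them into the encoder,
// so brief CPU spikes stall the queue instead of dropping frames.
public class BufferedEncoderFeeder implements Camera.PreviewCallback {
    private static final int QUEUE_CAPACITY = 10; // placeholder: ~1/3 s at 30 fps
    private final BlockingQueue<byte[]> frameQueue =
            new ArrayBlockingQueue<byte[]>(QUEUE_CAPACITY);
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final MediaCodec encoder; // assumed configured for ByteBuffer input
    private volatile boolean running = true;

    public BufferedEncoderFeeder(MediaCodec encoder) {
        this.encoder = encoder;
        executor.execute(new Runnable() {
            @Override
            public void run() {
                drainQueue();
            }
        });
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // offer() rather than put(): if the queue is full we drop the frame
        // ourselves instead of blocking the camera's callback thread.
        frameQueue.offer(data.clone());
    }

    private void drainQueue() {
        ByteBuffer[] inputBuffers = encoder.getInputBuffers();
        while (running) {
            try {
                byte[] frame = frameQueue.take();
                int index = encoder.dequeueInputBuffer(10000 /* us */);
                if (index >= 0) {
                    ByteBuffer buf = inputBuffers[index];
                    buf.clear();
                    // NOTE: the costly NV21 -> codec color-format conversion
                    // discussed above would have to happen here.
                    buf.put(frame);
                    encoder.queueInputBuffer(index, 0, frame.length,
                            System.nanoTime() / 1000, 0);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public void stop() {
        running = false;
        executor.shutdownNow();
    }
}
```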
Thoughts:
1. Is there a way to feed the `NativePixmapType` created with `EGL14.eglCopyBuffers(EGLDisplay d, EGLSurface s, NativePixmapType p)` to `MediaCodec`?
2. Can anyone from Android comment on whether harmonizing `ByteBuffer` formats between the Camera and `MediaCodec` is on the roadmap?