I've been porting the following Android test example over to run in a simple Xamarin Android project.
https://bigflake.com/mediacodec/ExtractMpegFramesTest_egl14.java.txt
I'm running a video captured by the camera (on the same device) through this pipeline, but the PNGs I'm getting out the other end are distorted, I assume due to the minefield of Android camera color spaces.
Here are the images I'm getting when running a camera video through the pipeline...
https://i.stack.imgur.com/Ufwtd.jpg
It's hard to tell, but it 'kinda' looks like a single line of the actual image, stretched across the frame. But I honestly wouldn't want to bank on that being the issue, as it could be a red herring.
However, when I run a 'normal' video that I grabbed online through the same pipeline, it works completely fine.
I used the first video listed here (the Lego one): http://techslides.com/sample-webm-ogg-and-mp4-video-files-for-html5
And I get frames like this...
https://i.stack.imgur.com/YxkzK.jpg
Checking some of the ffprobe data, both this video and my camera video have the same pixel format (pix_fmt=yuv420p), but there are differences in color_range and the other color properties (a probe command for dumping these fields is sketched below the listings).
The video that works has,
color_range=tv
color_space=bt709
color_transfer=bt709
color_primaries=bt709
And the camera video just has...
color_range=unknown
color_space=unknown
color_transfer=unknown
color_primaries=unknown
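For reference, the exact fields can be dumped with an ffprobe invocation along these lines (a sketch; the filename is a placeholder):

    ffprobe -v error -select_streams v:0 \
        -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries \
        -of default=noprint_wrappers=1 camera_video.mp4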
The media format of the camera video appears to be semi-planar YUV; at least, the codec output format gets updated to that. I get an output-format-changed message which sets the output format of the MediaCodec to the following (a sketch of that handling follows the dump):
{
mime=video/raw,
crop-top=0,
crop-right=639,
slice-height=480,
color-format=21,
height=480,
width=640,
what=1869968451,
crop-bottom=479,
crop-left=0,
stride=640
}
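That dump is just the MediaFormat the decoder reports when it signals a format change. A minimal sketch of the handling in plain Android Java (decoder and bufferInfo are assumed from the surrounding decode loop; the Xamarin calls are the bound equivalents):

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.util.Log;

    // Inside the decode loop:
    int outIndex = decoder.dequeueOutputBuffer(bufferInfo, 10000 /* timeout, us */);
    if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        MediaFormat newFormat = decoder.getOutputFormat();
        // color-format=21 is MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar,
        // i.e. an NV12-style semi-planar layout on most devices
        Log.d("ExtractFrames", "decoder output format changed: " + newFormat);
    }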
I can also point the codec output at a TextureView instead of the OpenGL surface and just grab the Bitmap that way (obviously slower), and those frames look fine. So maybe it's the OpenGL display of the raw codec output? Does Android's TextureView do its own decoding?
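For what it's worth, the GL path in the linked example never touches the YUV data directly: the decoder renders into a SurfaceTexture, which is then sampled as an external texture, so the YUV-to-RGB conversion happens in the driver. The fragment shader is declared as a string in the Java source, along these lines (paraphrased from STextureRender in the linked test):

    private static final String FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTextureCoord;\n" +
            "uniform samplerExternalOES sTexture;\n" +
            "void main() {\n" +
            "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
            "}\n";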
Note - The reason I'm looking into all this is that I need to run some form of image processing on a raw camera feed at as close to 30fps as possible. Obviously this is not possible on some devices, but recording a video at 30fps and then processing it after the fact is a possible workaround I'm investigating. I'd rather process the frames in OpenGL for the improved speed than take each frame as a Bitmap from the TextureView output.
In researching this I've come across someone else with pretty much the exact same issue (How to properly save frames from mp4 as png files using ExtractMpegFrames.java?), although they didn't seem to have much luck finding out what might be going wrong.
EDIT - FFmpeg probe outputs for both videos...
Video that works - https://justpaste.it/484ec
Video that fails - https://justpaste.it/55in0