
I need to decode a video into a sequence of bitmaps so that I can modify them, and then compress them back into a video file on Android.

I plan to manage this by using getFrameAtTime and saving each frame to an image sequence. Then I can modify the images in the sequence and encode them back into a movie (a rough sketch of this plan follows the list below). But I have two problems with this:

  • First, as I read it, getFrameAtTime is meant for creating thumbnails and does not guarantee returning the exact frame requested. This makes the resulting video laggy.
  • Secondly, saving the images and reading them back takes a long time.
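To make the plan concrete, it looks roughly like this (the path and frame rate are invented for the example; error handling omitted):

import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;

public class FrameGrabber {
    // Walk the clip at a fixed interval and pull one Bitmap per step.
    public static void grabFrames(String path) {
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        retriever.setDataSource(path);
        long durationUs = Long.parseLong(retriever.extractMetadata(
                MediaMetadataRetriever.METADATA_KEY_DURATION)) * 1000L;
        long stepUs = 1000000L / 30;  // assumes ~30 fps
        for (long t = 0; t < durationUs; t += stepUs) {
            // getFrameAtTime only returns a frame *near* t, so the same
            // frame can come back twice -- hence the laggy result.
            Bitmap frame = retriever.getFrameAtTime(
                    t, MediaMetadataRetriever.OPTION_CLOSEST);
            // ... modify the bitmap and write it to the image sequence ...
        }
        retriever.release();
    }
}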

I read that the proper way to decode is with MediaExtractor. That is fine, but the only examples I have render directly to a SurfaceView. Is there any way for me to convert the output buffer to a Bitmap?
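For reference, the shape of the examples I have (simplified; the actual input/output buffer loop is omitted):

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.view.Surface;

public class SurfaceDecoder {
    // Select the video track and wire the decoder straight to a Surface.
    public static void decodeToSurface(String path, Surface surface) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);
        MediaFormat format = null;
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat f = extractor.getTrackFormat(i);
            if (f.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
                extractor.selectTrack(i);
                format = f;
                break;
            }
        }
        MediaCodec decoder = MediaCodec.createDecoderByType(
                format.getString(MediaFormat.KEY_MIME));
        // Passing a Surface here means decoded frames go to the screen;
        // there is no ByteBuffer output I could turn into a Bitmap.
        decoder.configure(format, surface, null, 0);
        decoder.start();
        // ... usual dequeueInputBuffer / dequeueOutputBuffer loop here ...
        decoder.stop();
        decoder.release();
        extractor.release();
    }
}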

I would need to get this working with an API level of 16 and above.

G.T.

2 Answers


You can find a collection of useful examples on the bigflake site.

In particular, the ExtractMpegFramesTest demonstrates how to decode a .mp4 file to Bitmap, and the DecodeEditEncodeTest decodes and re-encodes an H.264 stream, modifying the frames with a GLES shader.
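The Bitmap conversion at the heart of ExtractMpegFramesTest, boiled down (the EGL and SurfaceTexture plumbing the test sets up is omitted here):

import android.graphics.Bitmap;
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class FrameReader {
    // After the decoder has rendered a frame into the off-screen GLES
    // surface, read the pixels back and wrap them in a Bitmap.
    public static Bitmap readFrame(int width, int height) {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
        buf.rewind();
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);
        return bmp;  // note: upside-down relative to GL coordinates
    }
}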

Many of the examples use features introduced in API 18, such as Surface input to MediaCodec (which avoids a number of color-format issues), and MediaMuxer (which allows you to convert the raw H.264 elementary stream coming out of MediaCodec into a .mp4 file). Some devices will allow you to extract video to YUV data in ByteBuffer, modify it, and re-encode it, but other devices extract to proprietary YUV color formats that may be rejected by the API 16 version of MediaCodec.
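To illustrate the ByteBuffer route and its pitfall, a minimal sketch of draining one step of decoder output and checking the reported color format (decoder setup omitted; this helper is illustrative, not taken from the tests):

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.nio.ByteBuffer;

public class ByteBufferDrain {
    // outputBuffers comes from decoder.getOutputBuffers() (pre-API-21 style).
    public static void drainOnce(MediaCodec decoder, ByteBuffer[] outputBuffers) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int index = decoder.dequeueOutputBuffer(info, 10000);
        if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            int colorFormat = decoder.getOutputFormat()
                    .getInteger(MediaFormat.KEY_COLOR_FORMAT);
            boolean convertible =
                    colorFormat == MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar
                    || colorFormat == MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar;
            // If convertible is false, this device hands back a proprietary
            // YUV layout that generic conversion code cannot handle.
        } else if (index >= 0) {
            ByteBuffer frame = outputBuffers[index];  // raw YUV for this frame
            // ... convert/modify the YUV data here ...
            decoder.releaseOutputBuffer(index, false);  // false = don't render
        }
    }
}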

I'd recommend coding for API 18 (Android 4.3 "Jelly Bean" MR2) or later.
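If you can go to API 18+, the MediaMuxer side is small. A minimal sketch; the format and buffer arguments here stand in for whatever your encoder actually produces:

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.io.IOException;
import java.nio.ByteBuffer;

public class MuxerSketch {
    // Wrap encoded H.264 samples in a .mp4 container (API 18+).
    public static void writeMp4(MediaFormat encoderOutputFormat,
            ByteBuffer encodedFrame, MediaCodec.BufferInfo info) throws IOException {
        MediaMuxer muxer = new MediaMuxer("/sdcard/out.mp4",  // example path
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        // The track format must be the one the encoder reports via
        // INFO_OUTPUT_FORMAT_CHANGED, not the format you configured it with.
        int track = muxer.addTrack(encoderOutputFormat);
        muxer.start();
        muxer.writeSampleData(track, encodedFrame, info);  // call per sample
        muxer.stop();
        muxer.release();
    }
}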

fadden
  • Well, the problem is that I have already seen this site, but unfortunately I must do it for API level 16. And even though it says ExtractMpegFramesTest.java requires API 16, when I try to build it, it requires at least 17... So it's not an option. – G.T. Dec 10 '13 at 09:00
  • You've got an uphill battle then. Does it need to work generally or is this just being built for a specific device? (I'll check into ExtractMpegFramesTest -- I didn't think it needed anything from 17+.) – fadden Dec 10 '13 at 15:59
  • Well, that's the problem: it needs to be generic and API level 16+... I got the getFrameAtTime solution working without saving to PNG, which sped it up several times over, but I may still hit the issue that it extracts some frames twice, which would make the video laggy, and I still have no idea how I could add audio to it. So any suggestion for splitting the video into bitmaps and creating a new video file with audio would be appreciated, but it needs to work with 16+ – G.T. Dec 11 '13 at 08:04
  • The ExtractMpegFramesTest example has been modified to work with API 16 (needed EGL 1.0 rather than EGL 1.4 -- cf. http://stackoverflow.com/questions/20495863/mediacodec-extractmpegframestest-example-mismatch/ ). There is no facility for creating a .mp4 file (either video only or video+audio) until API 18 introduced the `MediaMuxer` class, so if you need that you may want to investigate external libraries (perhaps ffmpeg). – fadden Dec 11 '13 at 15:33
  • @fadden hi, I'm trying to get the ExtractMpegFramesTest example to work, but the SurfaceTexture onFrameAvailable() call is never made, or made too late. I added a question here - http://stackoverflow.com/questions/22457623/surfacetextures-onframeavailable-method-always-called-too-late – manixrock Mar 17 '14 at 16:00
  • In the end I made it with ffmpeg, works like a charm. – G.T. Feb 02 '15 at 14:05

Many people say that the method onFrameAvailable() is never called. The listener should run on a different thread than the main thread. To set the listener, do this (where this is the class that implements SurfaceTexture.OnFrameAvailableListener):

mSurfaceTexture.setOnFrameAvailableListener(this);
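A slightly fuller sketch of the same idea (names are illustrative): create the SurfaceTexture on its own Looper thread, so the callback can be delivered even while another thread blocks waiting for a frame.

import android.graphics.SurfaceTexture;
import android.os.Handler;
import android.os.HandlerThread;

public class FrameListenerThread {
    private SurfaceTexture surfaceTexture;

    public void start(final int textureId,
            final SurfaceTexture.OnFrameAvailableListener listener) {
        HandlerThread thread = new HandlerThread("frame-callbacks");
        thread.start();
        new Handler(thread.getLooper()).post(new Runnable() {
            @Override
            public void run() {
                // Pre-API-21, SurfaceTexture delivers onFrameAvailable()
                // via the Looper of the thread it was created on, so
                // creating it here keeps callbacks off the main thread.
                surfaceTexture = new SurfaceTexture(textureId);
                surfaceTexture.setOnFrameAvailableListener(listener);
            }
        });
    }
}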
Gabriel Bursztyn