
I'm trying to run some deep learning experiments on Android using video samples, and I've gotten stuck on remuxing videos. I have a couple of questions to help arrange the information in my head :) I have read some pages: https://vec.io/posts/android-hardware-decoding-with-mediacodec and https://bigflake.com/mediacodec/#ExtractMpegFramesTest, but I'm still confused.

My questions:

  1. Can I read a video with MediaExtractor and then pass the data to MediaMuxer to save the video to another file, without using MediaCodec?
  2. If I want to modify frames before saving, can I do that without using a Surface, just by modifying a ByteBuffer? I assume that I need to decode the data from MediaExtractor, then modify the content, then encode it to MediaMuxer.
  3. Is a sample the same thing as a frame in the context of the method MediaExtractor::readSampleData?
  4. Do I need to decode each sample?

1 Answer


This is a brief description of what each class does:

  • MediaExtractor: Extracts encoded video/audio data from a container file (see the sketch after this list).
  • MediaCodec: Depending on how it's configured, it acts as either a decoder or an encoder.
  • MediaMuxer: Muxes streams of encoded data into an output file.
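To make the first point concrete, here is a minimal sketch showing that MediaExtractor only hands you compressed samples (the input path and the 1 MiB buffer size are placeholder assumptions):

```java
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.util.Log;

import java.io.IOException;
import java.nio.ByteBuffer;

public class ExtractorSketch {
    // Walks the encoded samples of the video track and logs their size/timestamp.
    static void dumpVideoSamples(String inputPath) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(inputPath);

        // Find and select the video track.
        int videoTrack = -1;
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime != null && mime.startsWith("video/")) {
                videoTrack = i;
                break;
            }
        }
        extractor.selectTrack(videoTrack);

        // Each readSampleData() call returns one *encoded* sample; for a video
        // track that is one compressed frame (e.g. an H.264 access unit).
        ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
        int size;
        while ((size = extractor.readSampleData(buffer, 0)) >= 0) {
            long ptsUs = extractor.getSampleTime(); // presentation time, microseconds
            Log.d("ExtractorSketch", "sample of " + size + " bytes at " + ptsUs + " us");
            // The bytes in `buffer` are still compressed. To see pixels you must
            // run them through a MediaCodec decoder first.
            extractor.advance();
        }
        extractor.release();
    }
}
```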

This is how your pipeline should generally look:

MediaExtractor -> MediaCodec(As Decoder) -> Your editing -> MediaCodec(As Encoder) -> MediaMuxer
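A rough sketch of wiring that pipeline up, assuming an H.264 ("video/avc") output; the bitrate and frame-rate values are placeholders, and the actual copy loop and error handling are omitted:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.media.MediaMuxer;

import java.io.IOException;

public class PipelineSetup {
    static void setUp(String inPath, String outPath, int videoTrack) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(inPath);
        extractor.selectTrack(videoTrack);
        MediaFormat inputFormat = extractor.getTrackFormat(videoTrack);
        String mime = inputFormat.getString(MediaFormat.KEY_MIME);

        // Decoder: configured with the extractor's track format. Passing null
        // for the Surface means decoded frames land in ByteBuffers.
        MediaCodec decoder = MediaCodec.createDecoderByType(mime);
        decoder.configure(inputFormat, /* surface */ null, null, 0);

        // Encoder: needs its own output format (size, bitrate, color format, ...).
        int width = inputFormat.getInteger(MediaFormat.KEY_WIDTH);
        int height = inputFormat.getInteger(MediaFormat.KEY_HEIGHT);
        MediaFormat encFormat = MediaFormat.createVideoFormat("video/avc", width, height);
        encFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible); // API 21+
        encFormat.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000); // placeholder
        encFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);      // placeholder
        encFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(encFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

        // Muxer: its track is added later with the encoder's *actual* output
        // format (delivered via INFO_OUTPUT_FORMAT_CHANGED), then muxer.start().
        MediaMuxer muxer = new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        decoder.start();
        encoder.start();
        // ... feed extractor -> decoder -> edit -> encoder -> muxer (loop omitted)
    }
}
```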

To answer your questions:

  1. MediaExtractor will give you encoded data; if you want to do anything with it, you will have to decode it using a MediaCodec.
  2. It might be possible to do so without a Surface, but it will be pretty limited. Surfaces are the way to go. You can find more info here: Editing frames and encoding with MediaCodec. (See the ByteBuffer sketch after this list.)
  3. A sample can be a video frame or an audio sample.
  4. Yes, you do need to decode samples to edit them.
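If you do try the ByteBuffer (no-Surface) route from point 2, the decoder half of the loop looks roughly like this. editFrame() is a hypothetical placeholder for your own editing, and the layout of the decoded buffer (YUV variant, stride, padding) is device- and codec-dependent, which is exactly why this path is limited:

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;

import java.nio.ByteBuffer;

public class DecodeEditLoop {
    // Feeds encoded samples from the extractor into the decoder and pulls out
    // raw frames for editing. Encoder feeding and muxing are left as comments.
    static void drainVideoTrack(MediaExtractor extractor, MediaCodec decoder) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        boolean outputDone = false;
        while (!outputDone) {
            if (!inputDone) {
                int inIndex = decoder.dequeueInputBuffer(10_000);
                if (inIndex >= 0) {
                    ByteBuffer inBuf = decoder.getInputBuffer(inIndex); // API 21+
                    int size = extractor.readSampleData(inBuf, 0);
                    if (size < 0) {
                        decoder.queueInputBuffer(inIndex, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        decoder.queueInputBuffer(inIndex, 0, size,
                                extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int outIndex = decoder.dequeueOutputBuffer(info, 10_000);
            if (outIndex >= 0) {
                ByteBuffer rawFrame = decoder.getOutputBuffer(outIndex); // decoded YUV
                editFrame(rawFrame, info);  // <- your modification goes here
                // ...then copy the edited data into an encoder input buffer and
                // later pass the encoder's output to MediaMuxer.writeSampleData().
                decoder.releaseOutputBuffer(outIndex, /* render */ false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    outputDone = true;
                }
            }
        }
    }

    // Hypothetical hook: modify the decoded YUV bytes in place,
    // e.g. walk the buffer between position() and limit() and tweak pixel values.
    static void editFrame(ByteBuffer frame, MediaCodec.BufferInfo info) {
    }
}
```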