
I'm doing some video processing on Android using the NDK.

What I'm doing now is: decode the video using ffmpeg, grab each frame, convert it to an OpenCV Mat, overlay some images on it, convert it back to an AVFrame, and encode it back into a video.

But it's very slow! It takes about 50 seconds to process a 480*480 video at 15 fps.

I tried to do this using OpenGL ES, reading the result image back with glReadPixels, but from what I understand glReadPixels doesn't really work on some devices. So I had to give it up.

As I understand it, my current workflow has some very expensive operations:

  1. convert an AVFrame from YUV to BGR color space, then convert it to an OpenCV Mat
  2. overlay one Mat on another Mat
  3. convert an OpenCV Mat back to an AVFrame, convert the frame from BGR to YUV color space, then encode it into a video.
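To see why steps 1 and 3 dominate: the YUV↔BGR conversion that sws_scale performs is per-pixel arithmetic over the whole frame, i.e. 480×480 pixels for every one of 15 frames per second. A minimal sketch of the BT.601 full-range math for a single pixel (illustrative only, not the actual libswscale code):

```cpp
#include <algorithm>
#include <cassert>

// Clamp an int to the 0..255 range of an 8-bit channel.
static unsigned char clamp8(int v) { return (unsigned char)std::min(255, std::max(0, v)); }

// One pixel of BT.601 full-range YUV -> BGR: the per-pixel work that
// sws_scale has to repeat for every pixel of every frame, twice per
// frame in this workflow (YUV->BGR and back).
void yuv_to_bgr(int y, int u, int v,
                unsigned char* b, unsigned char* g, unsigned char* r) {
    int d = u - 128, e = v - 128;
    *r = clamp8(y + (int)(1.402    * e));
    *g = clamp8(y - (int)(0.344136 * d + 0.714136 * e));
    *b = clamp8(y + (int)(1.772    * d));
}
```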

So, are there ways to improve my current workflow?

I'm adding multithreading, but only devices with multi-core CPUs can benefit from that.

Zhiqiang Li
  • The ultimate improvement of your workflow would be one that uses the Media Acceleration hardware present on many Android devices. – BlamKiwi Nov 26 '14 at 09:45
  • I'm not very familiar with media acceleration on android platform, do you mean opengl es? – Zhiqiang Li Nov 26 '14 at 09:52
  • Where does OpenGL come into play here? Are you grabbing your 'additional Mats' from some surface? Can't you do the overlay in YUV space and save the conversions? – berak Nov 26 '14 at 09:56
  • @ZhiqiangLi No, I mean things like OpenMAX and Stagefright. – BlamKiwi Nov 26 '14 at 10:00
  • @berak Oh, what I meant by using OpenGL was using OpenGL with an off-screen frame buffer to process the AVFrame I grab with ffmpeg instead of OpenCV, then reading the result with glReadPixels. I'm overlaying some PNG images with alpha on the Mat, and I don't know how to do that in YUV color space. Could you give me a hint? – Zhiqiang Li Nov 26 '14 at 10:02
  • @MorphingDragon I didn't know about Stagefright before, I'll take a look at that. – Zhiqiang Li Nov 26 '14 at 10:16
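Following up on berak's comment: alpha compositing is linear, so a PNG overlay can be converted to YUV once up front and then blended plane by plane, skipping the per-frame YUV↔BGR round trip entirely. A hypothetical sketch of the per-plane blend (it assumes planar 4:4:4 for simplicity; with 4:2:0 data the alpha mask would have to be downsampled for the chroma planes):

```cpp
#include <cassert>
#include <cstddef>

// Blend an overlay plane onto a frame plane in place using an 8-bit
// alpha mask: out = (a*overlay + (255-a)*frame) / 255, rounded.
// The same function works for the Y, U and V planes.
void blend_plane(unsigned char* frame, const unsigned char* overlay,
                 const unsigned char* alpha, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        int a = alpha[i];
        frame[i] = (unsigned char)((a * overlay[i] + (255 - a) * frame[i] + 127) / 255);
    }
}
```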

3 Answers


OpenGL ES is not designed to process video directly. You need to use some EGL extensions and shader code to convert frames from YUV to RGB color space; that provides the biggest performance gain. You also must avoid glTexImage2D() and glReadPixels(), which force slow synchronous transfers. This answer has links to articles that show how to use OpenGL ES for video.
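For reference, one common route is the GL_OES_EGL_image_external extension, where the decoded frame is wrapped in an EGLImage and the driver performs the YUV→RGB conversion when the texture is sampled (the alternative is writing the conversion math in the shader yourself). A minimal fragment shader for the external-sampler path, as a sketch (not taken from the linked articles):

```glsl
#extension GL_OES_EGL_image_external : require
precision mediump float;

// Bound to an EGLImage wrapping the decoded YUV frame; the driver
// converts YUV -> RGB when the texture is sampled.
uniform samplerExternalOES uTexture;
varying vec2 vTexCoord;

void main() {
    gl_FragColor = texture2D(uTexture, vTexCoord);
}
```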

ClayMontgomery

You can try the native media API, introduced in NDK r10. See the example.
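For illustration, decoding with that API looks roughly like this (a sketch against the NdkMedia headers from NDK r10; error handling omitted, and the function name is mine):

```cpp
#include <cstring>
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaExtractor.h>

// Sketch: open a file, find the first video track, and start a
// (typically hardware-backed) decoder for it.
AMediaCodec* open_video_decoder(const char* path, AMediaExtractor** out_ex) {
    AMediaExtractor* ex = AMediaExtractor_new();
    AMediaExtractor_setDataSource(ex, path);
    for (size_t i = 0; i < AMediaExtractor_getTrackCount(ex); ++i) {
        AMediaFormat* format = AMediaExtractor_getTrackFormat(ex, i);
        const char* mime = nullptr;
        if (AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime) &&
            strncmp(mime, "video/", 6) == 0) {
            AMediaExtractor_selectTrack(ex, i);
            AMediaCodec* codec = AMediaCodec_createDecoderByType(mime);
            // Pass a surface instead of nullptr to decode straight to a window.
            AMediaCodec_configure(codec, format, nullptr, nullptr, 0);
            AMediaCodec_start(codec);
            *out_ex = ex;
            return codec;
        }
        AMediaFormat_delete(format);
    }
    AMediaExtractor_delete(ex);
    return nullptr;
}
```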

Alex Cohn

I think you are running into the same problem as me.

For now, I think one solution is to add a buffer: process some of the frames up front, then start displaying while they play. One thread keeps converting decoded frames to Mats and pushing them into the buffer, while another thread processes the Mats from the buffer with OpenCV. With a buffer, you do not need to wait for the whole video to be processed.
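A minimal sketch of such a bounded frame buffer, using only the C++11 standard library (the Frame type here stands in for whatever holds your decoded Mat):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Stand-in for a decoded frame / cv::Mat payload.
using Frame = std::vector<unsigned char>;

// Bounded queue: the decoder thread pushes, the OpenCV thread pops.
// push() blocks when full, pop() blocks when empty.
class FrameQueue {
public:
    explicit FrameQueue(std::size_t cap) : cap_(cap) {}

    void push(Frame f) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(std::move(f));
        not_empty_.notify_one();
    }

    Frame pop() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty(); });
        Frame f = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();
        return f;
    }

private:
    std::size_t cap_;
    std::queue<Frame> q_;
    std::mutex m_;
    std::condition_variable not_empty_, not_full_;
};
```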

Let me know if it works for you, but I agree with you that only devices with multiple cores will benefit.

See Android process video frame from Bitmap to Mat has a significant delay

holopekochan