I'm doing some video processing on Android using the NDK.
What I'm doing now is decode the video with FFmpeg, grab each frame, convert it to an OpenCV Mat, overlay some images on it, then convert it back to an AVFrame and encode it back into a video.
But it's very slow! It takes about 50 seconds to process a 480×480, 15 fps video.
I tried doing this with OpenGL ES and reading the result image back with glReadPixels, but from what I understand glReadPixels doesn't really work reliably on some devices, so I had to give that up.
As I understand it, my current workflow has some very expensive operations:
- convert an AVFrame from YUV to BGR color space, then convert it to an OpenCV Mat
- overlay one Mat on another Mat
- convert the OpenCV Mat back to an AVFrame, convert the frame from BGR to YUV color space, then encode it into the video
So, are there ways to improve my current workflow?
I'm adding multithreading, but only devices with multicore CPUs can benefit from that.