
I am a newbie to iOS, but I have implemented an FFmpeg-based playback routine on Android. I am planning to do it again on iOS.

It appears that I can use OpenGL ES 1.1 / 2.0 to draw frames from FFmpeg video to the screen. However, OpenGL ES seems difficult. Additionally, I ran into some limits due to the width of the video texture. I had to split the video frame into many images and draw them to the screen to compose the frame.

Is there an easier way to render this video using OpenGL ES on iOS? Is there any other way to draw 2-D video frames to the screen quickly in iOS?

Brad Larson
qrtt1
  • What do you mean by continuously? And, why do you "split the whole video image to many texture and draw them as 'one' ", as opposed to drawing the entire image once? – appas Dec 19 '11 at 15:10
  • Let me explain. I said 'continuously' because playback has to update the video image frequently, usually up to 30 fps (some videos are higher). OpenGL ES gives the best performance, but it's a little hard for us to use. I want to compare several solutions for our team. Maybe by 'continuously' I meant "the performance is acceptable". – qrtt1 Dec 19 '11 at 15:16
  • Texture dimensions should be powers of two. A 640x480 video, for example, is not power-of-two, so I have to split it across textures of different sizes. – qrtt1 Dec 19 '11 at 17:07
  • Ok, I've added an answer. If it helps, please edit your question to better reflect the issue you were having. – appas Dec 20 '11 at 13:34
  • 3
    All modern iOS devices do support non-power-of-two textures, even in OpenGL ES 1.1 (2.0 has NPOT support in the default specification): http://stackoverflow.com/a/4761453/19679 Only the models older than the iPhone 3G S (iPhone, iPhone 3G, and 1st and 2nd generation iPod touch) don't support this extension, and they're a tiny fraction of the iOS devices out there today. – Brad Larson Dec 20 '11 at 20:42
  • @BradLarson Thanks for the information, I will try it as soon as possible. – qrtt1 Dec 21 '11 at 07:12

1 Answer


Ah. So you want to render a non-POT source. This can be done without splitting into multiple textures: create the smallest POT-sized texture that fits the frame, render into that, and blit only the region that actually contains the image. Have a look here for an example (C++). The relevant parts:

// Calculate the smallest power-of-two size that fits the frame,
// clamped to the hardware's maximum texture size (maxSize)
double exp = ceil(log((double)max(Texture.HardwareHeight, Texture.HardwareWidth)) / log(2.0));
texsize = min(pow(2.0, exp), (double)maxSize);

then

// Draw the original frame 1:1 into the (larger) texture.
// Clamping to 1.0 handles hardware whose maximum texture size
// is smaller than the current frame size.
float ymax = min(2.0f * ((float)Texture.HardwareHeight / (float)texsize) - 1.0f, 1.0f);
float xmax = min(2.0f * ((float)Texture.HardwareWidth  / (float)texsize) - 1.0f, 1.0f);

Use these maximum values instead of 1.0 as texture coordinates when rendering.

appas