
AR (augmented reality) seems to be what all iOS developers are looking at these days. I'm playing with a very classic pet project: rolling dice with textures and, if possible, the camera stream on a die face. I'm facing a few issues with this last part and I have some questions for the experts:

Getting the video stream requires AV Foundation: an AVCaptureVideoDataOutputSampleBufferDelegate to get an image buffer, and then building a UIImage using Quartz functions like CGBitmapContextCreate. This is demonstrated in http://www.benjaminloulier.com/articles/ios4-and-direct-access-to-the-camera and in Apple's AV Foundation Programming Guide (see https://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW30).
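
Here is roughly what I have in mind, sketched in Swift (the linked article's code is Objective-C; the FrameGrabber class name, the BGRA output settings, and the modern delegate signature are just my sketch, not anything from those sources):

```swift
import AVFoundation
import UIKit

// Hypothetical sketch: capture BGRA frames from the camera and convert each one
// to a UIImage with a Core Graphics bitmap context.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let sampleQueue = DispatchQueue(label: "camera.frames")

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: sampleQueue)
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        // Wrap the raw BGRA bytes in a bitmap context, then pull out a CGImage.
        let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                width: CVPixelBufferGetWidth(pixelBuffer),
                                height: CVPixelBufferGetHeight(pixelBuffer),
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue |
                                            CGImageAlphaInfo.premultipliedFirst.rawValue)
        guard let cgImage = context?.makeImage() else { return }
        let frame = UIImage(cgImage: cgImage)
        // ...upload `frame` as an OpenGL ES texture here.
        _ = frame
    }
}
```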

Then I can create a texture like I did for my "still" images and use it with a GLKBaseEffect (or a shader?).
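
For the still images, something along these lines is what I mean (a minimal sketch, assuming an EAGLContext is already current and the vertex data for the die face is already set up; GLKTextureLoader does the upload):

```swift
import GLKit

// Turn a UIImage into a GL texture and attach it to a GLKBaseEffect.
func makeEffect(for image: UIImage) -> GLKBaseEffect? {
    guard let cgImage = image.cgImage,
          let textureInfo = try? GLKTextureLoader.texture(
              with: cgImage,
              options: [GLKTextureLoaderOriginBottomLeft: true as NSNumber])
    else { return nil }

    let effect = GLKBaseEffect()
    effect.texture2d0.name = textureInfo.name   // GL texture name from the loader
    effect.texture2d0.enabled = GLboolean(GL_TRUE)
    return effect
}

// Per-frame draw, e.g. inside glkView(_:drawIn:):
//   effect.transform.modelviewMatrix = currentDieTransform
//   effect.prepareToDraw()
//   glDrawArrays(GLenum(GL_TRIANGLES), 0, vertexCount)
```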

Question 1: GLKBaseEffect seems really nice and simple, but should I aim for OpenGL ES 2.0 and shaders?

Now the RosyWriter demo from Apple uses the Core Video CVOpenGLESTextureCacheCreateTextureFromImage function to bind the texture directly, without creating an intermediary UIImage. This is, as stated in the demo description, new in iOS 5.

Question 2: Is this a better way to map the texture?

Question 3: There are a few interesting frameworks, like GPUImage, or even 3D gaming frameworks, that could also be used. Does anyone have feedback on using these? The Apple-provided frameworks seem quite complete to me so far.

Thanks a lot!

lazi74

1 Answer


In response to your various questions:

GLKBaseEffect seems really nice and simple, but should I aim for OpenGL ES 2.0 and shaders?

GLKBaseEffect is just a wrapper around some simple OpenGL ES 2.0 shaders, so it uses 2.0 at its heart. While GLKit provides some nice conveniences, eventually you will need to create effects that it cannot give you by default, so you'll most likely need to learn how to do your own vertex and fragment shaders at some point. I list some resources for learning these in this answer.
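
To give a sense of what that involves, the kind of minimal textured pass-through pair you would start from looks roughly like this (a sketch only; the attribute and uniform names here are arbitrary choices of mine, not anything GLKit generates internally):

```swift
// OpenGL ES 2.0 GLSL source, held as Swift string literals and compiled at
// runtime with glCreateShader / glShaderSource / glCompileShader.
let vertexShaderSource = """
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
varying vec2 textureCoordinate;
uniform mat4 modelViewProjectionMatrix;

void main()
{
    gl_Position = modelViewProjectionMatrix * position;
    textureCoordinate = inputTextureCoordinate;
}
"""

let fragmentShaderSource = """
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

void main()
{
    // Sample the video (or die-face) texture at the interpolated coordinate.
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
"""
```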

Now the RosyWriter demo from Apple uses the Core Video CVOpenGLESTextureCacheCreateTextureFromImage function to bind the texture directly, without creating an intermediary UIImage. This is, as stated in the demo description, new in iOS 5.

Question 2: Is this a better way to map the texture?

Yes, on iOS 5.0, using the texture caches to upload video frames can lead to some solid performance improvements. For an iPhone 4S, I saw frame upload times drop from 9.3 ms for a 640x480 frame to 1.8 ms when using the texture caches. There are even larger benefits to be had when reading from the texture caches to encode video.
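
The basic usage looks roughly like this (a Swift sketch of the approach RosyWriter demonstrates in Objective-C; error handling, context setup, and the draw call are omitted, and the helper names are mine):

```swift
import CoreVideo
import OpenGLES

// The cache is created once against your EAGLContext; each BGRA CVPixelBuffer
// from the capture delegate is then wrapped as a GL texture directly, with no
// intermediate UIImage and no glTexImage2D copy.
var textureCache: CVOpenGLESTextureCache?

func makeTextureCache(context: EAGLContext) {
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &textureCache)
}

func texture(from pixelBuffer: CVPixelBuffer) -> CVOpenGLESTexture? {
    guard let cache = textureCache else { return nil }
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    var cvTexture: CVOpenGLESTexture?
    let result = CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, nil,
        GLenum(GL_TEXTURE_2D), GL_RGBA,
        GLsizei(width), GLsizei(height),
        GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE), 0, &cvTexture)
    guard result == kCVReturnSuccess, let texture = cvTexture else { return nil }

    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture))
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)

    // Keep the returned CVOpenGLESTexture alive until you've drawn with it,
    // then call CVOpenGLESTextureCacheFlush(cache, 0) before the next frame.
    return texture
}
```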

Question 3: There are a few interesting frameworks, like GPUImage, or even 3D gaming frameworks, that could also be used. Does anyone have feedback on using these? The Apple-provided frameworks seem quite complete to me so far.

I would recommend using GPUImage for this, but I'm a little biased, seeing as how I wrote it. As a more specific point, I do almost exactly what you describe (reading video frames and mapping them to the sides of a rotating cube) in the CubeExample within the sample code for that framework.

That example is a little more complex, in that I take live video, run it through a sepia tone filter, read that in as a texture, display it on a 3-D cube you can rotate with your fingers, and then take the rendered view of the cube and run that through a pixellation filter. However, you can extract just the portions you need from this example. By doing this, you'll save a lot of code, because I take care of the video capture and uploading to OpenGL ES for you.
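
Hooking up a basic camera-to-filter chain with GPUImage looks roughly like this (a Swift sketch against the Objective-C API, so exact bridged names may differ slightly; the cube rendering from the CubeExample is only indicated in comments):

```swift
import GPUImage

// camera -> sepia filter -> on-screen view. In the CubeExample the sepia output
// instead feeds a texture on the rotating cube, and the cube's rendered output
// goes through a pixellation filter before display.
let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                 cameraPosition: .back)
camera?.outputImageOrientation = .portrait

let sepiaFilter = GPUImageSepiaFilter()
let previewView = GPUImageView(frame: UIScreen.main.bounds)

camera?.addTarget(sepiaFilter)
sepiaFilter.addTarget(previewView)
camera?.startCameraCapture()
```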

Brad Larson