AR (augmented reality) seems to be what all iOS developers are looking at these days. I'm playing with a very classic pet project: rolling dice, with textures and, if possible, the camera stream on one of the die faces. I'm facing a few issues with this last part and have some questions for the experts:
Getting the video stream requires AV Foundation: an AVCaptureVideoDataOutputSampleBufferDelegate to get at the image buffer, and then building a UIImage with Quartz functions like CGBitmapContextCreate. This is demonstrated in http://www.benjaminloulier.com/articles/ios4-and-direct-access-to-the-camera and in Apple's AV Foundation Programming Guide (see https://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW30).
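For context, here is roughly what that delegate path looks like for me (a minimal sketch, assuming the capture output is configured for kCVPixelFormatType_32BGRA; the class name and the hand-off at the end are mine):

```objc
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

@interface CameraFrameSource : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@end

@implementation CameraFrameSource

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Grab the pixel buffer for this frame and lock it so we can read its base address.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void  *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width       = CVPixelBufferGetWidth(imageBuffer);
    size_t height      = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the raw BGRA bytes in a Quartz bitmap context, then make a CGImage/UIImage from it.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *frame = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // ...hand `frame` over to the GL side (e.g. dispatch to the main queue)...
    (void)frame;
}

@end
```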
Then I can create a texture as I did for my "still" images and use it with a GLKBaseEffect (or a shader?).
Question 1: GLKBaseEffect seems really nice and simple, but should I aim for OpenGL ES 2.0 and shaders instead?
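For reference, the GLKBaseEffect path I have in mind looks roughly like this (a sketch only; the method name is mine, and re-uploading a texture per frame this way is not cheap):

```objc
#import <GLKit/GLKit.h>
#import <OpenGLES/ES2/gl.h>

// Upload one camera frame (as a UIImage) and attach it to the base effect,
// just as for a static texture. Must run with the GL context current.
- (void)applyCameraFrame:(UIImage *)frame toEffect:(GLKBaseEffect *)effect
{
    // Delete the texture from the previous frame, otherwise we leak one per frame.
    if (effect.texture2d0.name != 0) {
        GLuint oldName = effect.texture2d0.name;
        glDeleteTextures(1, &oldName);
    }

    NSError *error = nil;
    GLKTextureInfo *texture = [GLKTextureLoader textureWithCGImage:frame.CGImage
                                                           options:nil
                                                             error:&error];
    if (!texture) {
        NSLog(@"Texture upload failed: %@", error);
        return;
    }

    effect.texture2d0.name    = texture.name;
    effect.texture2d0.target  = GLKTextureTarget2D;
    effect.texture2d0.enabled = GL_TRUE;
}
```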
Now, the RosyWriter demo from Apple uses the Core Video function CVOpenGLESTextureCacheCreateTextureFromImage to bind the texture directly, without creating an intermediate UIImage. This is, as stated in the demo description, new in iOS 5.
Question 2: Is this a better way to map the texture?
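For completeness, my understanding of the iOS 5 texture-cache path from RosyWriter boils down to something like this (a sketch with names of my own, again assuming a 32BGRA capture format; the cache is created once against the EAGLContext, then each pixel buffer is mapped straight to a GL texture):

```objc
#import <CoreVideo/CoreVideo.h>
#import <CoreVideo/CVOpenGLESTextureCache.h>
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

static CVOpenGLESTextureCacheRef gTextureCache = NULL;

// One-time setup, once the EAGLContext exists
// (older SDKs want a (__bridge void *) cast on the context here).
static void SetUpTextureCache(EAGLContext *context)
{
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                 (CVEAGLContext)context, NULL, &gTextureCache);
}

// Per frame: wrap the captured pixel buffer directly in a GL texture, no UIImage.
static CVOpenGLESTextureRef TextureForPixelBuffer(CVImageBufferRef pixelBuffer)
{
    CVOpenGLESTextureRef texture = NULL;
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, gTextureCache, pixelBuffer,
        NULL,                                   // texture attributes
        GL_TEXTURE_2D,
        GL_RGBA,                                // internal format
        (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
        (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
        GL_BGRA,                                // source format (32BGRA capture)
        GL_UNSIGNED_BYTE,
        0,                                      // plane index
        &texture);
    if (err != kCVReturnSuccess) return NULL;

    // Bind it like any other texture before drawing the die face.
    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // The caller must CFRelease() the texture and call
    // CVOpenGLESTextureCacheFlush(gTextureCache, 0) once the frame has been rendered.
    return texture;
}
```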
Question 3: There are a few interesting frameworks, like GPUImage, or even 3D gaming frameworks, that could be used too. Does anyone have feedback on using these? The Apple-provided frameworks seem quite complete to me so far.
Thanks a lot!