
Actual Question

Any of several answers would solve my problem:

  1. Can I force a CGImage to reload its data from a direct data provider (created with CGDataProviderCreateDirect) the way CGContextDrawImage does? Or is there some other way I can get assigning to self.layer.contents to do it?
  2. Is there a CGContext configuration or trick I can use to render 1024x768 images consistently at a minimum of 30 fps with CGContextDrawImage?
  3. Has anyone been able to successfully use CVOpenGLESTextureCacheCreateTextureFromImage for realtime buffer updates with their own texture data? I think my biggest problem is creating the CVImageBuffer, since I copied the other texture properties from Apple's documentation. If anyone has any more information on this, that would be awesome.
  4. Any other guidelines on how I can get an image from memory onto the screen at 30 fps?

Background (lots):

I am working on a project where I need to modify the pixels of NPOT image data in realtime (minimum of 30 fps) and draw that on the screen in iOS.

My first thought was to use OpenGL with glTexSubImage2D to update the texture. Unfortunately that ended up being really slow (6 fps on iPad) because the driver swizzles and converts my RGB data to BGR every frame. So send it in BGR, you say, and so do I, but for some reason you cannot call glTexSubImage2D with GL_BGR, go figure. I know some of the slowness comes from the image data being non power of 2, but my requirements dictate that.
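
For reference, this is roughly the upload path I was using; a minimal sketch, with hypothetical names (texture, width, height, pixels) standing in for my own state:

    #import <OpenGLES/ES2/gl.h>

    // Rough sketch of the per-frame upload that measured about 6 fps on iPad 1.
    static void UploadFrame(GLuint texture, GLsizei width, GLsizei height,
                            const uint8_t *pixels)
    {
        glBindTexture(GL_TEXTURE_2D, texture);
        // There is no GL_BGR external format in OpenGL ES 2.0, so the driver
        // converts the RGB source data to its internal layout on every upload.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGB, GL_UNSIGNED_BYTE, pixels);
    }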

More reading led me to CVOpenGLESTextureCacheCreateTextureFromImage, but all the examples use direct camera input to obtain a CVImageBufferRef. I tried using the documentation (nothing official yet, just the header comments) to make my own CVImageBuffer from my image data, but it would not work (no errors, just an empty texture in the debugger), which makes me think Apple built this specifically to process realtime camera data and it has not been tested much outside that area, but I don't know.

Anyway, after giving up my dignity by dumping OpenGL and switching my thoughts to CoreGraphics, I was led to this question fastest way to draw a screen buffer on the iphone which recommends using a CGImage backed by CGDataProviderCreateDirect, which lets you return a pointer to the image data whenever the CGImage needs it. Awesome, right? Well, it doesn't seem to quite work as advertised. If I use CGContextDrawImage then everything works: I can modify the pixel buffer, and on every draw it requests the image data from my data provider like it should, calling the methods in CGDataProviderDirectCallbacks (note: they seem to have a built-in optimization that ignores the updated pointer if it has the same address as the previous one). But CGContextDrawImage is not super fast (about 18 fps) even with interpolation disabled, which brought it up from roughly 6 fps.

Apple's docs tell me that using self.layer.contents will be much faster than CGContextDrawImage. Using self.layer.contents works for the first assignment, but the CGImage never requests a reload from the data provider the way CGContextDrawImage does, even when I call [layer setNeedsDisplay]. In the SO question I referenced, the user shows his solution to the problem by creating and destroying a new CGImage from the data source every frame, a hopelessly slow process (yes, I did try it), so time for the real question.
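
For context, here is roughly how I set up the direct provider; a minimal sketch with hypothetical names, assuming 32-bit RGBX pixels for illustration, not my exact code:

    #import <CoreGraphics/CoreGraphics.h>

    // The provider hands back a pointer to the live pixel buffer whenever
    // CoreGraphics asks for the image data.
    static const void *GetBytePointer(void *info)
    {
        return info;
    }

    static void ReleaseBytePointer(void *info, const void *pointer)
    {
        // The pixel buffer outlives the provider, so nothing to free here.
    }

    // Creates a CGImage whose data is pulled from `pixels` on demand.
    CGImageRef CreateDirectImage(void *pixels, size_t width, size_t height)
    {
        size_t bytesPerRow = width * 4; // assuming 4 bytes per pixel
        CGDataProviderDirectCallbacks callbacks = {
            .version = 0,
            .getBytePointer = GetBytePointer,
            .releaseBytePointer = ReleaseBytePointer,
            .getBytesAtPosition = NULL,
            .releaseInfo = NULL
        };
        CGDataProviderRef provider =
            CGDataProviderCreateDirect(pixels, height * bytesPerRow, &callbacks);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow,
                                         colorSpace,
                                         kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
                                         provider, NULL, false,
                                         kCGRenderingIntentDefault);
        CGColorSpaceRelease(colorSpace);
        CGDataProviderRelease(provider);
        return image;
    }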

Note: I have profiled all of these operations and know the problem really is glTexSubImage2D on the OpenGL side and CGContextDrawImage on the CoreGraphics side, so no "go profile it" answers.

EDIT: Source code demonstrating this technique can now be found at http://github.com/narpas/image-sequence-streaming

Justin Meiners
  • Regarding 1), no, CGImageRefs are immutable. You can of course create a new CGImageRef using the modified provider. In 2) what is the fps you get now? Also, don't you need to concern yourself with retina too - that will be 2x the size... In the past I created a "movie" using the AV framework and still images (screen shots), and I think I got 20-30 fps on the Mac. The idea there is to make a compressed video object that will most likely get hardware assist when playing. Trying to blit the number of pixels you want will be problematic, but I have no idea on the absolute performance boundaries – David H Aug 30 '12 at 11:49
  • What's the source of these images? I have code here for using the texture caches with movie sources: http://stackoverflow.com/a/10656390/19679 , if that's what you're looking to do here. I've also done this with raw input bytes, so the texture caches will work for those, but I can't seem to find my code for that right now. I do remember that the caches don't gain you much for uploading a single image, but they're a win for lots of updated frames in a row. – Brad Larson Aug 30 '12 at 15:29
  • @DavidH Like I mentioned, though, with CGContextDrawImage the CGImage is updated, so internally there must be some way to reload it; it's only with self.layer.contents that I can't get it to reload. – Justin Meiners Aug 30 '12 at 15:59
  • @BradLarson In your code the AVAsset stuff is creating the CVImageBufferRef for you. I need to create it manually with CVPixelBufferCreate or something similar, which is I think where my error is. My data is just a raw RGB pixel array that OpenGL would normally use, so somehow I need to get that into a CVImageBufferRef that the cache likes. I will definitely try out some of the other things I saw different in your code though. – Justin Meiners Aug 30 '12 at 16:02
  • @DavidH 2. 18 fps with 1024x768 images using CGContextDrawImage, mentioned in the background section – Justin Meiners Aug 30 '12 at 16:05
  • I just read all the comments in your link above ('fastest way...') - there is a comment towards the end about using TWO CALayers, each backed by an image, and essentially you supply one, while that is showing switch the backing image of the other, switch, etc. You cannot update the data provider and get the same CGImageRef that you already used to "redraw" itself - the CALayer has probably already cached its data. If you switch images it should refetch the data from the new image. Perhaps if you unset the CGImageRef, then set it again, you could force the CALayer to reload the one image. – David H Aug 30 '12 at 16:07
  • @DavidH Yeah, I thought that as well. I tried that with two CGImageRef buffers swapping every frame, but no luck... It totally is a cache thing, because if I create the CGImage with the data provider, then modify the pixels, then assign self.layer.contents, the image will be of the modified buffer; I just can't ever get it to update after assignment. – Justin Meiners Aug 30 '12 at 16:10
  • For the texture caches, what you'll want is something like is described here: http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/ only rather than binding that texture as an FBO output, simply use it as your texture. You can then modify the bytes of the CVPixelBufferRef directly and simply re-render your scene without having to rely on glTexSubimage2D. One caution, though, is that the internal byte format of a texture is BGRA, so you'll still need a swizzling operation in your shader as a first step. This is really fast, though. – Brad Larson Aug 30 '12 at 16:32
  • @BradLarson Thanks, I'll take a look. I'm not concerned about the BGRA thing. – Justin Meiners Aug 30 '12 at 20:50
  • @BradLarson That worked, and I was able to get the CoreVideo OpenGL stuff working. Unfortunately it looks like it doesn't really help: I'm getting the same frame rates, and the profile shows that it simply calls glTexSubImage2D, which does not help me at all. Any info on this? I had to call CVOpenGLESTextureCacheCreateTextureFromImage every frame; modifying the pixel buffer does not update the texture. – Justin Meiners Aug 30 '12 at 21:57
  • @BradLarson Just tested: works on device, not the simulator. Thank you so much (probably has to do with the divided RAM/VRAM on the Mac). The actual fix was to create an attributes dictionary that made it IOSurface-backed (from the sample code you gave me). – Justin Meiners Aug 30 '12 at 22:14
  • The Simulator is broken when it comes to texture caches. I use compiler defines to work around that in my code by falling back to glTexImage2D() and the like. In regards to the texture not updating, I recall that had something to do with an IOSurface setting somewhere, but I can't find the code I used to fix that. – Brad Larson Aug 30 '12 at 22:18
  • Any update on this question given the new graphics APIs in macOS? – Rol Feb 03 '21 at 13:00
  • @Rol I'm not involved in the Apple ecosystem these days. Are you referring to dropping OpenGL in favor of Metal? Or other CoreGraphics changes? – Justin Meiners Feb 04 '21 at 16:33

1 Answer


Thanks to Brad Larson and David H for helping out with this one (see our full discussion in the comments). It turns out that using OpenGL and CoreVideo with CVOpenGLESTextureCache ended up being the fastest way to push raw images to the screen (I knew CoreGraphics couldn't be the fastest!), giving me 60 fps with fullscreen 1024x768 images on an iPad 1. There is little documentation on this right now, so I will try to explain as much as possible to help people:

CVOpenGLESTextureCacheCreateTextureFromImage allows you to create an OpenGL texture whose memory is directly mapped to the CVImageBuffer you create it from. This lets you create, say, a CVPixelBuffer with your raw data and modify the memory at the pointer returned by CVPixelBufferGetBaseAddress. The results show up in OpenGL instantly, without any need to modify or re-upload the actual texture. Just be sure to lock with CVPixelBufferLockBaseAddress before modifying pixels and unlock when you're done. Note: at this time this does not work in the iOS Simulator, only on device, which I speculate is due to the VRAM/RAM division on the Mac, where the CPU has no direct access to VRAM. Brad recommended using a conditional compiler check to switch between raw glTexImage2D updates and using texture caches.
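
The setup looks roughly like this; a minimal sketch that assumes BGRA pixels and a GL_TEXTURE_2D target, with hypothetical names and no error handling (the IOSurface and clamp-to-edge details are the gotchas listed below):

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/EAGL.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Creates an IOSurface-backed CVPixelBuffer and a texture-cache texture
    // whose memory is mapped straight to that buffer.
    static CVOpenGLESTextureRef CreateMappedTexture(EAGLContext *context,
                                                    size_t width, size_t height,
                                                    CVPixelBufferRef *bufferOut,
                                                    CVOpenGLESTextureCacheRef *cacheOut)
    {
        // The empty IOSurface properties dictionary is what makes the buffer
        // IOSurface-backed; without it the mapped texture never updates.
        CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

        CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_32BGRA, attrs, bufferOut);

        // On older SDKs this context parameter was declared as void * and
        // needed a (__bridge void *) cast.
        CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, cacheOut);

        CVOpenGLESTextureRef texture = NULL;
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, *cacheOut,
            *bufferOut, NULL, GL_TEXTURE_2D, GL_RGBA, (GLsizei)width, (GLsizei)height,
            GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

        // NPOT texture data under OpenGL ES 2.0 requires clamp-to-edge wrapping.
        glBindTexture(CVOpenGLESTextureGetTarget(texture),
                      CVOpenGLESTextureGetName(texture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        CFRelease(attrs);
        CFRelease(empty);
        return texture;
    }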

Several things to watch out for (a combination of these caused it to not work for me):

  1. Test on device
  2. Make sure your CVPixelBuffer is backed with kCVPixelBufferIOSurfacePropertiesKey; see the link for an example (thanks again Brad).
  3. You must use GL_CLAMP_TO_EDGE for NPOT texture data with OpenGL ES 2.0
  4. Bind texture-cache textures with glBindTexture(CVOpenGLESTextureGetTarget(_cvTexture), CVOpenGLESTextureGetName(_cvTexture)); don't be stupid like me and use CVOpenGLESTextureGetTarget for both parameters.
  5. Don't recreate the texture every frame; simply copy image data into the pointer obtained from CVPixelBufferGetBaseAddress to update the texture (a rough sketch follows the list).
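
For points 4 and 5, the per-frame update reduces to locking, copying the new pixels in, unlocking, and binding. A minimal sketch with hypothetical names; it ignores row padding, for which you would use CVPixelBufferGetBytesPerRow:

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>
    #import <string.h>

    // Copies a new frame into the mapped pixel buffer and binds the existing
    // texture; no glTexSubImage2D and no texture re-creation.
    static void UpdateAndBindFrame(CVPixelBufferRef pixelBuffer,
                                   CVOpenGLESTextureRef cvTexture,
                                   const uint8_t *newPixels, size_t byteCount)
    {
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        memcpy(CVPixelBufferGetBaseAddress(pixelBuffer), newPixels, byteCount);
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

        glBindTexture(CVOpenGLESTextureGetTarget(cvTexture),
                      CVOpenGLESTextureGetName(cvTexture));
        // ...issue draw calls with this texture bound as usual.
    }
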
Justin Meiners
  • I think that's CVOpenGLESTextureCacheCreateTextureFromImage and not CGOpenGLESTextureCacheCreateTextureFromImage otherwise well done! –  Sep 07 '12 at 06:45
  • +1 thanks buddy, this worked a treat. Can also confirm we don't need to create textures each time having just now got our project going. We also don't call CVOpenGLESTextureCacheFlush which the Apple samples are doing. Totally agree re confusing documentation. Thanks again –  Sep 11 '12 at 04:30
  • @MickyDuncan any idea what the CVOpenGLESTextureCacheFlush actually does? Documentation is vague and I saw no memory impacts with or without it – Justin Meiners Sep 11 '12 at 21:03
  • Same here, Instruments did not seem to indicate anything. Maybe it impacts more if we are doing video capture? –  Sep 12 '12 at 00:00
  • @JustinMeiners Would it be possible to post actual code snippet? I'm having a hard time getting this to work... – anna Feb 03 '13 at 18:38
  • @anna I will actually be open sourcing something with it soon; I will give you a link when it is ready. – Justin Meiners Feb 19 '13 at 22:11
  • @JustinMeiners Thanks for replying. That would be awesome. I look forward. – anna Feb 23 '13 at 01:52
  • @JustinMeiners Did you open source something already? Where to download? :) – OMH Jun 14 '13 at 07:05
  • @OMH It is not complete, but I will try and clean up something tomorrow or Sunday. – Justin Meiners Jun 15 '13 at 06:03
  • @JustinMeiners Thanks! Do you have a GitHub or something? – OMH Jul 02 '13 at 12:37
  • @OMH Still working on it - I will let you know as soon as it happens (hopefully soon?) – Justin Meiners Jul 02 '13 at 17:02
  • @OMH Thank you for waiting patiently (and motivating me to finish). I have a project posted at https://github.com/narpas/image-sequence-streaming The source is not committed, but it should be up within the next day. – Justin Meiners Jul 02 '13 at 20:20
  • It's the 7th. *whip crack* =) Could you at least post a snippet for the above? – aoakenfo Jul 08 '13 at 01:42
  • @aoakenfo haha thanks for being persistent - the actual code is done and ready to be pushed - one of the datasets I use as an example was used for a project I did for another company. I cannot use this as it is owned by them - I am creating replacement data. – Justin Meiners Jul 08 '13 at 01:44
  • @aoakenfo it's uploaded now – Justin Meiners Jul 19 '13 at 22:59