
I have a window with a main view of type NSView and a subview which is a subclass of NSOpenGLView named CustomOpenGLView. The subclass is set up in Interface Builder by adding a Custom View and setting its class to CustomOpenGLView, following Apple's "Layer Backed OpenGL View" sample code.

The app draws something to the OpenGLContext every, let's say, 0.05 seconds. With the Core Animation layer disabled I can see the moving object in the view, the result of the continuous redrawing, and everything works flawlessly.

I now want to have a semitransparent view on top of CustomOpenGLView to house control buttons like play/stop/etc.

To do this I have added a subview to CustomOpenGLView and enabled the Core Animation layer on CustomOpenGLView. The control buttons are placed in this new subview.

This way the view with the control buttons correctly appears on top of CustomOpenGLView, but now the view doesn't redraw. It only draws if I resize the window containing all these views.

The result is that I do not see any "animation"; I only see a still image showing the first frame, drawn when the drawing loop starts. If I resize the window, the openGLContext gets redrawn until I stop resizing. After that I again see a still image showing the last drawing that occurred during the resize.

In addition, when the drawing loop starts, only the first "frame" appears on screen, and if I resize the window, let's say, 5 seconds later, I see in the view exactly what should have been drawn 5 seconds after the start of the drawing loop. It seems like I need to call [glView setNeedsDisplay:YES]. I did that, but nothing changed.

Where is the mistake? Why does adding a Core Animation layer break the redraw? Is there something I'm not getting?

Andrea3000
  • Is there a reason why you can't use a CAOpenGLLayer instead? Then you wouldn't need to worry about layer backing, since your content would be hosted directly in a layer. Like you, I've seen artifacts when trying to back an NSOpenGLView with a CALayer.. – Brad Larson Sep 30 '11 at 13:06
  • @BradLarson I had a lot of problems with CAOpenGLLayer. I'm a newbie developer; I have read some tutorials about the transition from NSOpenGLView to CAOpenGLLayer but I'm not able to make it work. The app uses a `CVDisplayLink` and the drawing is driven by the display link callback. I know that CAOpenGLLayer doesn't need CVDisplayLink and such, but I get some problems that I'm not able to solve. The app is actually a movie player, and through the display link callback I check if a new frame is available for the given time; if a new frame is available, it gets drawn. – Andrea3000 Sep 30 '11 at 16:49
  • @BradLarson (I continue here since I have reached the maximum number of characters in the previous comment) With NSOpenGLView this works as expected. With CAOpenGLLayer I have a problem checking whether a new image is available, since the instance variable is always `nil` – Andrea3000 Sep 30 '11 at 16:52
  • With CAOpenGLLayer I have a problem checking whether a new image is available, since the instance variable (which should call the method for checking the availability of a new frame) is always `nil`. I gave up on CAOpenGLLayer for this reason after many days of struggling.. – Andrea3000 Sep 30 '11 at 17:00
  • 1
    I describe how I used CAOpenGLLayer in one application here: http://stackoverflow.com/questions/6113922/why-is-my-caopengllayer-updating-slower-than-my-previous-nsopenglview/6115717#6115717 . CVDisplayLink and CAOpenGLLayer don't play well together, as I mentioned here: http://stackoverflow.com/questions/5316474/should-i-use-nsoperation-or-nsrunloop/5318372#5318372 , but I've since heard that there are ways of using it to update CAOpenGLLayer without the artifacts I saw. The internal CAOpenGLLayer asynchronous callbacks should work well for most cases, but 50 FPS video causes trouble. – Brad Larson Sep 30 '11 at 18:07
  • @BradLarson: So your suggestion is to use CAOpenGLLayer with `-setAsynchronous:YES` and thus without CVDisplayLink? As long as it runs on the main thread, do I risk having UI responsiveness issues? I don't need 50fps, but surely a constant and always smooth 30fps, as it is now with NSOpenGLView. – Andrea3000 Oct 01 '11 at 08:34
  • I don't think it will affect your UI responsiveness that much, but because it is running on the main thread, anything that blocks the main thread, like someone pulling down a menu or otherwise interacting with the interface, will temporarily pause the updating in your CAOpenGLLayer. It's for this reason that I went back to NSOpenGLView. However, if this isn't an issue for you, the CAOpenGLLayer asynchronous callbacks and implicit layering can make this all a lot easier. – Brad Larson Oct 01 '11 at 15:31
  • I would prefer to be able to achieve a layer-backed NSOpenGLView with the already working CVDisplayLink, but I don't know if it is possible. I don't know what can cause my drawing issue. Is your NSOpenGLView layer-backed or not? – Andrea3000 Oct 01 '11 at 18:06
  • @BradLarson: I have solved it using a child window. This seemed to me the easiest way to keep using NSOpenGLView and to have UI controls on top of the view. Thank you very much for your help! – Andrea3000 Oct 07 '11 at 09:19

1 Answer


When you have a normal NSOpenGLView, you can simply draw something via OpenGL and then call -flushBuffer on the NSOpenGLContext to make the rendering appear on screen. If your context is not double buffered, calling glFlush() is sufficient as well; double buffering is not necessary when rendering to a window, since all windows in Mac OS X are already double buffered by themselves (only real fullscreen OpenGL rendering needs double buffering to avoid artifacts). OpenGL then renders directly into the pixel storage of your view (which is in fact the backing storage of the window), or, in the double-buffered case, it renders to the back buffer and then swaps it with the front buffer. Either way, the new content is immediately visible on screen (strictly speaking, not before the next screen refresh, but such a refresh takes place at least 50-60 times a second).
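In an NSOpenGLView subclass, that flush step looks roughly like this (a minimal sketch; the method name and draw calls are illustrative, not from the question's code):

```objc
// Hypothetical render method in an NSOpenGLView subclass.
- (void)renderFrame
{
    [[self openGLContext] makeCurrentContext];

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... issue OpenGL draw calls here ...

    // Double-buffered context: swap back and front buffers.
    [[self openGLContext] flushBuffer];
    // A single-buffered context rendering into a window would
    // instead just call glFlush();
}
```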

Things are a bit different if the NSOpenGLView is layer-backed. When you call -flushBuffer or glFlush(), the rendering actually takes place just as it did before, and again the image is rendered directly into the pixel storage of the view. However, this pixel storage is no longer the backing storage of the window; it is the backing layer of the view. So your OpenGL image is updated, you just don't see it happening, since "drawing into a layer" and "displaying a layer on screen" are two completely different things! To make the new layer content visible, you have to call setNeedsDisplay:YES on your layer-backed NSOpenGLView.

Why didn't it work for you when you called setNeedsDisplay:YES? First of all, make sure you perform this call on the main thread. You can perform it on any thread you like and it will certainly mark the view as dirty, but only when performed on the main thread will it also schedule a redraw; otherwise the view is marked dirty but won't be redrawn until some parent or child view of it is redrawn. Another problem could be the drawRect: method. When you mark the view as dirty and it is redrawn, this method is called, and whatever it "draws" overwrites whatever content is currently in the layer. As long as your view wasn't layer-backed, it didn't matter where you rendered your OpenGL content, but for a layer-backed view, this is the method where you should perform all your drawing.

Try the following: create an NSTimer on your main thread that fires every 20 ms and calls a method that calls setNeedsDisplay:YES on your layer-backed NSOpenGLView, and move all your OpenGL render code into the drawRect: method of that view. That should work pretty well. If you need something more reliable than an NSTimer, try a CVDisplayLink (CV = CoreVideo). A CVDisplayLink is like a timer, except it fires every time the screen has just been redrawn.
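A minimal sketch of that timer-driven setup (the `animationTimer` property and method names are illustrative):

```objc
// Sketch in the layer-backed NSOpenGLView subclass.
- (void)startAnimation
{
    // 20 ms ≈ 50 fps; the timer must be scheduled on the main thread.
    self.animationTimer =
        [NSTimer scheduledTimerWithTimeInterval:0.02
                                         target:self
                                       selector:@selector(timerFired:)
                                       userInfo:nil
                                        repeats:YES];
}

- (void)timerFired:(NSTimer *)timer
{
    // On the main thread this also schedules a -drawRect: call.
    [self setNeedsDisplay:YES];
}

- (void)drawRect:(NSRect)dirtyRect
{
    [[self openGLContext] makeCurrentContext];
    // ... all OpenGL render code lives here now ...
    [[self openGLContext] flushBuffer];
}
```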

Update

Layer-backed NSOpenGLViews are somewhat outdated; starting with 10.6 they are not really needed any longer. Internally, an NSOpenGLView creates an NSOpenGLLayer when you make it layer-backed, so you can just as well use such a layer directly yourself and "build" your own NSOpenGLView:

  1. Create your own subclass of NSOpenGLLayer, let's call it MyOpenGLLayer
  2. Create your own subclass of NSView, let's call it MyGLView
  3. Override - (CALayer *)makeBackingLayer to return an autoreleased instance of MyOpenGLLayer
  4. Set wantsLayer:YES for MyGLView
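The four steps above can be sketched as follows (class names follow the answer's examples; written in manual-reference-counting style, as was common at the time):

```objc
// Step 1: the NSOpenGLLayer subclass (rendering overrides omitted here).
@interface MyOpenGLLayer : NSOpenGLLayer
@end

// Step 2: the plain NSView subclass.
@interface MyGLView : NSView
@end

@implementation MyGLView

// Step 3: hand AppKit our own backing layer.
- (CALayer *)makeBackingLayer
{
    return [[[MyOpenGLLayer alloc] init] autorelease];
}

@end

// Step 4, wherever the view is created; setting wantsLayer
// triggers the -makeBackingLayer call above.
MyGLView *glView = [[MyGLView alloc] initWithFrame:frame];
[glView setWantsLayer:YES];
```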

You now have your own layer-backed view, backed by your NSOpenGLLayer subclass. Since it is layer-backed, it is absolutely okay to add subviews to it (e.g. buttons, text fields, etc.).

For your backing layer, you have basically two options.

Option 1
The correct and officially supported way is to keep your rendering on the main thread. To do so, you must do the following:

  • Override canDrawInContext:... to return YES/NO, depending on whether you can/want to draw the next frame or not.
  • Override drawInContext:... to perform your actual OpenGL rendering.
  • Make the layer asynchronous (setAsynchronous:YES)
  • Be sure the layer is "updated" whenever it's resized (setNeedsDisplayOnBoundsChange:YES), otherwise the OpenGL backing surface is not resized when the layer is resized (and the rendered OpenGL content would be stretched/shrunk each time the layer redraws)

Apple will create a CVDisplayLink for you that calls canDrawInContext:... on the main thread each time it fires, and if this method returns YES, it calls drawInContext:.... This is how you should do it.
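A skeleton of such an NSOpenGLLayer subclass might look like this (the `frameAvailable` flag is an illustrative stand-in for whatever "new frame ready" check your app uses):

```objc
@implementation MyOpenGLLayer

- (BOOL)canDrawInOpenGLContext:(NSOpenGLContext *)context
                   pixelFormat:(NSOpenGLPixelFormat *)pixelFormat
                  forLayerTime:(CFTimeInterval)t
                   displayTime:(const CVTimeStamp *)ts
{
    // Return YES only when there is a new frame to show.
    return self.frameAvailable;
}

- (void)drawInOpenGLContext:(NSOpenGLContext *)context
                pixelFormat:(NSOpenGLPixelFormat *)pixelFormat
               forLayerTime:(CFTimeInterval)t
                displayTime:(const CVTimeStamp *)ts
{
    // The context is already current; perform the rendering here.
    glClear(GL_COLOR_BUFFER_BIT);
    // ... OpenGL draw calls ...
    // No explicit flush needed; the framework flushes after this returns.
}

@end

// Configuration at setup time:
[layer setAsynchronous:YES];
[layer setNeedsDisplayOnBoundsChange:YES];
```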

If your rendering is too expensive to happen on the main thread, you can use the following trick: override openGLContextForPixelFormat:... to create a context (Context B) that is shared with another context you created earlier (Context A). Create a framebuffer in Context A (you can do that before or after creating Context B, it doesn't really matter); attach depth and/or stencil renderbuffers if required (with a bit depth of your choice), but instead of a color renderbuffer, attach a texture (Texture X) as the color attachment (glFramebufferTexture()). Now all color render output is written to that texture when rendering to that framebuffer.

Perform all rendering to this framebuffer using Context A on any thread of your choice! Once the rendering is done, make canDrawInContext:... return YES, and in drawInContext:... just draw a single quad that fills the whole active framebuffer (Apple has already set the framebuffer for you, and the viewport to fill it completely) and that is textured with Texture X. This is possible since shared contexts share all objects (e.g. textures, framebuffers, etc.). So your drawInContext:... method never does more than draw a single, simple textured quad. All other (possibly expensive) rendering happens into this texture on a background thread, without ever blocking your main thread.
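Assuming a legacy (2.1) context with framebuffer-object support, the trick can be sketched like this (`contextA`, `fbo`, `textureX`, `width`, and `height` are illustrative names; error checking is omitted):

```objc
// In the NSOpenGLLayer subclass: Context B, shared with Context A.
- (NSOpenGLContext *)openGLContextForPixelFormat:(NSOpenGLPixelFormat *)pf
{
    return [[[NSOpenGLContext alloc] initWithFormat:pf
                                       shareContext:contextA] autorelease];
}

// One-time setup on the background thread, with Context A current:
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1, &textureX);
glBindTexture(GL_TEXTURE_2D, textureX);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Texture X becomes the color attachment instead of a renderbuffer.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, textureX, 0);

// Per frame on the background thread: render into the FBO,
// so all color output lands in textureX.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... expensive OpenGL rendering ...
glFlush(); // make the results visible to the shared context

// In -drawInOpenGLContext:... (Context B, main thread): draw one
// textured quad filling the framebuffer Apple has bound for us.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureX);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
```

On GPUs that only expose the older extension, the `...EXT` variants of the framebuffer calls would be used instead.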

Option 2
The other option is not officially supported by Apple and may or may not work for you:

  • Don't override canDrawInContext:...; the default implementation always returns YES, which is what you want.
  • Override drawInContext:... to perform your actual OpenGL rendering, all of it.
  • Don't make the layer asynchronous.
  • Don't set needsDisplayOnBoundsChange.

Whenever you want to redraw this layer, call display directly (NOT setNeedsDisplay! It's true that Apple says you shouldn't call it, but "shouldn't" is not "mustn't"), and after calling display, call [CATransaction flush]. This works even when called from a background thread! Your drawInContext:... method is called from the same thread that calls display, which can be any thread. Calling display directly makes sure your OpenGL render code executes, yet the newly rendered content is still only visible in the backing storage of the layer; to bring it to screen you must force the system to perform layer compositing, and [CATransaction flush] does exactly that. The CATransaction class, which has only class methods (you will never create an instance of it), is implicitly thread-safe and may always be used from any thread at any time (it performs its own locking whenever and wherever required).

While this method is not recommended, since it may cause redraw issues for other views (those may also be redrawn on threads other than the main thread, and not all views support that), it is not forbidden either: it uses no private API, and it has been suggested on the Apple mailing list without anyone at Apple opposing it.
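A sketch of that redraw trigger, running on a background thread (`glLayer`, the `running` flag, and the loop pacing are illustrative):

```objc
// Runs on a background thread; glLayer is the MyOpenGLLayer instance.
- (void)renderLoop
{
    while (self.running) {
        // Calls -drawInOpenGLContext:... synchronously on THIS thread.
        [glLayer display];

        // Force Core Animation to composite the new layer contents to
        // screen; CATransaction is safe to use from any thread.
        [CATransaction flush];

        usleep(20000); // ~50 fps pacing; a CVDisplayLink would pace better
    }
}
```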

Mecki
  • Thank you very much for your detailed answer, you explained a lot of subtle aspects to me! Prior to implementing what you suggested, I have just one more question. I actually already use a CVDisplayLink with my NON-layer-backed `NSOpenGLView` because I want to perform all the drawing on a thread which is not the main thread (I need to display content at 60fps with high UI responsiveness). If I switch to a layer-backed `NSOpenGLView`, is it still possible to draw from a separate thread? – Andrea3000 Jul 10 '12 at 14:48
  • Unfortunately the answer is no. You can draw to a CALayer (whether it is a layer of its own or the backing layer of any view, including NSOpenGLView) from any thread you like, yet there is no reliable way to make this layer visible on screen other than using the main thread. CALayer really sucks in that aspect. All the ways I tried either lead to artifacts or may even crash the app at random intervals. What you can do is not use an NSOpenGLView at all, but an NSOpenGLLayer, which you can set as the backing layer of any view you like (just first use setLayer: and **then** setWantsLayer: ... – Mecki Jul 10 '12 at 20:54
  • ... the order of these two calls is critical!). Using an NSOpenGLLayer you have two ways of rendering: making the layer asynchronous and controlling the rendering via the canDrawInContext... and drawInContext... methods, which you must override, or not setting it asynchronous, making canDraw... always return YES, and controlling the rendering from another thread by calling `display` directly on the layer to update its content and then `[CATransaction flush]` to bring the layer to screen (both from this other thread). This seems to work, even though it is not an officially documented way of doing things. – Mecki Jul 10 '12 at 20:57
  • Thank you for your answer. This somehow brings me back to a question I asked here in February ([http://stackoverflow.com/questions/9442657/...](http://stackoverflow.com/questions/9442657/draw-from-a-separate-thread-with-nsopengllayer)). If I make the `NSOpenGLLayer` asynchronous, the drawing will occur on the main thread, right? I don't understand if the second way you explained allows drawing from a background thread or not. – Andrea3000 Jul 10 '12 at 21:10
  • @Andrea3000: I added an update to my answer, maybe this is somehow useful for you and helps you to understand what I said in the comments above. – Mecki Jul 11 '12 at 16:55
  • You have been very very helpful!! I think I will try option 2 because I think that drawing from a background thread is a must for my app. Thank you very much for sharing your knowledge with me, really appreciated!! – Andrea3000 Jul 12 '12 at 15:44
  • I have a last one question related to this matter. Apple states that `NSOpenGLLayer` is a subclass of `CAOpenGLLayer` which uses distinctly Application Kit types. Since I'm going to rewrite all the view hierarchy, should I switch to `CAOpenGLLayer` instead of `NSOpenGLLayer`? Does it give me any advantage? – Andrea3000 Jul 12 '12 at 17:42
  • 1
    @Andrea3000 As you noted already, the only difference between NSOpenGLLayer and CAOpenGLLayer is that the first one uses AppKit data types. The implementation of NSOpenGLLayer basically "translates" the AppKit types to the more native types of CAOpenGLLayer and then calls `[self ...]` with the translated types. Of course subclassing CAOpenGLLayer can save you a little bit of processing time, yet the translation of the types and the extra call to `self` are not that expensive, and the difference will hardly be detectable in most cases. If you prefer working with AppKit types, subclass NSOpenGLLayer – Mecki Mar 28 '13 at 01:02
  • So if everything is rendered on the main thread anyway, why use CVDisplayLink at all? Why not use NSTimer? I used your display and flush hack and it works. But the reason for using CVDisplayLink in the first place was so that the interaction + animation would be undisturbed by external data events like compressing etc. – Sentry.co Feb 23 '16 at 14:51
  • 1
    @GitSyncApp Because CVDisplayLink will make sure that rendering is in sync with your screen refresh rate and the graphical sub-system. It makes no sense to render more images than your monitor can physically display or your sub-system can actually process. Also, a timer doesn't fire at precise intervals: if you miss a screen refresh by only 0.01 ms, the whole image will be delayed by a full screen refresh (causing animation lag and reducing the number of rendered images displayed per second). The CVDisplayLink is like a high-priority optimized timer. – Mecki Feb 23 '16 at 18:49
  • Thanks for answering. So the upside is that you get a higher-precision timer. As you describe, it's the only way to get an absolutely correct refresh rate. The downside is that it may still lag if the CPU gets loaded with other tasks etc., because you call from another thread to the main thread. I can't use drawRect because of the "sibling view overlapping issue", so I use CALayer and CGContext to draw graphics. And the only way I could get it working was with your display and flush technique. Here are my notes on what I'm trying to accomplish: http://stylekit.org/blog/2016/02/20/Core-animation/ – Sentry.co Feb 23 '16 at 20:58
  • @GitSyncApp See http://chat.stackoverflow.com/rooms/104337/opengl-on-macos-x-and-threading – Mecki Feb 23 '16 at 22:03