
Sometimes the term Graphics Context is a little bit abstract. Is a Graphics Context actually a system resource, but a resource provided by the graphics card, just as a file handle is a system resource provided by the hard drive or any other permanent storage device?

Just as a file handle carries state, such as whether it is open read-only or read/write and the current position for the next read operation, a Graphics Context carries state such as the current stroke color, stroke width, and other relevant data. (Update: and in write mode we can seek to any point in a 200 MB file and change data, just as we have the canvas of the Graphics Context and can draw things on top of it.)
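
For concreteness, here is a rough sketch of what I mean by that state, in Core Graphics (assuming ctx is a valid CGContextRef, for example the one UIKit makes current inside drawRect:; the colors and coordinates are just placeholders):

    // Assuming we are inside drawRect:, where UIKit has made a context current.
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // These calls only change the context's state, much like switching a file
    // handle's mode or seek position; nothing is drawn yet.
    CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);  // current stroke color
    CGContextSetLineWidth(ctx, 2.0);                       // current stroke width

    // Later drawing operations pick up whatever state is current.
    CGContextMoveToPoint(ctx, 10.0, 10.0);
    CGContextAddLineToPoint(ctx, 100.0, 100.0);
    CGContextStrokePath(ctx);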

So Graphics Contexts are actually global, system-wide resources. They are not part of the application singleton or anything like that, just as a file or file handle is not (necessarily) part of the application singleton.

And if there is no powerful graphics card (or if the graphics card has already run out of resources), then the operating system can simulate a Graphics Context using low-level graphics routines on bitmaps, instead of letting the graphics card handle it.

Is this how a Graphics Context actually works, on iOS and on most other common operating systems in general?

Jeremy L

2 Answers


I think it's best not to think of a Graphics Context in terms of a specific system resource. As far as I know, the graphics context doesn't correspond to any specific resource any more than any class 'object' does, besides memory of course. Really, the graphics context is designed to provide a 'canvas' for the Core Graphics functions to operate on. The truth is, Apple doesn't give us the specific details of how a graphics context works internally. But there are several things we do know about it:

  1. The graphics context is basically a 'state' more than anything else. It holds information such as stroke/fill color, line width, etc. for a particular set of drawing routines.

  2. It doesn't process on the GPU. Instead it does all its drawing on the CPU and 'passes' the resulting image (some form of bitmap) to the GPU for display/animation (actually it renders the image directly to the GPU's buffers). This is why the 'renderInContext:' method isn't working so well on the new iPad 3. renderInContext: gives you the image first, which involves rendering and copying the image. If you then wish to display it, it must be passed back to Core Graphics, which then writes the image back out. On the iPad 3, this involves a lot of memory (depending on the size of the view) and can easily overflow buffers.

  3. The graphics context given to the 'drawRect:' method of UIView is designed to provide a context that is as efficient as possible. This is why you can't draw anything in a view outside a context, nor can you create your own context for a view to draw in. The actual drawing is handled in the run loop, which is why we flag a UIView as needing to be drawn with [view setNeedsDisplay] (see the first sketch after this list).

  4. The graphics contexts for UIViews are drawn on the main thread and, yes, again processed on the CPU. This does mean overly complex drawings can tie up your main application, but nowadays with multi-core processors that's not so much of a problem.

  5. You can create a graphics context yourself, but only to draw to an image. This is exactly the same thing as what a UIView context does, except that it's meant to be used by you rather than drawn to the screen or animated. Since iOS 4, you can process these image contexts on other threads (besides the main thread); see the second sketch after this list.

    If you're looking to do GPU drawing, I believe the only way to do that on iOS is to use OpenGL. If you're using Mac OS, I think you can actually enable Quartz (Core Graphics... same thing) drawing on the GPU using QuartzGL. But it may not be worth the effort; see this article: Mac QuartzGL (2D drawing on the graphics card) performance
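
As a concrete sketch of point 3, this is roughly what the drawRect:/setNeedsDisplay flow looks like (CircleView and the drawing it does are just placeholders, not anything from the question):

    // Hypothetical UIView subclass; UIKit creates the context and calls drawRect: itself.
    @interface CircleView : UIView
    @end

    @implementation CircleView

    - (void)drawRect:(CGRect)rect {
        // Only valid inside drawRect: (or other UIKit drawing callbacks); never cache it.
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(ctx, 0.0, 0.5, 1.0, 1.0);
        CGContextFillEllipseInRect(ctx, CGRectInset(self.bounds, 4.0, 4.0));
    }

    @end

    // Elsewhere: never call drawRect: directly; mark the view dirty and let the
    // run loop redraw it when appropriate.
    // [circleView setNeedsDisplay];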
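
And here is a sketch of the kind of image context described in point 5, run off the main thread (allowed since iOS 4); the size, colors, and the hypothetical imageView are placeholders:

    // Render into an off-screen image context on a background queue (iOS 4+).
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(200.0, 200.0), YES, 0.0);

        // The same drawing calls as in drawRect:, but the result ends up in an
        // image rather than on the screen.
        [[UIColor orangeColor] setFill];
        UIRectFill(CGRectMake(0.0, 0.0, 200.0, 200.0));

        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            imageView.image = image;  // hand the result back to the main thread
        });
    });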

Update

As you can see in the comments below, the current arrangement Apple has for Quartz drawing is probably the best, especially since views are drawn directly to the GPU's buffers. There is a temptation to think that processing anything visual should be done on the GPU, but the truth is, GPUs aren't designed for vector drawing. They're designed to handle massive transforms, lighting, texture mapping, etc. By using the CPU to process vector drawing and leaving everything else to the GPU, Apple has split the graphics processing appropriately. Moreover, you're not losing any efficiency in the data transfer between the CPU and GPU, since Quartz draws directly to the GPU's buffer (which avoids that onerous memcpy).

Aaron Hayman
  • That article is definitely worth reading, especially the conclusion: Apple tried to speed up Quartz by using the GPU for more general-purpose drawing, but in practice it wasn't consistently faster. In the future that might change; nobody knows. – Kurt Revis May 27 '12 at 18:36
  • I thought before that Quartz, Core Graphics, and Core Animation were all built on top of OpenGL? So I guess that's not the case. However, if OpenGL can use the powerful features of the graphics card, then iOS, being able to use any hardware feature, should be able to use those powerful features of the graphics card directly even if it doesn't touch OpenGL at all? – Jeremy L May 27 '12 at 19:07
  • Well, even though the drawing itself is done on the CPU, iOS still leans heavily on the GPU for animations and some of the drawing. This is especially noticeable for 3D animations and transparency. If the UIView/CGContext you draw on is transparent, it will render any transparency in the drawing to the image it passes to the GPU. The GPU is then responsible for displaying the final 'image', including all filtered transparency. This is a big deal as doing this on the CPU is hardly feasible. Even with the GPU, you get noticeably better performance by making as many views opaque as possible. – Aaron Hayman May 27 '12 at 19:35
  • @KurtRevis It's actually not surprising that the GPU isn't better at 'general purpose drawing'. GPUs are primarily designed to be massive matrix calculators. They're particularly geared towards handling textures, shapes, light sources, etc... not drawing routines. This is one reason why all gaming systems still have a CPU in conjunction with a GPU. Even though we think 'graphics' with GPU, in reality GPUs are designed for a very specific type of graphics. Really, you don't want the GPU handling general drawing routines. It's not designed for that, but the CPU handles it just fine. – Aaron Hayman May 27 '12 at 19:43
  • @AaronHayman -- I'm well aware of all that; that's a good chunk of the reason why the QuartzGL attempt (as described in that article) was so difficult and didn't end up working well. – Kurt Revis May 27 '12 at 20:30
  • 1
    @JeremyL - As I describe in this answer: http://stackoverflow.com/a/7559897/19679 , Core Animation is built on top of OpenGL (ES), and Core Graphics (Quartz) draws to it when rendering to the screen in iOS. This is why it is much slower to draw your vector content into a CALayer or CALayer-backed UIView than it is to translate, scale, or rotate that view or layer afterward. – Brad Larson May 27 '12 at 21:18
  • It should be noted that the article linked to in the answer concerns the initial rasterization phase, where 2-D vector graphics are drawn by Quartz and converted into bitmaps. iOS and the Mac (for layer-backed NSViews there) are using GPU acceleration for what happens after that, when the rasterized bitmaps for each CALayer are uploaded to the GPU as textures and then handled by the GPU from then on. On iOS, the GPU is involved with every UI element on display, because everything is layer-backed, just not at the rasterization phase. This acceleration comes in when arranging layers onscreen. – Brad Larson May 27 '12 at 21:24
  • @BradLarson Even though I say that Quartz draws to an image/texture and passes that to the GPU, I believe that Quartz (at least on iOS) will actually draw directly to the GPU buffers, avoiding a memcpy. Do you know if this is the case? I'm pretty sure they mentioned this in a WWDC video but I have no idea which one anymore. – Aaron Hayman May 28 '12 at 14:54
  • @AaronHayman - Yes, I believe you're correct. It would make sense to cut out the middle phase of the extra copy there and use direct memory access (like we see with the texture caches in iOS 5.0). The rasterization is still CPU-bound, though. QuartzGL, as I've heard, was an attempt to do the rasterization itself on-GPU using geometry and shaders, and Matt's benchmarks make sense in that regard. – Brad Larson May 28 '12 at 16:55

Sometimes the term Graphics Context is a little bit abstract.

Yes, intentionally so. Quartz is meant to be an abstraction, a general-purpose drawing system. It may or may not perform some optimizations with the graphics hardware, internally, but you don't get to have much visibility into that. And the kinds of optimizations it makes may change over time and with different kinds of graphics hardware.

Is a Graphics Context actually a system resource, but a resource provided by the graphics card?

No, absolutely not. Quartz is a software renderer -- it works even when there is no graphics hardware present, and can draw to things like PDFs where the graphics hardware wouldn't be of any use.
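
For example, the very same drawing calls can target a PDF context, with no graphics hardware involved anywhere; this is only a sketch, and the file name and page size below are just placeholders:

    // Draw one page into a PDF file; no GPU is involved anywhere in this.
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"example.pdf"];
    UIGraphicsBeginPDFContextToFile(path, CGRectMake(0.0, 0.0, 612.0, 792.0), nil);
    UIGraphicsBeginPDFPage();

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(ctx, 0.0, 0.0, 1.0, 1.0);
    CGContextStrokeRect(ctx, CGRectMake(72.0, 72.0, 200.0, 100.0));

    UIGraphicsEndPDFContext();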

Internally, Quartz (and its interfaces with the rest of the OS) may have a few "fast paths" that take advantage of the GPU in some situations. But that's by no means the common case.

Just as a file handle carries state, such as whether it is open read-only or read/write and the current position for the next read operation, a Graphics Context carries state such as the current stroke color, stroke width, and other relevant data.

This is correct.

So Graphics Contexts are actually global, system-wide resources.

No. Quartz is just a library that runs code within your app. If you make a new CGContext, only your app is affected -- exactly the same way as if your code created a new instance of one of your own classes.
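
For instance, creating your own bitmap context just allocates ordinary memory inside your process; nothing system-wide is consumed. A minimal sketch (the size and pixel format below are arbitrary):

    // A 256x256 RGBA bitmap context; Quartz allocates the backing buffer in this
    // app's address space (NULL data and 0 bytes-per-row let Quartz manage it).
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, 256, 256, 8, 0, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);

    CGContextSetRGBFillColor(ctx, 0.0, 1.0, 0.0, 1.0);
    CGContextFillRect(ctx, CGRectMake(0.0, 0.0, 256.0, 256.0));

    // Released like any other CF object when you're done; no other app ever saw it.
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);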

And if there is no powerful graphics card (or if the graphics card has already run out of resources), then the operating system can simulate a Graphics Context using low-level graphics routines on bitmaps, instead of letting the graphics card handle it.

You have the two cases flipped. In general, Quartz works in software, with bitmaps. In a few cases, it may use the GPU to get those bitmaps onto the screen faster, if everything is lined up exactly right.

Kurt Revis