
I have a problem with OpenGL views. I have two OpenGL views; the second view is added as a subview of the main view, and the two views are drawn in two different OpenGL contexts. I need to capture the screen with both OpenGL views in it.

The issue is that if I try to render one CAEAGLLayer in a context as below:

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 1*(self.frame.size.width*0.5), 1*(self.frame.size.height*0.5));
CGContextScaleCTM(context, 3, 3);
CGContextTranslateCTM(context, abcd, abcd);

CAEAGLLayer *eaglLayer = (CAEAGLLayer*) self.myOwnView.layer;
[eaglLayer renderInContext:context];

it does not work. If I inspect the context (by writing it out as an image), the contents of the OpenGL layer are missing, but the toolbar and the 2D images attached to the view do appear in the output image. I am not sure what the problem is. Please help.

user862972

3 Answers


I had a similar problem and found a much more elegant solution. Basically, you subclass CAEAGLLayer and add your own implementation of renderInContext: that simply asks the OpenGL view to render its contents using glReadPixels. The beauty is that you can now call renderInContext: on any layer in the hierarchy, and the result is a fully composed, perfect-looking screenshot that includes your OpenGL views!

Our renderInContext: in the subclassed CAEAGLLayer is:

- (void)renderInContext:(CGContextRef)ctx
{
    // Draw any background and sublayers first
    [super renderInContext:ctx];
    // UIKit automatically makes the backing UIView the layer's delegate,
    // so this calls the view's -renderInContext: implemented below
    [self.delegate renderInContext:ctx];
}

Then, in the OpenGL view we override layerClass so that it returns our subclass instead of the plain vanilla CAEAGLLayer:

+ (Class)layerClass
{
    return [MyCAEAGLLayer class];
}

We add a method in the view to actually render the contents of the view into the context. Note that this code MUST run after your GL view has been rendered, but before you call presentRenderbuffer:, so that the renderbuffer still contains your frame. Otherwise the resulting image will most likely be empty (you may see different behavior between the device and the simulator on this particular point).

- (void) renderInContext: (CGContextRef) context
{
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderBuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    CGFloat scale = self.contentScaleFactor;
    NSInteger widthInPoints, heightInPoints;
    widthInPoints = width / scale;
    heightInPoints = height / scale;

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
}
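To make the timing requirement concrete, here is a sketch of a typical draw loop under these assumptions; the framebuffer name, renderbuffer name, and capture flag are hypothetical, not part of the answer above. The point is only that the glReadPixels-based capture must happen between issuing the GL drawing commands and calling presentRenderbuffer::

```objc
// Sketch only; _defaultFramebuffer, _colorRenderBuffer, and
// shouldCaptureScreenshot are assumed names for illustration.
- (void)drawFrame
{
    [EAGLContext setCurrentContext:self.context];
    glBindFramebuffer(GL_FRAMEBUFFER, _defaultFramebuffer);

    // ... issue your GL drawing commands here ...

    if (self.shouldCaptureScreenshot) {
        // The renderbuffer still holds this frame, so reading pixels is safe.
        [self triggerScreenshot]; // hypothetical; kicks off the renderInContext: path
        self.shouldCaptureScreenshot = NO;
    }

    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
    [self.context presentRenderbuffer:GL_RENDERBUFFER];
}
```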

Finally, in order to grab a screenshot you use renderInContext: in the usual fashion. Of course the beauty is that you don't need to grab the OpenGL view directly. You can grab one of the superviews of the OpenGL view and get a composed screenshot that includes the OpenGL view along with anything else next to it or on top of it:

UIGraphicsBeginImageContextWithOptions(superviewToGrab.bounds.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
[superviewToGrab.layer renderInContext: context]; // This recursively calls renderInContext on all the sublayers, including your OpenGL layer(s)
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext(); // keep the UIImage itself; its underlying CGImage is only valid while the UIImage is alive
UIGraphicsEndImageContext();
ldoogy
  • I'm trying your approach but glReadPixels doesn't seem to be copying any data into my buffer (the "data" variable in your example). Any thoughts as to why this might be the case? My setup is a regular UIView whose backing layer is a CAEAGLLayer. I then add a UIImageView with an image as a child of the GL-backed view. Then I try to grab the renderBuffer contents and my buffer is still all 0s as when I initialized it with calloc. I also tried setting the layer as the renderbuffer storage but nothing. If I do the regular renderInContext I do get the contents back as an image. – SaldaVonSchwartz Sep 27 '13 at 18:01

This question has already been settled, but I wanted to note that ldoogy's answer is actually dangerous and a poor choice for most use cases.

Rather than subclass CAEAGLLayer and create a new delegate object, you can use the existing delegate methods which accomplish exactly the same thing. For example:

- (void) drawLayer:(CALayer *) layer inContext:(CGContextRef)ctx;

is a great method to implement in your GL-based views. You can implement it in much the same way he suggests, using glReadPixels: just make sure to set the retained-backing property on your view's layer to YES, so that you can call the above method at any time without having to worry about the buffer having been invalidated by presentation for display.
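A minimal sketch of that setup, assuming the same glReadPixels-based renderInContext: method shown in ldoogy's answer is available on the view (the configureRetainedBacking method name is hypothetical):

```objc
// In the GL view: ask the layer to retain its contents after
// -presentRenderbuffer:, so a later snapshot still sees the frame.
- (void)configureRetainedBacking
{
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
    eaglLayer.drawableProperties = @{
        kEAGLDrawablePropertyRetainedBacking : @YES,
        kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
    };
}

// Implement the standard CALayer delegate method on the view itself;
// UIKit already makes a view the delegate of its backing layer, so no
// subclassed layer or manual delegate assignment is needed.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    [self renderInContext:ctx]; // the glReadPixels-based method
}
```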

Subclassing CAEAGLLayer messes with the existing UIView / CALayer delegate relationship: in most cases, setting the delegate object on your custom layer will result in your UIView being excluded from the view hierarchy. Thus, code like:

customLayerView = [[CustomLayerView alloc] initWithFrame:someFrame];
[someSuperview addSubview:customLayerView];

will result in a weird, one-way superview-subview relationship, since the delegate methods that UIView relies on won't be implemented. (Your superview will still have the sublayer from your custom view, though).

So, instead of subclassing CAEAGLLayer, just implement some of the delegate methods. Apple lays it out for you here: https://developer.apple.com/library/ios/documentation/QuartzCore/Reference/CALayerDelegate_protocol/Reference/Reference.html#//apple_ref/doc/uid/TP40012871

All the best,

Sam

Sam Ballantyne
  • I'm going to post a belated response to this: Subclassing the CAEAGLLayer is perfectly safe and will result in 100% identical behavior. That's because the way in which we subclass CAEAGLLayer is not by manually initializing it with our own class, but rather by providing the class as the layerClass for the UIView (which, if you read my answer, you will see was the method I recommended). This ensures that the initialization is done by the system, and so everything is EXACTLY THE SAME as it would have been without the subclass, delegates and all. – ldoogy Mar 04 '16 at 02:07

I think http://developer.apple.com/library/ios/#qa/qa1704/_index.html provides what you want.

MrMage
  • Thank you for the response. I have two opengl views (textures) one over the other. I need to capture the resultant of two views. I am not very sure, if the pointer helps me get what I want. – user862972 Sep 24 '12 at 12:55
  • The two textures are drawn in two different opengl contexts. – user862972 Sep 24 '12 at 13:21
  • Just execute the code once for each context, then composite the two images into one as you need. – MrMage Sep 24 '12 at 16:18
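The compositing step MrMage describes could be sketched like this, assuming snapshotOfGLView: is a hypothetical helper that wraps the QA1704 capture code and is run once per GL context, and glView2 is the subview laid over glView1:

```objc
// Assumed names: snapshotOfGLView:, glView1 (main view), glView2 (subview).
UIImage *base    = [self snapshotOfGLView:glView1];
UIImage *overlay = [self snapshotOfGLView:glView2];

UIGraphicsBeginImageContextWithOptions(glView1.bounds.size, YES, 0);
[base drawInRect:glView1.bounds];
[overlay drawInRect:glView2.frame]; // position the subview's image over the main view
UIImage *composed = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```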