
I want to capture a screenshot containing both my OpenGL ES and my UIKit content at the same time, and after a lot of research I found a way to do it, which looks like this:

- (UIImage *)makeScreenshot {

    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    //    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    //    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger x = _visibleFrame.origin.x, y = _visibleFrame.origin.y, width = _visibleFrame.size.width, height = _visibleFrame.size.height;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    //    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault);
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast, ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = _baseView.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

//    return image;

    UIImageView *GLImage =  [[UIImageView alloc] initWithImage:image];

    UIGraphicsBeginImageContext(_visibleFrame.size);

    // The order of rendering determines the stacking:
    // this draws the UIKit content on top of the GL image.

    [GLImage.layer renderInContext:UIGraphicsGetCurrentContext()];

    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -_visibleFrame.origin.x, -_visibleFrame.origin.y);

    [_baseView.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Do something with resulting image
    return finalImage;
}

But the interesting part is the merging section, where I have two

UIGraphicsBeginImageContext();
.......
.......
UIGraphicsEndImageContext();

blocks: first generating the OpenGL ES image, then merging it with the UIKit image. Is there a better way to do this within a single UIGraphicsBeginImageContext(); ... UIGraphicsEndImageContext(); block, rather than creating a UIImageView and then performing the render?

Something like this:

CGContextRef cgcontext = UIGraphicsGetCurrentContext();

// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

// the merging part starts

CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -_visibleFrame.origin.x, -_visibleFrame.origin.y);

[_baseView.layer renderInContext:UIGraphicsGetCurrentContext()];

// the merging part ends

// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();

Unfortunately, this does not merge the two images. Can anyone point out the mistake here and/or suggest the best way to do this?

genpfault
Goppinath

1 Answer

UISnapshotting

With iOS 7, Apple introduced view snapshotting APIs and claims they are really fast, much faster than renderInContext:.

UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];

This method captures the current visual contents of the screen from the render server and uses them to build a new snapshot view. You can use the returned snapshot view as a visual stand-in for the screen’s contents in your app. (...) this method is faster than trying to render the contents of the screen into a bitmap image yourself.
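If you need an actual UIImage rather than a snapshot view, iOS 7 also added drawViewHierarchyInRect:afterScreenUpdates:, which draws what is currently on screen into the current bitmap context. A minimal, untested sketch, where view is assumed to be the container holding both the GL view and the UIKit overlay:

```objc
// Sketch: capture a view hierarchy into a UIImage using the iOS 7 API.
// "view" is an assumption: the common superview of the GL and UIKit content.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, [UIScreen mainScreen].scale);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Whether the OpenGL ES content actually appears in the capture depends on how the GL layer is composited, so this needs to be verified on a device.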

Moreover, have a look at the links below. They should give you some insights and point you in the right direction.
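As a side note on the single-context attempt in the question: kCGBlendModeCopy is still in effect when renderInContext: runs, so the partly transparent UIKit layer replaces the GL pixels instead of compositing over them. Resetting the blend mode first may be all that is missing; an untested sketch reusing the question's cgcontext, iref, widthInPoints/heightInPoints, _visibleFrame, and _baseView:

```objc
// Draw the flipped GL image with copy mode, as before...
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

// ...then switch back to normal alpha blending so the UIKit layer
// composites on top of the GL pixels instead of overwriting them.
CGContextSetBlendMode(cgcontext, kCGBlendModeNormal);

CGContextTranslateCTM(cgcontext, -_visibleFrame.origin.x, -_visibleFrame.origin.y);
[_baseView.layer renderInContext:cgcontext];

UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```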

Rafał Sroka