
I'm using AVCaptureSession for a live camera feed, and I render some images on a camera overlay view. I didn't use any EAGLView; I just overlay some images on top of the AVCaptureSession preview layer. I want to take a screenshot of the live camera feed together with the overlay image. I searched around and eventually found glReadPixels(), but when I implement this code it returns a black image. I only added OpenGLES.framework to the project and imported it.

    - (void)viewDidLoad
    {
        [super viewDidLoad];

        [self setCaptureSession:[[AVCaptureSession alloc] init]];

        [self addVideoInputFrontCamera:NO]; // set to YES for front camera, NO for back camera
        [self addStillImageOutput];

        [self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]]];
        [[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];

        CGRect layerRect = [[[self view] layer] bounds];
        [[self previewLayer] setBounds:layerRect];
        [[self previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
        [[[self view] layer] addSublayer:[self previewLayer]];

        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(saveImageToPhotoAlbum)
                                                     name:kImageCapturedSuccessfully
                                                   object:nil];

        [[self captureSession] startRunning];

        UIImageView *dot = [[UIImageView alloc] initWithFrame:CGRectMake(50, 50, 200, 200)];
        dot.image = [UIImage imageNamed:@"draw.png"];
        [self.view addSubview:dot];
    }

Capturing the live camera feed with the overlay content using glReadPixels():

    - (UIImage *)glToUIImage
    {
        CGFloat scale = [[UIScreen mainScreen] scale];
        // CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
        CGRect s = CGRectMake(0, 0, 768.0f * scale, 1024.0f * scale);

        uint8_t *buffer = (uint8_t *)malloc(s.size.width * s.size.height * 4);
        glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
        CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4,
                                        CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault,
                                        ref, NULL, true, kCGRenderingIntentDefault);

        size_t width  = CGImageGetWidth(iref);
        size_t height = CGImageGetHeight(iref);
        size_t length = width * height * 4;
        uint32_t *pixels = (uint32_t *)malloc(length);

        CGContextRef context1 = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                      CGImageGetColorSpace(iref),
                                                      kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);

        // Flip vertically, since glReadPixels() returns rows bottom-up
        CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0f, height);
        transform = CGAffineTransformScale(transform, 1.0, -1.0);
        CGContextConcatCTM(context1, transform);
        CGContextDrawImage(context1, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef outputRef = CGBitmapContextCreateImage(context1);

        outputImage = [UIImage imageWithCGImage:outputRef];

        CGDataProviderRelease(ref);
        CGImageRelease(iref);
        CGContextRelease(context1);
        CGImageRelease(outputRef);
        free(pixels);
        free(buffer);

        UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
        NSLog(@"Screenshot size: %d, %d", (int)[outputImage size].width, (int)[outputImage size].height);

        return outputImage;
    }


    - (void)screenshot:(id)sender
    {
        [self glToUIImage];
    }

But it returns a black image.


user3496826
  • It's unclear from your post whether you want to take a screenshot or capture the raw images from the camera. glReadPixels() simply returns the OpenGL render buffer; since you don't appear to be using OpenGL, you're not going to get anything back. For a screen capture, you can follow this thread, especially the last post: http://stackoverflow.com/questions/2200736/how-to-take-a-screenshot-programmatically – MDB983 Apr 11 '14 at 15:51
  • I don't want a screen capture. I need the raw image (live camera) plus the overlay content, such as an image. See the attached augmented reality image; I need to capture the live camera feed and the overlay content. – user3496826 Apr 11 '14 at 17:22
  • Here is a reference to the Apple docs that explains how to create an image from the camera capture: https://developer.apple.com/library/ios/qa/qa1702/_index.html#//apple_ref/doc/uid/DTS40010192. Apple's "RosyWriter" sample app gives an example of retrieving the image buffer, applying a tint to each pixel, and then displaying the modified pixel buffer via OpenGL. Apple's pARK example shows how you can overlay locations on a camera feed. There are a number of approaches you can take, although it might be simpler to use an AR SDK. Let me know if you need more info/help. – MDB983 Apr 14 '14 at 22:43
  • I'm not using any AR SDK for my app. I just use an overlay image and capture. I followed http://www.musicalgeometry.com/?p=1681, but if I move the overlay image and take a snapshot, the overlay isn't captured at its current position in the live feed; it is always drawn with [overlay drawInRect:CGRectMake(30 * xScaleFactor, 100 * yScaleFactor, overlaySize.width * xScaleFactor, overlaySize.height * yScaleFactor)]; How can I resolve this? This is the issue (see the sketch below). – user3496826 Apr 15 '14 at 07:45
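
A minimal sketch of one way to address the issue in that last comment, assuming the still image returned by AVCaptureStillImageOutput (as in the linked musicalgeometry post) is passed in as capturedImage and the on-screen overlay is the dot image view from viewDidLoad; the method name and scale-factor math are illustrative, not taken from that post:

    // Sketch: composite the overlay at its *current* on-screen position into the
    // captured camera image, instead of the hard-coded 30/100 offsets.
    // `overlayView` (e.g. the dot image view) and `capturedImage` are assumed
    // to be supplied by the caller.
    - (UIImage *)imageByCompositing:(UIImageView *)overlayView onto:(UIImage *)capturedImage
    {
        CGSize imageSize = capturedImage.size;
        CGFloat xScaleFactor = imageSize.width  / CGRectGetWidth(self.view.bounds);
        CGFloat yScaleFactor = imageSize.height / CGRectGetHeight(self.view.bounds);

        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
        [capturedImage drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];

        // Read the overlay's frame at capture time so the composite follows it
        CGRect f = overlayView.frame;
        [overlayView.image drawInRect:CGRectMake(f.origin.x * xScaleFactor,
                                                 f.origin.y * yScaleFactor,
                                                 f.size.width * xScaleFactor,
                                                 f.size.height * yScaleFactor)];

        UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return combined;
    }

Because the overlay's frame is read when the snapshot is taken, the composite follows the overlay wherever it has been moved on screen.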

1 Answer


glReadPixels() won't work with an AV Foundation preview layer. There's no OpenGL ES context to capture pixels from, and even if there was, you'd need to capture from it before the scene was presented to the display.

If what you're trying to do is to capture an image overlaid on live video from the camera, my GPUImage framework could handle that for you. All you'd need to do would be to set up a GPUImageVideoCamera, a GPUImagePicture instance for what you needed to overlay, and a blend filter of some sort. You would then feed the output to a GPUImageView for display, and be able to capture still images from the blend filter at any point. The framework handles the heavy lifting for you with all of this.
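
For reference, here is a rough sketch of that pipeline, assuming the GPUImage framework is linked into the project. It uses GPUImageStillCamera (a GPUImageVideoCamera subclass) so stills can be pulled with capturePhotoAsImageProcessedUpToFilter:; exact API details can differ between GPUImage versions, and stillCamera, overlayPicture, and blendFilter are assumed to be instance variables so the pipeline isn't deallocated:

    // Assumed instance variables:
    // GPUImageStillCamera *stillCamera;
    // GPUImagePicture *overlayPicture;
    // GPUImageAlphaBlendFilter *blendFilter;

    - (void)setupBlendPipeline
    {
        // Live camera source
        stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPresetPhoto
                                                          cameraPosition:AVCaptureDevicePositionBack];
        stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

        // Image to overlay on the video
        overlayPicture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"draw.png"]];

        // Blend the two inputs; an alpha blend respects the overlay's transparency
        blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
        blendFilter.mix = 1.0;

        [stillCamera addTarget:blendFilter];
        [overlayPicture addTarget:blendFilter];

        // Display the blended result
        GPUImageView *filterView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
        [self.view addSubview:filterView];
        [blendFilter addTarget:filterView];

        [overlayPicture processImage];
        [stillCamera startCameraCapture];
    }

    - (IBAction)takeBlendedPhoto:(id)sender
    {
        // Capture the blended camera + overlay frame as a UIImage
        [stillCamera capturePhotoAsImageProcessedUpToFilter:blendFilter
                                      withCompletionHandler:^(UIImage *processedImage, NSError *error) {
            UIImageWriteToSavedPhotosAlbum(processedImage, nil, nil, nil);
        }];
    }

This avoids calling glReadPixels() directly; the framework reads the blended frame back for you.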

Brad Larson