35

I have an AVCaptureSession running with an AVCaptureVideoPreviewLayer.

I can see the video so I know it's working.

However, I'd like to have a collection view and in each cell add a preview layer so that each cell shows a preview of the video.

If I try to pass the preview layer into the cell and add it as a sublayer, then it removes the layer from the other cells, so it only ever displays in one cell at a time.

Is there another (better) way of doing this?

Fogmeister

5 Answers

69

I ran into the same problem of needing multiple live views displayed at the same time. The UIImage approach (see the answer below) was too slow for what I needed. Here are the two solutions I found:

1. CAReplicatorLayer

The first option is to use a CAReplicatorLayer to duplicate the layer automatically. As the docs say, it will automatically create "...a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal and color transformations applied to it."

This is super useful if there isn't a lot of interaction with the live previews besides simple geometric or color transformations (Think Photo Booth). I have most often seen the CAReplicatorLayer used as a way to create the 'reflection' effect.

Here is some sample code to replicate an AVCaptureVideoPreviewLayer:

Init AVCaptureVideoPreviewLayer

AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height / 4)];

Init CAReplicatorLayer and set properties

Note: This will replicate the live preview layer four times.

NSUInteger replicatorInstances = 4;

CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height / replicatorInstances);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.view.bounds.size.height / replicatorInstances, 0.0);

Add Layers

Note: From my experience you need to add the layer you want to replicate to the CAReplicatorLayer as a sublayer.

[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];
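
As noted above, each copy can also be given color and temporal offsets (think Photo Booth). A minimal sketch of such per-instance offsets on the same replicatorLayer; the specific values are arbitrary examples, not part of the original setup:

replicatorLayer.instanceDelay = 0.25;        // each copy lags the previous one by 0.25 s
replicatorLayer.instanceGreenOffset = -0.05; // progressively reduce green per copy
replicatorLayer.instanceBlueOffset = -0.1;   // progressively reduce blue per copy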

Downsides

A downside to using CAReplicatorLayer is that it handles all placement of the layer replications. It will apply any set transformations to each instance, and everything is contained within itself. For example, there is no way to have a replication of an AVCaptureVideoPreviewLayer in two separate cells.


2. Manually Rendering SampleBuffer

This method, albeit a tad more complex, solves the above-mentioned downside of CAReplicatorLayer. By manually rendering the live previews, you are able to render as many views as you want. Granted, performance might be affected.

Note: There might be other ways to render the SampleBuffer but I chose OpenGL because of its performance. Code was inspired and altered from CIFunHouse.

Here is how I implemented it:

2.1 Contexts and Session

Setup OpenGL and CoreImage Context

_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

// Note: must be done after all your GLKViews are properly set up
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                       options:@{kCIContextWorkingColorSpace : [NSNull null]}];

Dispatch Queue

This queue will be used for the session and delegate.

self.captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);

Init your AVSession & AVCaptureVideoDataOutput

Note: I have removed all device capability checks to make this more readable.

dispatch_async(self.captureSessionQueue, ^(void) {
    NSError *error = nil;

    // get the input device and also validate the settings
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    AVCaptureDevice *videoDevice = [videoDevices firstObject];

    // obtain device input
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

    // obtain the preset and validate the preset
    NSString *preset = AVCaptureSessionPresetMedium;

    // CoreImage wants BGRA pixel format
    NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};

    // create the capture session
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = preset;
    // ...

Note: The following code is the 'magic code'. It is where we create and add an AVCaptureVideoDataOutput to the session so we can intercept the camera frames using the delegate. This was the breakthrough I needed to solve the problem.

    // ...
    // create and configure video data output
    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoDataOutput.videoSettings = outputSettings;
    [videoDataOutput setSampleBufferDelegate:self queue:self.captureSessionQueue];

    // begin configure capture session
    [self.captureSession beginConfiguration];

    // connect the video device input and video data and still image outputs
    [self.captureSession addInput:videoDeviceInput];
    [self.captureSession addOutput:videoDataOutput];

    [self.captureSession commitConfiguration];

    // then start everything
    [self.captureSession startRunning];
});

2.2 OpenGL Views

We are using GLKView to render our live previews, so if you want 4 live previews you need 4 GLKViews.

self.livePreviewView = [[GLKView alloc] initWithFrame:self.bounds context:self.eaglContext];
self.livePreviewView.enableSetNeedsDisplay = NO; // we call -display ourselves when a new frame arrives

Because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90-degree transform so that we can draw the video preview as if we were in a landscape-oriented view. If you are using the front camera and want a mirrored preview (so that the user sees themselves as in a mirror), you need to apply an additional horizontal flip by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform.

self.livePreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
self.livePreviewView.frame = self.bounds;    
[self addSubview: self.livePreviewView];

Bind the frame buffer to get the frame buffer width and height. The bounds used by CIContext when drawing to a GLKView are in pixels (not points), hence the need to read from the frame buffer's width and height.

[self.livePreviewView bindDrawable];

In addition, since we will be accessing the bounds in another queue (_captureSessionQueue), we capture this information up front so that we won't be accessing the GLKView's properties from another thread/queue.

_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = self.livePreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = self.livePreviewView.drawableHeight;

dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);        

    // *Horizontally flip here, if using front camera.*

    self.livePreviewView.transform = transform;
    self.livePreviewView.frame = self.bounds;
});

Note: If you are using the front camera you can horizontally flip the live preview like this:

transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));

2.3 Delegate Implementation

After we have the contexts, session, and GLKViews set up, we can now render to our views from the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;
    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;

You will need a reference to each GLKView and its videoPreviewViewBounds. For simplicity, I will assume they are both contained in a UICollectionViewCell (a sketch of such a cell follows the delegate method). You will need to alter this for your own use case.

    for(CustomLivePreviewCell *cell in self.livePreviewCells) {
        CGFloat previewAspect = cell.videoPreviewViewBounds.size.width  / cell.videoPreviewViewBounds.size.height;

        // To maintain the aspect ratio of the screen size, we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }

        [cell.livePreviewView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        if (sourceImage) {
            [_ciContext drawImage:sourceImage inRect:cell.videoPreviewViewBounds fromRect:drawRect];
        }

        [cell.livePreviewView display];
    }
}

This solution lets you have as many live previews as you want using OpenGL to render the buffer of images received from the AVCaptureVideoDataOutputSampleBufferDelegate.
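
For reference, the cell class used in the loop above is not shown; a minimal sketch of what a CustomLivePreviewCell might look like (the names mirror the properties accessed above and are otherwise assumptions):

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>

@interface CustomLivePreviewCell : UICollectionViewCell
@property (nonatomic, strong) GLKView *livePreviewView;        // set up as in section 2.2
@property (nonatomic, assign) CGRect videoPreviewViewBounds;   // drawable size in pixels, captured on the main thread
@end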

3. Sample Code

Here is a GitHub project I threw together with both solutions: https://github.com/JohnnySlagle/Multiple-Camera-Feeds

Johnny
  • @souvickcse I hope it helps! :) – Johnny Aug 07 '14 at 17:11
  • After hours of searching, finally found this. Thanks. – Akshit Zaveri Apr 01 '15 at 05:25
  • @Johnny This really helped. But i want to display 1 full screen camera view and one at the right top corner (100,100). I believe that i can achieve it using OpenGL, but not successful yet. Any ideas? – Akshit Zaveri Apr 01 '15 at 10:43
  • @Akshit Zaveri i had same problem if you get any solution please share it. thanks – Chirag D jinjuwadiya Oct 24 '15 at 06:29
  • No mate. No solution yet. @ChiragDjinjuwadiya – Akshit Zaveri Oct 29 '15 at 10:37
  • Add [cell.livePreviewView deleteDrawable] at the beginning of the for loop in the last section of code and then you can have multiple sized live previews. @AkshitZaveri – NFilip Dec 29 '15 at 15:27
  • @NFilip thanks. i am no longer working on this app. but i will try in free time. :) – Akshit Zaveri Dec 29 '15 at 15:29
  • Can this be implemented to use both the frond and the rear camera? – Objectif Jan 16 '17 at 15:07
  • @Johnny hey I found your answer really helpful. I tried implementing it, but not able to achieve results. I have posted a question here https://stackoverflow.com/questions/48776151/output-camera-feed-to-2-uiviews I will really appreciate if you help me solve the issue. Cheers – Aakash Dave Feb 19 '18 at 07:47
  • Transforms and stuff can be completely avoided by just applying `ciImage.oriented(forExifOrientation: exifOrientation)` before you display the `CIImage`. –  Mar 21 '18 at 14:09
8

Implement the AVCaptureSession delegate method, which is:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection

Using this you can get the sample buffer output of each and every video frame. From the buffer output you can create an image using the method below.

- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, 
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
      UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationRight];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

So you can add several image views to your view and add these lines inside the delegate method that I mentioned before:

UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
imageViewOne.image = image;
imageViewTwo.image = image;
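
For context, here is a minimal sketch of how those lines might sit inside the delegate method, assuming the sample buffer delegate runs on a background queue (the image view outlets are placeholders):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // UIKit must only be touched on the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageViewOne.image = image;
        self.imageViewTwo.image = image;
    });
}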
Ushan87
  • Awesome! Thanks! I didn't use this exact method. I already had multiple views showing the output using the delegate method. Your post helped me realise that I could use this method and have a still image output set up as well for taking photos. So now I have one session with a delegate which posts a notification to update the "previews" and an output for taking full res photos :D Awesome! Thanks. – Fogmeister May 14 '13 at 13:09
  • Actually the method is not a delegate method from AVCaptureSession, it is a delegate from the AVCaptureVideoDataOutputSampleBufferDelegate. I had to add these lines to get this to work: `AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init]; [captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()]; [_captureSession addOutput:captureOutput];` – Raphael Jul 29 '14 at 15:21
4

Simply set the contents of the preview layer to another CALayer:

CGImageRef cgImage = (__bridge CGImageRef)self.previewLayer.contents;
self.duplicateLayer.contents = (__bridge id)cgImage;

You can do this with the contents of any Metal or OpenGL layer. There was no increase in memory usage or CPU load on my end, either. You're not duplicating anything but a tiny pointer. That's not so with these other "solutions."
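
To keep the duplicate layers live, the copy can be refreshed every frame; a minimal sketch using a CADisplayLink (the previewLayer and duplicateLayers properties, and the display link setup, are assumptions of mine rather than the linked project's exact code):

// e.g. CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(refreshDuplicates:)];
//      [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
- (void)refreshDuplicates:(CADisplayLink *)link {
    id contents = self.previewLayer.contents;
    for (CALayer *layer in self.duplicateLayers) {
        layer.contents = contents; // copies a pointer, not the pixel data
    }
}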

I have a sample project that you can download that displays 20 preview layers at the same time from a single camera feed. Each layer has a different effect applied to it.

You can watch a video of the app running, as well as download the source code at:

https://demonicactivity.blogspot.com/2017/05/developer-iphone-video-camera-wall.html?m=1

James Bush
  • Is this technique possible in Swift, do you know? – Michael Forrest Oct 19 '20 at 12:58
  • @MichaelForrest You can; but, you don’t need to rewrite this in Swift, even if the rest of your app is written in Swift. “Swift is completely compatible with Objective-C, so developers can interface between the two languages, create mixed-language apps, and take advantage of Cocoa Touch classes with Swift, and Swift classes with Objective-C.” https://www.upwork.com/resources/swift-vs-objective-c-a-look-at-ios-programming-languages – James Bush Oct 21 '20 at 13:01
1

Working in Swift 5 on iOS 13, I implemented a somewhat simpler version of the answer by @Ushan87. For testing purposes, I dragged a new, small UIImageView on top of my existing AVCaptureVideoPreviewLayer. In the ViewController for that window, I added an IBOutlet for the new view and a variable to describe the correct orientation for the camera being used:

    @IBOutlet var testView: UIImageView!
    private var extOrientation: UIImage.Orientation = .up

I then implemented the AVCaptureVideoDataOutputSampleBufferDelegate as follows:

// MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

        let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let ciimage : CIImage = CIImage(cvPixelBuffer: imageBuffer)
        let image : UIImage = self.convert(cmage: ciimage)

        DispatchQueue.main.sync(execute: {() -> Void in
            testView.image = image
        })

    }

    // Convert CIImage to UIImage
    func convert(cmage:CIImage) -> UIImage
    {
        let context:CIContext = CIContext.init(options: nil)
        let cgImage:CGImage = context.createCGImage(cmage, from: cmage.extent)!
        let image:UIImage = UIImage.init(cgImage: cgImage, scale: 1.0, orientation: extOrientation)
        return image
    }
}

For my purposes, the performance was fine. I did not notice any lagginess in the new view.

Melissa
-1

You can't have multiple previews. There is only one output stream, as Apple's AVFoundation documentation says. I've tried many ways, but you just can't.

Shamanoid
  • +1 for being correct but not accepted as there is a way around it using the data delegate methods of the session. – Fogmeister May 14 '13 at 13:21
  • While it might be true you can't have multiple AVCaptureVideoPreviewLayers or AVCaptureSessions, you can use the AVCaptureVideoDataOutputSampleBufferDelegate to manipulate the sample buffer how you choose. – Johnny Aug 06 '14 at 21:23