
Is there a way to display the camera image without using AVCaptureVideoPreviewLayer? I want to do a screen capture, but I cannot get it to work.

    session = AVCaptureSession()

    camera = AVCaptureDevice.default(
        AVCaptureDevice.DeviceType.builtInWideAngleCamera,
        for: AVMediaType.video,
        position: .front) // position: .front
    do {
        input = try AVCaptureDeviceInput(device: camera)

    } catch let error as NSError {
        print(error)
    }

    if(session.canAddInput(input)) {
        session.addInput(input)
    }

    let previewLayer =  AVCaptureVideoPreviewLayer(session: session)
    cameraView.backgroundColor = UIColor.red
    previewLayer.frame = cameraView.bounds
    previewLayer.videoGravity = AVLayerVideoGravity.resizeAspect

    cameraView.layer.addSublayer(previewLayer)
    session.startRunning()

I am currently trying to broadcast a screen capture that composites the camera image with some UIViews. However, when I use AVCaptureVideoPreviewLayer, the camera image does not appear in the screen capture. Therefore, I want to display the camera image in a way that can be screen-captured.

user9260589
  • Yes, there are a few ways actually. But what exactly is your aim here? "I want to do screen capture, but I can not do it" looks pretty important so it would be best if you rephrase the whole thing and ask your question directly. Include what is expected result and what is your current result please. – Matic Oblak Mar 23 '18 at 06:43
  • @ Matic Oblak thank you! I am currently trying to broadcast a screen capture. It is to synthesize the camera image and some UIView. However, if you use AVCaptureVideoPreviewLayer screen capture can not be done and the camera image is not displayed. Therefore, I want to display the camera image so that screen capture can be performed. – user9260589 Mar 23 '18 at 06:57

1 Answer


Generally, views that are rendered directly by the GPU may not be redrawn on the CPU. This includes OpenGL content and these preview layers.

The "screen capture" redraws the screen into a new context on the CPU, which naturally misses the GPU-rendered parts.

You should try playing around with adding outputs to the session, which will give you images, or rather CMSampleBuffer shots, that may be used to generate the image.

There are plenty of ways to do this, but you will most likely need to go a step lower. You can add an output to your session to receive samples directly. This takes a bit of code, so please refer to some other posts like this one. The point is that you will have a didOutputSampleBuffer delegate method which feeds you CMSampleBuffer objects that may be used to construct pretty much anything in terms of images.
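For reference, here is a minimal sketch of attaching an AVCaptureVideoDataOutput so that frames arrive as CMSampleBuffers; the class name, queue label, and pixel format choice are my own assumptions, not from the linked post:

```swift
import AVFoundation

// Receives camera frames as CMSampleBuffers instead of (or alongside) a preview layer.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.frames") // serial delegate queue

    func attach(to session: AVCaptureSession) {
        // BGRA is convenient for later conversion to images
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                kCVPixelFormatType_32BGRA]
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    // Called once per captured frame on the delegate queue
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Convert or store the sample buffer here.
    }
}
```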

Now in your case I assume you will be aiming to get a UIImage from the sample buffer. Doing so again takes a bit of code, so refer to some other post like this one.
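As a sketch of what such a conversion might look like, one common route goes through Core Image; orientation handling is omitted here and the function name is my own:

```swift
import AVFoundation
import UIKit

// Converts a camera CMSampleBuffer into a UIImage via Core Image.
func image(from sampleBuffer: CMSampleBuffer) -> UIImage? {
    // The pixel buffer holds the raw frame data
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext() // in practice, create this once and reuse it
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```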

To put it all together, you could just as well use an image view and drop the preview layer: as each sample buffer arrives, create an image and update the image view. I am not sure what the performance of this would be, though. If the image itself is enough for your case, then you don't need a view snapshot at all.

But IF you do:

On snapshot, create this image. Then overlay your preview layer with an image view that shows the generated image (add it as a subview). Then create the snapshot and remove the image view, all in a single chunk:

func snapshot() -> UIImage? {
    // Cover the preview layer with an image view showing the latest camera frame
    let imageView = UIImageView(frame: self.previewPanelView.bounds)
    imageView.image = self.imageFromLatestSampleBuffer()
    imageView.contentMode = .scaleAspectFill // or .scaleAspectFit, to match your video gravity
    self.previewPanelView.addSubview(imageView)

    // Take the snapshot while the image view is covering the preview layer
    let image = createSnapshot()

    imageView.removeFromSuperview()

    return image
}
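The `createSnapshot()` helper is not shown above; a plausible implementation (taking the target view as a parameter, which is my assumption) redraws the view hierarchy on the CPU so that the overlaid image view is included:

```swift
import UIKit

// Renders a view and its subviews into a UIImage on the CPU.
func createSnapshot(of view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    // drawHierarchy renders what the view actually displays,
    // including subviews such as the temporary image view
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}
```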

Let us know how things turn out, what you tried, and what did or did not work.

Matic Oblak