
I'm trying to integrate a custom camera view, following some slightly outdated code while doing so. I've had several errors, but I believe I've fixed all but two.

Here is the current code so far:

import Foundation
import AVFoundation
import UIKit

class setupView : UIViewController {

@IBOutlet var cameraView: UIView!
@IBOutlet var nameTextField: UITextField!

var captureSession = AVCaptureSession()
var stillImageOutput = AVCapturePhotoOutput()
var previewLayer = AVCaptureVideoPreviewLayer()

override func viewDidLoad() {

    let session = AVCaptureDeviceDiscoverySession.init(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .back)
    if let device = session?.devices[0] {

        if device.position == AVCaptureDevicePosition.back {

            do {
                let input = try AVCaptureDeviceInput(device: device)
                if captureSession.canAddInput(input) {
                    captureSession.addInput(input)
                    stillImageOutput.outputSettings = [AVVideoCodecKey : AVVideoCodecJPEG]

                    if captureSession.canAddOutput(stillImageOutput) {
                        captureSession.addOutput(stillImageOutput)
                        captureSession.startRunning()

                        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                        previewLayer.AVLayerVideoGravityResizeAspectFill
                        previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                        cameraView.layer.addSublayer(previewLayer)

                        previewLayer.bounds = cameraView.frame
                        previewLayer.position = CGPoint(x: cameraView.frame.width / 2, y: cameraView.frame.height / 2)
                    }
                }
            } catch {

            }
        }
    }
}

@IBAction func takePhoto(_ sender: Any) {
    }

@IBAction func submitAction(_ sender: Any) {

}
}

I'm currently getting 2 errors:

Value of type 'AVCapturePhotoOutput' has no member 'outputSettings'

Value of type 'AVCaptureVideoPreviewLayer' has no member 'AVLayerVideoGravityResizeAspectFill'

  • The error "Value of type 'AVCapturePhotoOutput' has no member 'outputSettings'" goes away if I revert to AVCaptureStillImageOutput; however, that has been deprecated. – Alex Ingram Jun 14 '17 at 01:09

2 Answers


You are almost there. The problem is that some of the AVFoundation classes are deprecated, and there is more than one way to take a photo now. Here are the issues with your code.

Value of type 'AVCapturePhotoOutput' has no member 'outputSettings'

This is because AVCapturePhotoOutput doesn't have any member named outputSettings. Check out the full documentation of AVCapturePhotoOutput.

outputSettings is actually a member of AVCaptureStillImageOutput, and that class has been deprecated since iOS 10.0.
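With AVCapturePhotoOutput, the codec is instead chosen per capture, not set once on the output object. A minimal sketch, using the Swift 3 / iOS 10-era constant names that the question's code already uses:

```swift
import AVFoundation

// AVCapturePhotoOutput has no outputSettings property; each capture
// gets its own AVCapturePhotoSettings object instead.
let photoOutput = AVCapturePhotoOutput()

// Request JPEG output (AVVideoCodecJPEG is the iOS 10-era constant;
// newer SDKs spell it AVVideoCodecType.jpeg).
let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecJPEG])
// The settings object is later handed to capturePhoto(with:delegate:).
```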

Value of type 'AVCaptureVideoPreviewLayer' has no member 'AVLayerVideoGravityResizeAspectFill'

Again, the same kind of mistake: AVCaptureVideoPreviewLayer has no member with that name. AVLayerVideoGravityResizeAspectFill is a constant, and you assign it to the layer's videoGravity property, like below.

  previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
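Applied to the question's code, the whole preview-layer block would look roughly like this (a sketch using the Swift 3 / iOS 10 names and the asker's cameraView outlet):

```swift
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
// Assign the constant to the videoGravity property
// instead of treating it as a member of the layer.
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer.connection?.videoOrientation = .portrait
cameraView.layer.addSublayer(previewLayer)
// Setting frame to the container's bounds replaces the
// separate bounds/position lines from the question.
previewLayer.frame = cameraView.bounds
```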

As you mentioned, the code you followed is outdated, and it uses the deprecated AVCaptureStillImageOutput.

If you really want to use AVCapturePhotoOutput, then you should follow the steps below.

These are the steps to capture a photo.

  • Create an AVCapturePhotoOutput object. Use its properties to determine supported capture settings and to enable certain features (for example, whether to capture Live Photos).
  • Create and configure an AVCapturePhotoSettings object to choose features and settings for a specific capture (for example, whether to enable image stabilization or flash).
  • Capture an image by passing your photo settings object to the capturePhoto(with:delegate:) method along with a delegate object implementing the AVCapturePhotoCaptureDelegate protocol. The photo capture output then calls your delegate to notify you of significant events during the capture process.

Have the code below in your capture method (takePhoto in your case), and don't forget to conform to and implement the AVCapturePhotoCaptureDelegate protocol in your class.

let settings = AVCapturePhotoSettings()
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [
    kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
    kCVPixelBufferWidthKey as String: 160,
    kCVPixelBufferHeightKey as String: 160
]
settings.previewPhotoFormat = previewFormat
self.cameraOutput.capturePhoto(with: settings, delegate: self)
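For the delegate part, here is a minimal sketch of the iOS 10-era callback, attached to the asker's setupView class (iOS 11 later replaced this method with photoOutput(_:didFinishProcessingPhoto:error:)):

```swift
import AVFoundation
import UIKit

extension setupView: AVCapturePhotoCaptureDelegate {

    // Called by the photo output when the capture finishes (iOS 10 signature).
    func capture(_ captureOutput: AVCapturePhotoOutput,
                 didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
                 previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
        guard error == nil,
              let buffer = photoSampleBuffer,
              // Convert the sample buffer into JPEG data.
              let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(
                  forJPEGSampleBuffer: buffer,
                  previewPhotoSampleBuffer: previewPhotoSampleBuffer),
              let image = UIImage(data: data) else {
            print("photo capture failed: \(String(describing: error))")
            return
        }
        // Use the captured UIImage here, e.g. display it or save it.
        _ = image
    }
}
```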

If you would like to know the different ways of capturing a photo with AVFoundation, check out my previous SO answer.

Apple's documentation also explains very clearly how to use AVCapturePhotoOutput.

Bluewings
import AVFoundation
import Foundation

@IBOutlet weak var mainimage: UIImageView!

let captureSession = AVCaptureSession()
let stillImageOutput = AVCaptureStillImageOutput()
var previewLayer: AVCaptureVideoPreviewLayer?
var captureDevice: AVCaptureDevice?

override func viewDidLoad() {
    super.viewDidLoad()
    captureSession.sessionPreset = AVCaptureSessionPresetHigh

    if let devices = AVCaptureDevice.devices() as? [AVCaptureDevice] {
        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if device.hasMediaType(AVMediaTypeVideo) {
                // Finally check the position and confirm we've got the front camera
                if device.position == AVCaptureDevicePosition.front {
                    captureDevice = device
                    if captureDevice != nil {
                        print("Capture device found")
                        beginSession()
                    }
                }
            }
        }
    }
}

func beginSession() {
    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
        stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

        if captureSession.canAddOutput(stillImageOutput) {
            captureSession.addOutput(stillImageOutput)
        }
    } catch {
        print("error: \(error.localizedDescription)")
    }

    guard let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) else {
        print("no preview layer")
        return
    }

    self.view.layer.addSublayer(previewLayer)
    previewLayer.frame = self.view.layer.frame
    captureSession.startRunning()
    self.view.addSubview(mainimage)
}

This code is working in my app

Amul4608