I have been looking into this for way too long now. I am trying to get macOS webcam data and run a CIDetector on the frames the webcam outputs.
I know I need to (a minimal sketch of this wiring follows the list):

- connect an AVCaptureDevice (as an input) to an AVCaptureSession,
- connect an AVCaptureVideoDataOutput (as an output) to the AVCaptureSession, and
- call .setSampleBufferDelegate(AVCaptureVideoDataOutputSampleBufferDelegate, DelegateQueue) on that output.
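For reference, here is the smallest self-contained version of that wiring I can write (a sketch, not my actual code; FrameReceiver, receiver, and delegateQueue are placeholder names, I'm using the current Swift API names rather than the older AVMediaTypeVideo-style constants my project uses, and I'm assuming camera permission has already been granted):

import AVFoundation

final class FrameReceiver: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        print("got a frame")
    }
}

let session = AVCaptureSession()
let receiver = FrameReceiver()                             // kept alive for the session's lifetime
let delegateQueue = DispatchQueue(label: "camera-frames")  // serial queue for frame callbacks

// 1. Camera device -> input -> session
if let device = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: device),
   session.canAddInput(input) {
    session.addInput(input)
}

// 2. Video data output -> session
let output = AVCaptureVideoDataOutput()
if session.canAddOutput(output) {
    session.addOutput(output)
}

// 3. Delegate + queue, then start
output.setSampleBufferDelegate(receiver, queue: delegateQueue)
session.startRunning()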
For some reason, after calling .setSampleBufferDelegate(...) (and of course after calling .startRunning() on the AVCaptureSession instance), my AVCaptureVideoDataOutputSampleBufferDelegate's captureOutput is not being called.

I found so many people having trouble with this online, but I was not able to find any solution. It seems to me like it has to do with the DispatchQueue.
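For comparison, this is what I believe the queue handling is supposed to look like, from the docs and examples I've read (a sketch; frameQueue and attachDelegate are my own names, and I may well be misunderstanding this, which is partly why I'm asking):

import AVFoundation

// My understanding: create one serial queue, hand it to the output, and
// AVFoundation then delivers frames to the delegate on that queue.
// No sync/async wrapper should be needed around the setup calls themselves.
let frameQueue = DispatchQueue(label: "VideoDataOutputQueue") // serial by default

func attachDelegate(_ delegate: AVCaptureVideoDataOutputSampleBufferDelegate,
                    to output: AVCaptureVideoDataOutput) {
    output.setSampleBufferDelegate(delegate, queue: frameQueue) // safe to call from the main thread
}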
MyDelegate.swift:
import AVFoundation
import CoreImage

class MyDelegate: NSObject {
    var context: CIContext?
    var detector: CIDetector?

    override init() {
        super.init()
        context = CIContext()
        // CIDetector(ofType:context:options:) requires the options argument, even if nil
        detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: nil)
        print("set up!")
    }
}
extension MyDelegate: AVCaptureVideoDataOutputSampleBufferDelegate {
    // Parameters are non-optional, matching the protocol requirement
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("success?")
        let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let image = CIImage(cvPixelBuffer: pixelBuffer)
        let features = detector!.features(in: image)
        for feature in features {
            print(feature.type)
            print(feature.bounds)
        }
    }

    func captureOutput(_ output: AVCaptureOutput, didDrop sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("fail?")
    }
}
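One thing I'm unsure about is this object's lifetime. If the output does not retain its delegate (I don't know whether it does), I assume something needs to hold a strong reference to it, e.g.:

// In the view controller: a strong reference so the delegate can't be
// deallocated while the session is running (assumption on my part:
// the data output might not retain its delegate).
let myDelegate = MyDelegate()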
ViewController.swift:
var captureSession: AVCaptureSession
var captureDevice: AVCaptureDevice?
var previewLayer: AVCaptureVideoPreviewLayer?
var vdo: AVCaptureVideoDataOutput
var videoDataOutputQueue: DispatchQueue
override func viewDidLoad() {
    super.viewDidLoad()
    camera.layer = CALayer()
    // Do any additional setup after loading the view, typically from a nib.
    captureSession.sessionPreset = AVCaptureSessionPresetLow

    // Get all audio and video devices on this machine
    let devices = AVCaptureDevice.devices()

    // Find the FaceTime HD camera object and assign it to captureDevice
    for device in devices! {
        print(device)
        if (device as AnyObject).hasMediaType(AVMediaTypeVideo) {
            print(device)
            captureDevice = device as? AVCaptureDevice
        }
    }

    if captureDevice != nil {
        do {
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice!))

            vdo.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: NSNumber(value: kCVPixelFormatType_32BGRA)]

            try captureDevice!.lockForConfiguration()
            captureDevice!.activeVideoMinFrameDuration = CMTimeMake(1, 30)
            captureDevice!.unlockForConfiguration()

            videoDataOutputQueue.sync {
                vdo.setSampleBufferDelegate(
                    MyDelegate(),   // an instance, not the type (see EDIT below)
                    queue: videoDataOutputQueue
                )
                vdo.alwaysDiscardsLateVideoFrames = true
                captureSession.addOutput(vdo)
                captureSession.startRunning()
            }
        } catch {
            print(error)
        }
    }
}
All of the AVFoundation-related variables used inside viewDidLoad are instantiated in the ViewController's init(). I've omitted the full initializer for brevity.
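Roughly, the relevant part of it looks like this (paraphrased, not my exact code; I'm using the coder-based initializer, and property names are as declared above):

required init?(coder: NSCoder) {
    captureSession = AVCaptureSession()
    vdo = AVCaptureVideoDataOutput()
    videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
    super.init(coder: coder)
}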
Any ideas?
Thanks, SO!
Kovek
EDIT:

- Fixed setting the delegate from self to a MyDelegate instance.
And this is how I initialize videoDataOutputQueue:
videoDataOutputQueue = DispatchQueue(
    label: "VideoDataOutputQueue"
)
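For what it's worth, a quick sanity check I could add to confirm the queue itself executes blocks (a hypothetical debug line, not currently in my project):

videoDataOutputQueue.async {
    print("videoDataOutputQueue runs blocks")
}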