My current setup is as follows (based on the ColorTrackingCamera project from Brad Larson): I'm using an AVCaptureSession set to AVCaptureSessionPreset640x480, and I let its output run through an OpenGL scene as a texture, which is then manipulated by a fragment shader.
I need this "lower quality" preset because I want to preserve a high frame rate while the user is previewing; I then want to switch to a higher-quality output when the user captures a still photo.
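For reference, the preview path looks roughly like this; it's a simplified sketch of my sample buffer delegate in the style of ColorTrackingCamera (videoFrameTexture is just a placeholder name for the GL texture the fragment shader samples, and the actual drawing code is omitted):
// Simplified preview path: upload each BGRA camera frame into a GL texture
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);

    // Using the BGRA extension to pull the frame data straight into the texture
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

    // ... render the quad with the fragment shader here ...

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}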
First I thought I could change the sessionPreset on the AVCaptureSession, but this forces the camera to refocus, which breaks usability:
[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
[captureSession commitConfiguration];
Currently I'm trying to add a second output, an AVCaptureStillImageOutput, to the AVCaptureSession, but I'm getting an empty pixel buffer, so I think I'm kind of stuck.
Here's my session setup code:
...
// Add the video frame output
[captureSession beginConfiguration];

videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                           forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}
else
{
    NSLog(@"Couldn't add video output");
}

[captureSession commitConfiguration];

// Add still output
[captureSession beginConfiguration];

stillOutput = [[AVCaptureStillImageOutput alloc] init];

if ([captureSession canAddOutput:stillOutput])
{
    [captureSession addOutput:stillOutput];
}
else
{
    NSLog(@"Couldn't add still output");
}

[captureSession commitConfiguration];

// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if (![captureSession isRunning])
{
    [captureSession startRunning];
}
...
And here is my capture method:
- (void)prepareForHighResolutionOutput
{
    // Find the video connection that feeds the still image output
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                             completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);

        int width = CVPixelBufferGetWidth(pixelBuffer);
        int height = CVPixelBufferGetHeight(pixelBuffer);
        NSLog(@"%i x %i", width, height);

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    }];
}
(width and height turn out to be 0.)
I've read through the AVFoundation documentation, but it seems I'm missing something essential.