
I want to implement a custom camera in my app, so I am creating the camera using AVCaptureDevice.

Now I want my custom camera to show only gray output, so I am trying to achieve this using setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains: and AVCaptureWhiteBalanceGains. I am using AVCamManual: Extending AVCam to Use Manual Capture as a reference.

- (void)setWhiteBalanceGains:(AVCaptureWhiteBalanceGains)gains
{
    NSError *error = nil;

    if ( [videoDevice lockForConfiguration:&error] ) {
        AVCaptureWhiteBalanceGains normalizedGains = [self normalizedGains:gains]; // Conversion can yield out-of-bound values, cap to limits
        [videoDevice setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:normalizedGains completionHandler:nil];
        [videoDevice unlockForConfiguration];
    }
    else {
        NSLog( @"Could not lock device for configuration: %@", error );
    }
}

But for that I have to pass RGB gain values between 1 and 4, so I created this method to clamp them to the minimum and maximum:

- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains
{
    AVCaptureWhiteBalanceGains g = gains;

    g.redGain = MAX( 1.0, g.redGain );
    g.greenGain = MAX( 1.0, g.greenGain );
    g.blueGain = MAX( 1.0, g.blueGain );

    g.redGain = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
    g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain );
    g.blueGain = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );

    return g;
}

I am also trying to get different effects, like passing static RGB gain values:

- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains
{
    AVCaptureWhiteBalanceGains g = gains;
    g.redGain = 3;
    g.greenGain = 2;
    g.blueGain = 1;
    return g;
}

Now I want to apply this grayscale formula to my custom camera: Pixel = 0.30078125f * R + 0.5859375f * G + 0.11328125f * B. This is what I have tried:

- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains
{
    AVCaptureWhiteBalanceGains g = gains;

    g.redGain = g.redGain * 0.30078125;
    g.greenGain = g.greenGain * 0.5859375;
    g.blueGain = g.blueGain * 0.11328125;

    float grayScale = g.redGain + g.greenGain + g.blueGain;

    g.redGain = MAX( 1.0, grayScale );
    g.greenGain = MAX( 1.0, grayScale );
    g.blueGain = MAX( 1.0, grayScale );

    g.redGain = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
    g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain);
    g.blueGain = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );

    return g;
}

So how can I keep these values between 1 and 4?

Is there any way or scale to compare these things?

Any help would be appreciated.

Meet Doshi
  • Adjusting the white balance will not transform a color image into a black and white image. You need to find a different API in order to do that, for example [vImageMatrixMultiply_ARGB8888](https://developer.apple.com/library/ios/documentation/Performance/Reference/vImage_transform/index.html#//apple_ref/c/func/vImageMatrixMultiply_ARGB8888) (a rough sketch of this approach appears after these comments). – Mats Jul 06 '16 at 14:20
  • @Mats: Thanks! Could you please provide some sample code for better understanding? – Meet Doshi Jul 07 '16 at 05:27
  • Maybe this question helps: http://stackoverflow.com/questions/21207099/ – Mats Jul 07 '16 at 05:46
  • Thanks @Mats, but I am still looking for a solution. That link did not help me solve this. Are there any other solutions? – Meet Doshi Jul 12 '16 at 13:53
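
For reference, here is a rough sketch (not part of the original question or comments) of the matrix-multiply approach Mats points to. It assumes Swift 2-era syntax to match the answer below, and that `srcBuffer` and `destBuffer` are hypothetical vImage_Buffers you have already created around ARGB8888 pixel data. The integer coefficients 77, 150 and 29 over a divisor of 256 correspond exactly to the question's formula (0.30078125 R + 0.5859375 G + 0.11328125 B).

import Accelerate

// srcBuffer / destBuffer are hypothetical, already-initialised vImage_Buffers.
var src = srcBuffer
var dest = destBuffer

// Rows are source channels, columns are destination channels, in A,R,G,B order
// (adjust the layout if your buffer is actually BGRA).
let divisor: Int32 = 256
let matrix: [Int16] = [
    256,   0,   0,   0,   // A passes through unchanged
      0,  77,  77,  77,   // R contributes 77/256 to each of R, G, B
      0, 150, 150, 150,   // G contributes 150/256 to each of R, G, B
      0,  29,  29,  29,   // B contributes 29/256 to each of R, G, B
]

let error = vImageMatrixMultiply_ARGB8888(&src, &dest, matrix, divisor, nil, nil, vImage_Flags(kvImageNoFlags))
if error != kvImageNoError {
    print("vImageMatrixMultiply_ARGB8888 failed: \(error)")
}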

1 Answer


CoreImage provides a host of filters for adjusting images on the GPU, and it can be used efficiently with video data, whether from a camera feed or a video file.

There is an article on objc.io showing how to do this. The examples are in Objective-C but the explanation should be clear enough to follow.

The basic steps are:

  1. Create an EAGLContext, configured to use OpenGLES2.
  2. Create a GLKView to display the rendered output, using the EAGLContext.
  3. Create a CIContext, using the same EAGLContext.
  4. Create a CIFilter using a CIColorMonochrome CoreImage filter.
  5. Create an AVCaptureSession with an AVCaptureVideoDataOutput.
  6. In the AVCaptureVideoDataOutputDelegate method, convert the CMSampleBuffer to a CIImage. Apply the CIFilter to the image. Draw the filtered image to the CIContext.

This pipeline ensures that the video pixel buffers stay on the GPU from camera to display, avoiding transfers to the CPU, in order to maintain realtime performance.

To save the filtered video, implement an AVAssetWriter, and append the sample buffer in the same AVCaptureVideoDataOutputDelegate where the filtering is done.
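
A minimal sketch of that recording step (not from the original answer), using the same Swift 2-era syntax as the example below. `outputURL` is a hypothetical destination file URL, and `filteredPixelBuffer` is a hypothetical CVPixelBuffer you have rendered the filtered CIImage into (for example with the CIContext's render-to-pixel-buffer method):

import AVFoundation

// Create the writer once, before recording starts.
let writer: AVAssetWriter
do {
    writer = try AVAssetWriter(URL: outputURL, fileType: AVFileTypeQuickTimeMovie)
} catch {
    fatalError("Could not create asset writer: \(error)")
}

let writerInput = AVAssetWriterInput(
    mediaType: AVMediaTypeVideo,
    outputSettings: [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: 1080,   // example dimensions; use your session's resolution
        AVVideoHeightKey: 1920
    ]
)
writerInput.expectsMediaDataInRealTime = true

let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: writerInput,
    sourcePixelBufferAttributes: nil
)

writer.addInput(writerInput)
writer.startWriting()
writer.startSessionAtSourceTime(kCMTimeZero) // or the first sample's timestamp

// Inside captureOutput(_:didOutputSampleBuffer:fromConnection:), after rendering
// the filtered image into filteredPixelBuffer:
if writerInput.readyForMoreMediaData {
    let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    adaptor.appendPixelBuffer(filteredPixelBuffer, withPresentationTime: time)
}

// When recording stops, finish the input and finalize the movie file.
writerInput.markAsFinished()
writer.finishWritingWithCompletionHandler {
    // The movie at outputURL is now complete.
}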

Here is an example in Swift.

Example on GitHub.

import UIKit
import GLKit
import AVFoundation

private let rotationTransform = CGAffineTransformMakeRotation(CGFloat(-M_PI * 0.5))

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    private var context: CIContext!
    private var targetRect: CGRect!
    private var session: AVCaptureSession!
    private var filter: CIFilter!

    @IBOutlet var glView: GLKView!

    override func prefersStatusBarHidden() -> Bool {
        return true
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)

        let whiteColor = CIColor(
            red: 1.0,
            green: 1.0,
            blue: 1.0
        )

        filter = CIFilter(
            name: "CIColorMonochrome",
            withInputParameters: [
                "inputColor" : whiteColor,
                "inputIntensity" : 1.0
            ]
        )

        // GL context

        let glContext = EAGLContext(
            API: .OpenGLES2
        )

        glView.context = glContext
        glView.enableSetNeedsDisplay = false

        context = CIContext(
            EAGLContext: glContext,
            options: [
                kCIContextOutputColorSpace: NSNull(),
                kCIContextWorkingColorSpace: NSNull(),
            ]
        )

        let screenSize = UIScreen.mainScreen().bounds.size
        let screenScale = UIScreen.mainScreen().scale

        targetRect = CGRect(
            x: 0,
            y: 0,
            width: screenSize.width * screenScale,
            height: screenSize.height * screenScale
        )

        // Setup capture session.

        let cameraDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

        let videoInput = try? AVCaptureDeviceInput(
            device: cameraDevice
        )

        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: dispatch_get_main_queue())

        session = AVCaptureSession()
        session.beginConfiguration()
        session.addInput(videoInput)
        session.addOutput(videoOutput)
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }

        let originalImage = CIImage(
            CVPixelBuffer: pixelBuffer,
            options: [
                kCIImageColorSpace: NSNull()
            ]
        )

        let rotatedImage = originalImage.imageByApplyingTransform(rotationTransform)

        filter.setValue(rotatedImage, forKey: kCIInputImageKey)

        guard let filteredImage = filter.outputImage else {
            return
        }

        context.drawImage(filteredImage, inRect: targetRect, fromRect: filteredImage.extent)

        glView.display()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didDropSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        let seconds = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        print("dropped sample buffer: \(seconds)")
    }
}
Luke Van In
  • Yes, perfect solution, thanks Luke! :) I have implemented this in my app, but sometimes it crashes on the `glView.display()` line. – Meet Doshi Jul 20 '16 at 11:09
  • How can I capture an image from the GLKView? – Meet Doshi Jul 20 '16 at 11:10
  • The crashes may be caused by modifying the filter or context on different threads. A safe way to work around this is to perform all work on the main thread (I have updated the example to show this). Just take care not to use a resource intensive filter (e.g. blur), or do too much extra work on the main thread. In practice you probably want to use multiple threads to avoid blocking the main thread, although this is a complex topic. Look at the Apple documentation for multithreading with OpenGL if interested. – Luke Van In Jul 23 '16 at 11:45
  • Thanks @Luke. Also, this code is not working properly on iOS 8. I think there might be a CIFilter or context issue. – Meet Doshi Jul 23 '16 at 11:58
  • What errors are you getting? Do you see a runtime error message? – Luke Van In Jul 23 '16 at 12:11
  • On iOS 8 I am getting this error: "BSXPCMessage received error for message: Connection interrupted". I have found some posts about it but no helpful answer. – Meet Doshi Jul 23 '16 at 12:16
  • That seems to be a known bug in iOS 8. A possible workaround is to disable GPU rendering and use the CPU instead, by adding `kCIContextUseSoftwareRenderer: NSNumber(booleanLiteral: true)` to the `CIContext` options. Another possible solution is to use a CoreGraphics `CIContext` backed by `CGContext`. You would then need to draw the `CIImage` to a `CGImage`, then display the image in a `UIImageView` or `CALayer`. Performance will probably not be good though. [Reference on SO](http://stackoverflow.com/questions/26065808/bsxpcmessage-received-error-for-message-connection-interrupted). – Luke Van In Jul 23 '16 at 12:47
  • Thanks @Luke. I want to add an oversaturation effect to the camera. The "oversaturation hint" should be calculated according to the formula provided (convert overexposed pixels (R,G,B >= 255) to yellow (RGB 255,255,0)). You can see an image at this URL: ( https://www.dropbox.com/s/h052hm2c615bu2f/OverSaturationEffect.png?dl=0 ). So how can I do this? Using inputIntensity? – Meet Doshi Jul 28 '16 at 10:06
  • Or do I have to use a different CIFilter name instead of CIColorMonochrome? – Meet Doshi Jul 28 '16 at 10:15
  • You would need a different filter. `CIColorCube` lets you remap the entire colour space: [Example showing how to use CIColorCube](https://developer.apple.com/library/tvos/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_filer_recipes/ci_filter_recipes.html#//apple_ref/doc/uid/TP30001185-CH4-SW2). Other filters, such as `CIColorMap`, may be easier to use but offer less control. See the [CIFilter documentation](https://developer.apple.com/library/tvos/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html) (a rough sketch of the CIColorCube approach appears after these comments). – Luke Van In Jul 28 '16 at 10:43
  • The GLKView is cutting into the iPhone 6 or 6s Plus devices. Is there any solution for this? – Meet Doshi Sep 22 '16 at 10:36
  • @MeetDoshi I don't understand what you are asking. – Luke Van In Oct 02 '16 at 17:33
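
Following up on the CIColorCube suggestion in the comments above, here is a rough sketch (not from the original answer) of an "oversaturation hint" cube that maps near-white, overexposed pixels to yellow, again using Swift 2-era syntax. The cube dimension, the 0.95 threshold, and the variable names are assumptions:

import CoreImage

let dimension = 16
var cube = [Float](count: dimension * dimension * dimension * 4, repeatedValue: 0)
var offset = 0

// In CIColorCube data the red index varies fastest, then green, then blue.
for b in 0..<dimension {
    for g in 0..<dimension {
        for r in 0..<dimension {
            let rf = Float(r) / Float(dimension - 1)
            let gf = Float(g) / Float(dimension - 1)
            let bf = Float(b) / Float(dimension - 1)
            let overexposed = rf > 0.95 && gf > 0.95 && bf > 0.95

            cube[offset]     = overexposed ? 1.0 : rf   // red
            cube[offset + 1] = overexposed ? 1.0 : gf   // green
            cube[offset + 2] = overexposed ? 0.0 : bf   // blue -> 0 gives yellow
            cube[offset + 3] = 1.0                      // alpha
            offset += 4
        }
    }
}

let cubeData = NSData(bytes: cube, length: cube.count * sizeof(Float))

// Use this filter in place of the CIColorMonochrome filter in the example above.
let oversaturationFilter = CIFilter(
    name: "CIColorCube",
    withInputParameters: [
        "inputCubeDimension": dimension,
        "inputCubeData": cubeData
    ]
)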