
I am trying to filter video on the iPhone. Here's my program structure and source code:

AppDelegate.h
AppDelegate.m
ViewController.h
ViewController.m

The AppDelegate files are the same as the default. Here's my ViewController.

//ViewController.h

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>
#import <QuartzCore/QuartzCore.h>
#import <CoreImage/CoreImage.h>
#import <ImageIO/ImageIO.h>

@interface ViewController : GLKViewController <AVCaptureVideoDataOutputSampleBufferDelegate>{
    AVCaptureSession *avCaptureSession;
    CIContext *coreImageContext;
    CIImage *maskImage;
    CGSize screenSize;
    CGContextRef cgContext;
    GLuint _renderBuffer;
    float scale;
}

@property (strong, nonatomic) EAGLContext *context;

-(void)setupCGContext;

@end

// ViewController.m
#import "ViewController.h"

@implementation ViewController

@synthesize context;

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }

    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;

    coreImageContext = [CIContext contextWithEAGLContext:self.context];

    glGenRenderbuffers(1, &_renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);

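    // Configure the camera: the default video device as input, with 32BGRA frames delivered to this class as the sample buffer delegate.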
    NSError *error;
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];

    [dataOutput setAlwaysDiscardsLateVideoFrames:YES]; 
    [dataOutput setVideoSettings:[NSDictionary  dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] 
                                                              forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    avCaptureSession = [[AVCaptureSession alloc] init];
    [avCaptureSession beginConfiguration];
    [avCaptureSession setSessionPreset:AVCaptureSessionPreset1280x720];
    [avCaptureSession addInput:input];
    [avCaptureSession addOutput:dataOutput];
    [avCaptureSession commitConfiguration];
    [avCaptureSession startRunning];

    [self setupCGContext];
    CGImageRef cgImg = CGBitmapContextCreateImage(cgContext);
    maskImage = [CIImage imageWithCGImage:cgImg];
    CGImageRelease(cgImg);
}

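// Delegate callback: invoked for every captured video frame, on the queue set above (here the main queue).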
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    image = [CIFilter filterWithName:@"CISepiaTone"
                       keysAndValues:kCIInputImageKey, image,
                                     @"inputIntensity", [NSNumber numberWithFloat:0.8],
                                     nil].outputImage;

    [coreImageContext drawImage:image atPoint:CGPointZero fromRect:[image extent]];

    [self.context presentRenderbuffer:GL_RENDERBUFFER];
}

-(void)setupCGContext {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * screenSize.width;
    NSUInteger bitsPerComponent = 8;
    cgContext = CGBitmapContextCreate(NULL, screenSize.width, screenSize.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);

    CGColorSpaceRelease(colorSpace);
}

The sepia filter works, but the video is a little slower. When I don't apply the filter, the video runs at normal speed. Any idea how I can improve this and make the video faster?

Thanks.

– rookieRailer
  • Perhaps there is computational work you can offload to a separate thread. You might read up on `NSThread`, `NSOperation` and blocks. – Alex Reynolds Jan 08 '12 at 13:59
  • Does it make any difference? Since I am filtering the video and showing it on the screen, wouldn't delegating the filtering task to another thread, getting the filtered output back from that thread, and then showing it on the screen be the same as doing the whole thing on one thread? Using a background thread would be helpful if it weren't real time, I guess. Please suggest. Thanks. – rookieRailer Jan 08 '12 at 16:35
  • Threading would probably help on dual core devices. Do computation on a background thread, and UI updates on the main thread. Profile with a smaller version of your app, maybe. – Alex Reynolds Jan 08 '12 at 21:02

3 Answers


As I describe here, the sepia filter in Core Image wasn't quite able to run in real time, but other filters might be. It depends on the hardware capabilities of the target device, as well as the iOS version (Core Image performance has improved significantly over the last several iOS versions).

However, if I may plug my open source framework again, GPUImage lets you do this much, much faster. It can apply a sepia tone filter on a 640x480 frame of video in 2.5 ms on an iPhone 4, which is more than fast enough for the 30 FPS video from that camera.

The following code does live filtering of video from the rear-mounted camera on an iOS device, displaying that video within a portrait-oriented view:

videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];

sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageRotationFilter *rotationFilter = [[GPUImageRotationFilter alloc] initWithRotation:kGPUImageRotateRight];

[videoCamera addTarget:rotationFilter];
[rotationFilter addTarget:sepiaFilter];
filterView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filterView];
[sepiaFilter addTarget:filterView];

[videoCamera startCameraCapture];

– Brad Larson

I realise this is an old question now, but...

[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

that line causes your video callback to be invoked on the main (UI) thread.

If you change it to something like:

[dataOutput setSampleBufferDelegate:self
                              queue:dispatch_queue_create("cQ", DISPATCH_QUEUE_SERIAL)];

Then, in your callback, if you need to update your UI you should do:

dispatch_async(dispatch_get_main_queue(), ^{
    [coreImageContext drawImage:image atPoint:CGPointZero fromRect:[image extent] ];
    [self.context presentRenderbuffer:GL_RENDERBUFFER];
});

That will help a lot, as the computationally expensive work will execute on a background thread and the image drawing will not affect the capture.
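
Putting the two changes together, the callback might be restructured roughly like this (a minimal sketch: `captureQueue` is a hypothetical serial-queue ivar, and the actual filtering step is elided):

// During setup (e.g. in viewDidLoad): a serial queue keeps frames in order.
captureQueue = dispatch_queue_create("com.example.capture", DISPATCH_QUEUE_SERIAL);
[dataOutput setSampleBufferDelegate:self queue:captureQueue];

// Delegate callback: the heavy Core Image work stays on the background queue;
// only the draw/present hops back to the main (UI) thread.
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // ... apply your filter(s) to `image` here, still on the background queue ...

    dispatch_async(dispatch_get_main_queue(), ^{
        [coreImageContext drawImage:image atPoint:CGPointZero fromRect:[image extent]];
        [self.context presentRenderbuffer:GL_RENDERBUFFER];
    });
}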

Side Note:

Blindly using sample code you find on the internet without reading up on how the technology works is not a good way to develop applications (a lot of people are guilty of this).

– Tony Million
  • I also got a speedup when not using the main queue. However, I found that neither DISPATCH_QUEUE_SERIAL nor DISPATCH_QUEUE_CONCURRENT was as fast as simply setting the dispatch_queue_attr_t for dispatch_queue_create to NULL. Oddly, the header queue.h has this define: `#define DISPATCH_QUEUE_SERIAL NULL`, so I don't get what's happening there. On an iPad 3 / iOS 6, I get 10 fps on the main queue, about 15 fps using SERIAL, 20 fps with CONCURRENT, and about 25 fps with NULL, capturing 640x480 front-facing video and doing some simple image processing in a fragment shader. Wish it were faster! – Angus Forbes Aug 08 '13 at 06:16
  • You shouldn't use a concurrent queue, as your frames may be processed out of order. – Tony Million Aug 08 '13 at 10:49

The following:

[CIFilter filterWithName:@"CISepiaTone" ...]

is called every time you get a buffer/frame. You only need to create the filter once, so move its creation outside the callback and reuse the same filter.
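
For example (a rough sketch; `sepiaFilter` here is a hypothetical ivar you would add to the view controller):

// Create and configure the filter once, e.g. in viewDidLoad.
sepiaFilter = [CIFilter filterWithName:@"CISepiaTone"];
[sepiaFilter setValue:[NSNumber numberWithFloat:0.8] forKey:@"inputIntensity"];

// Then, for every frame in the capture callback, just swap in the new input image:
[sepiaFilter setValue:image forKey:kCIInputImageKey];
image = sepiaFilter.outputImage;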

– Kamal