
I'm running into trouble trying to blur part of the screen in my iOS app. See the image below for a better idea of what I'm trying to do.

[Image: BlurBox]

Only the content behind the "BlurBox" needs to be blurry; the rest can stay clear. So if you were looking at a table view, only the content underneath the BlurBox would be blurred (even as you scroll), and the rest would look clear.

My first approach was to call UIGraphicsGetImageFromCurrentImageContext() every 0.01s to flatten all the layers under the BlurBox into one image, then blur that image and display it on top of everything.
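For context, the -screenshot helper used in the code further down is roughly along these lines (just a sketch; the blurBox property is a placeholder for however the box's frame is tracked):

- (UIImage *)screenshot
{
    CGRect blurFrame = self.blurBox.frame; // placeholder: the region to blur
    UIGraphicsBeginImageContextWithOptions(blurFrame.size, YES, 0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Shift the context so only the content under the BlurBox gets drawn.
    CGContextTranslateCTM(ctx, -blurFrame.origin.x, -blurFrame.origin.y);
    [self.view.layer renderInContext:ctx];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}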

The methods I've tried for blurring are:

https://github.com/tomsoft1/StackBluriOS

https://github.com/coryleach/UIImageAdjust

https://github.com/esilverberg/ios-image-filters

https://github.com/cmkilger/CKImageAdditions

[layer setRasterizationScale:0.25];
[layer setShouldRasterize:YES];

As well as a few custom attempts. I've also looked at Apple's GLImageProcessing but I think that it is a bit overkill for what I'm trying to do here.

The problem is that they are all too slow. The app is not going on the App Store, so I'm open to using any private/undocumented frameworks.

A kind of far-out idea I had was to override the drawRect: method of all the components I use (UITableViewCell, UITableView, etc.) and blur each of them independently on the fly. However, this would take some time; does this even sound like a viable option?


UPDATE:

I have tried to use CIFilters as follows:

CIImage *inputImage = [[CIImage alloc] initWithImage:[self screenshot]];

CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
[blurFilter setValue:[NSNumber numberWithFloat:10.0f] forKey:@"inputRadius"];

CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];

CIContext *context = [CIContext contextWithOptions:nil];

CGImageRef blurredCGImage = [context createCGImage:outputImage fromRect:outputImage.extent];
self.bluredImageView.image = [UIImage imageWithCGImage:blurredCGImage];
CGImageRelease(blurredCGImage); // createCGImage: returns a +1 reference, even under ARC

This does work; however, it is incredibly slow. :(

I am seeing that some implementations will blur only when I pass in an image loaded from disk. If I pass in a UIImage that I created using UIGraphicsGetImageFromCurrentImageContext(), it doesn't work. Any ideas on why this would be?


UPDATE:

I have tried epatel's suggestion as follows:

CALayer *backgroundLayer = [CALayer layer];

CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
backgroundLayer.backgroundFilters = [NSArray arrayWithObject:blurFilter];

[[self.view layer] addSublayer:backgroundLayer];

However, it doesn't work :(


UPDATE SINCE BOUNTY ADDED:

I have managed to get the BlurBox working correctly using TomSoft1's stackblur since he added the ability to normalize an image to RGBA format (32 bits/pixel) on the fly. However, it is still pretty slow.

I have a timer calling an update every 0.03s to grab the image of what's underneath the BlurBox, blur that image, and display it on screen. I need help boosting the "fps" of the BlurBox.
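Roughly, the update loop looks like this (a sketch; -blurWithStackBlur: is a placeholder for whatever blur routine ends up being used):

- (void)startBlurUpdates
{
    [NSTimer scheduledTimerWithTimeInterval:0.03
                                     target:self
                                   selector:@selector(updateBlurBox)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)updateBlurBox
{
    // Grab what's currently underneath the BlurBox, blur it, and show it.
    UIImage *snapshot = [self screenshot];
    self.bluredImageView.image = [self blurWithStackBlur:snapshot]; // placeholder helper
}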

– random

5 Answers


I would recommend Brad Larson's GPUImage, which is fully backed by the GPU for a wide variety of image processing effects. It's very fast; fast enough that in his demo app he does real-time video processing from the camera with an excellent frame rate.

https://github.com/BradLarson/GPUImage

Here is a code snippet I wrote to apply a tilt-shift style blur, which blurs the top and bottom thirds of the image but leaves the middle un-blurred. His library is extremely extensive and contains almost every kind of image filter effect imaginable.

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:[self screenshot]];

GPUImageTiltShiftFilter *boxBlur = [[GPUImageTiltShiftFilter alloc] init];
boxBlur.blurSize = 0.5;

[stillImageSource addTarget:boxBlur];
[stillImageSource processImage];

// Read the result back from the last filter in the chain.
UIImage *processedImage = [boxBlur imageFromCurrentlyProcessedOutput];
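If the content under the BlurBox needs to stay live (e.g. while scrolling) rather than working from a one-off screenshot, GPUImageUIElement, which Brad Larson mentions in the comments below, can serve as the input instead of GPUImagePicture. A rough, untested sketch; the view, frame, and filter choices here are placeholders:

// Capture a UIKit view directly as a GPUImage source (placeholder view and frame).
CGRect blurBoxFrame = CGRectMake(0, 100, 320, 200);   // wherever the BlurBox sits

GPUImageUIElement *uiElementInput =
    [[GPUImageUIElement alloc] initWithView:self.tableView];
GPUImageGaussianBlurFilter *blur = [[GPUImageGaussianBlurFilter alloc] init];
GPUImageView *blurOverlay = [[GPUImageView alloc] initWithFrame:blurBoxFrame];

[uiElementInput addTarget:blur];
[blur addTarget:blurOverlay];
[self.view addSubview:blurOverlay];

// Re-render whenever the underlying UI changes (e.g. on every scroll callback).
[uiElementInput update];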
– brynbodayle
  • I'd recommend using a GPUImageUIElement as an input for the item that needs to be blurred. You could modify the GPUImageGaussianSelectiveBlurFilter to cover a rectangular area, rather than a circular one, if you wanted to only blur a portion of a UI element. – Brad Larson Sep 09 '12 at 17:24
  • After speaking with my boss I'm afraid that I won't be able to use this awesome library that would make my life 10000x easier. We *can* use outside "classes" but not full libraries. Supposedly some legal crap. However, I think that I might be able to use Brad's code as a starting point for abstracting the parts that I need. This was extremely helpful and thank you. +150 for the suggestion and +2^56 to @BradLarson for open sourcing this awesome code. – random Sep 10 '12 at 18:03
  • Thanks, sorry you can't use it fully. @BradLarson just curious, why use the GPUImageUIElement over the GPUImagePicture? – brynbodayle Sep 10 '12 at 18:10
  • Is there any way you could post some code to get me going as to how to use GPUImageUIElement as an input directly? @BradLarson – random Sep 10 '12 at 19:13
  • @bbodayle - If your purpose was to selectively blur a portion of a UI element, that input uses a little faster means of capturing the raster bitmap from the UI element and uploading it. The picture route introduces some extra Core Graphics overhead from having to pull from a UIImage or CGImageRef. – Brad Larson Sep 10 '12 at 20:14
  • @cory - That seems like a bizarre limitation, given that there's no real difference licensing-wise between using a few classes and the full framework. I guess you could just pick out the classes you need from the framework, like GPUImageOutput, GPUImageUIElement, GPUImageFilter, and a modified GPUImageSelectiveBlurFilter (along with a couple of dependencies) and build those right into your application. You could avoid having to use the framework as a static library that way. I've not had anyone have a problem with the code before, since I don't care how people use it or make money off of it. – Brad Larson Sep 10 '12 at 20:21
  • @BradLarson It really is. I believe that it is more his chauvinistic attitude than actual legal problems. I will pick it apart as a last resort; I don't like to destroy others' work like that. I took a peek at the parts I thought I would need :S I'm hoping it won't be as difficult as it looks. – random Sep 10 '12 at 21:15
  • @cory - The individual classes should be usable by themselves, as long as you pull in the appropriate superclasses and one or two supporting elements. You shouldn't need to extract anything from within them. There's a lot of scaffolding code in there that might be hard to pull apart if you're not familiar with OpenGL ES 2.0. – Brad Larson Sep 10 '12 at 21:19
  • @BradLarson Awesome, I'm not very familiar with OpenGL ES 2.0, which is why it looked so daunting. – random Sep 10 '12 at 22:37
  • @BradLarson Yes, an example of GPUImageUIElement would be great please! – strange Apr 07 '14 at 13:18

Though it may be a bit late to respond, you can use Core Image filters. The reason it is so slow is this line:

CIContext *context = [CIContext contextWithOptions:nil];

Apple's documentation on getting the best performance out of Core Image states, first of all:

"Don't create a CIContext object every time you render. Contexts store a lot of state information; it's more efficient to reuse them."

My personal solution to this is to make a singleton for the Core Image context, so I only ever create one.
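As a rough sketch (the SharedCIContext class name here is just illustrative), it amounts to something like:

@interface SharedCIContext : NSObject
+ (CIContext *)context;
@end

@implementation SharedCIContext

+ (CIContext *)context
{
    static CIContext *sharedContext = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Created once and reused for every render.
        sharedContext = [CIContext contextWithOptions:nil];
    });
    return sharedContext;
}

@end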

My code is in this demo project on GitHub.

https://github.com/KyleLopez/DemoCoreImage

Feel free to use it, or find another solution to your liking. The slowest part I've found in Core Image is the context; image processing after that is really fast.

– Kyle
  • Thank you for the reply. With iOS 7 around the corner I think this question is almost obsolete, but I looked at your project and it is very helpful. – random Aug 26 '13 at 18:47
  • I'm afraid that's not why it's slow. I've profiled this kind of blurring operation and rendering the blurred image takes 99.9% of the time. The time taken to make the context doesn't even show up. – w0mbat Sep 04 '14 at 19:49
  • According to the Apple docs, the Core Image context supports both CPU and GPU rendering. There is a context option to disable the software renderer (CPU), in turn forcing the context to use the GPU. – sang Sep 06 '14 at 09:11
  • You should definitely be trying to recycle the CIContext - not doing so will be a huge performance overhead. The observation that @w0mbat made about the performance hit being in the blur and not in the context creation is misguided. This is very likely true, BUT the reason the blur is taking the time is that it will need to do a lot of setup (for example loading and compiling the shaders to do the blur). If you reuse the context the blur won't need to recreate the shaders and it should run a lot faster. – Gavin Maclean Mar 15 '16 at 09:39

I haven't tested this but I wonder if you could place a CALayer where you want the box to be blurred and then find a useful CIFilter that you can set on the CALayer's backgroundFilters. Just a thought.

See CALayer.backgroundFilters

– epatel

You might try the Apple Core Image Filter (CIFilter) set of routines. They require iOS 5, so you might not be able to use them.

I am not sure if it is faster than the methods you have tried, but I have used it in projects in the past, and it works really well. If you can grab the part of the screen you want to make blurry, put that into an image, pass it through a filter, and then re-display it at the appropriate place on the screen, that should work.

I used the filters to change the colors of an image in real-time, and it worked well.

http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Reference/QuartzCoreFramework/Classes/CIFilter_Class/Reference/Reference.html

– PaulPerry
  • Would you mind posting a bit of code on how you did it? I tried for hours yesterday and could never get it to work with CIFilters. What I've noticed with some of the methods I tried is that it would ONLY blur when I was passing an image that I loaded from disk. If I tried to pass it one I grabbed with UIGraphicsGet.. it wouldn't work. – random Sep 04 '12 at 20:56
  • I have updated my question to show how I am using CIFilter. Could you check it for correctness? – random Sep 04 '12 at 21:25
  • Looks like you already got the CIFilter working with CIGaussianBlur (from the update above), and it is too slow. I used a different filter, which won't help here, sorry. – PaulPerry Sep 05 '12 at 15:40

Have you tried this library:

https://github.com/gdawg/uiimage-dsp

It uses the vDSP/Accelerate framework and seems easy to use.
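If pulling in the whole library isn't an option, the underlying idea is roughly the following (a hedged, untested sketch of a CPU box blur through Accelerate's vImage; it assumes the input image is already a 32-bit RGBA/ARGB bitmap):

#import <UIKit/UIKit.h>
#import <Accelerate/Accelerate.h>

static UIImage *BoxBlurredImage(UIImage *image, uint32_t boxSize)
{
    if (boxSize % 2 == 0) boxSize += 1;                 // kernel size must be odd

    CGImageRef cgImage = image.CGImage;
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

    vImage_Buffer inBuffer = {
        .data     = (void *)CFDataGetBytePtr(pixelData),
        .height   = CGImageGetHeight(cgImage),
        .width    = CGImageGetWidth(cgImage),
        .rowBytes = CGImageGetBytesPerRow(cgImage)
    };
    vImage_Buffer outBuffer = inBuffer;
    outBuffer.data = malloc(inBuffer.rowBytes * inBuffer.height);

    // Box-convolve the 32-bit bitmap; NULL temp buffer lets vImage allocate its own.
    vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0,
                               boxSize, boxSize, NULL, kvImageEdgeExtend);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width, outBuffer.height, 8,
                                             outBuffer.rowBytes, colorSpace,
                                             CGImageGetBitmapInfo(cgImage));
    CGImageRef blurredCGImage = CGBitmapContextCreateImage(ctx);
    UIImage *blurred = [UIImage imageWithCGImage:blurredCGImage];

    CGImageRelease(blurredCGImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(outBuffer.data);
    CFRelease(pixelData);

    return blurred;
}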

BTW: 0.01s seems far too quick. 0.03s should do as well.

– Codo
  • Even if I bring it up, it is still behind the refresh rate. Yes, I have tried gdawg's library. The problem is that it wouldn't blur an image that I grabbed with UIGraphicsGet.. It would ONLY blur an image that I loaded from disk. Any idea why that would be? – random Sep 04 '12 at 20:58
  • @random UIImages are backed by either a CIImage or a CGImage, but gdawg's lib only supported CGImage (and the same goes for [my current Swift port](https://github.com/Coeur/ImageEffects)). But you can convert a CIImage to a CGImage if you wish to. Or you may consider using [my `renderImage` or my `cropped`](https://github.com/Coeur/ImageEffects/blob/master/SwiftImageEffects/ImageEffects%2Bextensions.swift) to keep a CGImage in the first place. – Cœur Feb 14 '19 at 05:40