4

I have a little problem with my pixellation image processing algorithm.

I load the image at startup into an array of type unsigned char *. After that, when needed, I modify this data and have to update the image on screen. This update takes too long. This is how I am doing it:

CGDataProviderRef dataProvider = CGDataProviderCreateWithData(.....);
CGImageRef cgImage = CGImageCreate(....);
[imageView setImage:[UIImage imageWithCGImage:cgImage]];
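
For reference, the full update path looks roughly like this (a sketch assuming a 32-bit RGBA buffer; rawData, width, and height are placeholder names, not my actual variables):

// Assumes rawData points to width * height * 4 bytes of RGBA8888 pixel data.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, rawData,
                                                              width * height * 4, NULL);
CGImageRef cgImage = CGImageCreate(width, height,
                                   8,          // bits per component
                                   32,         // bits per pixel
                                   width * 4,  // bytes per row
                                   colorSpace,
                                   kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                   dataProvider, NULL, NO, kCGRenderingIntentDefault);

[imageView setImage:[UIImage imageWithCGImage:cgImage]];

// Core Graphics objects follow the Create/Release rule.
CGImageRelease(cgImage);
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);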

Everything is working but it's very slow to process a large image. I tried running this on a background thread, but that didn't help.

So basically, this takes too long. Does anyone have any idea how to improve it?

George
  • Threads are likely to kill your performance since it's a compute-bound sort of thing with a single core. Can't say much more than that. – Xorlev Feb 19 '11 at 05:45

6 Answers

16

As others have suggested, you'll want to offload this work from the CPU to the GPU in order to have any kind of decent processing performance on these mobile devices.

To that end, I've created an open source framework for iOS called GPUImage that makes it relatively simple to do this kind of accelerated image processing. It does require OpenGL ES 2.0 support, but every iOS device sold for the last couple of years has this (stats show something like 97% of all iOS devices in the field do).

As part of that framework, one of the initial filters I've bundled is a pixellation one. The SimpleVideoFilter sample application shows how to use this, with a slider that controls the pixel width in the processed image:

Screenshot of pixellation filter application

This filter is the result of a fragment shader with the following GLSL code:

 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;
 uniform highp float fractionalWidthOfPixel;

 void main()
 {
    // Snap each fragment's texture coordinate to the corner of its block,
    // so every fragment in the block samples the same source pixel.
    highp vec2 sampleDivisor = vec2(fractionalWidthOfPixel);

    highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor);
    gl_FragColor = texture2D(inputImageTexture, samplePos);
 }

In my benchmarks, GPU-based filters like this perform 6-24X faster than equivalent CPU-bound processing routines for images and video on iOS. The above-linked framework should be reasonably easy to incorporate in an application, and the source code is freely available for you to customize however you see fit.
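
For a still UIImage, usage looks roughly like the following (a minimal sketch, assuming the framework's GPUImagePixellateFilter class and its imageByFilteringImage: convenience method; the image name is illustrative):

#import "GPUImage.h"

UIImage *inputImage = [UIImage imageNamed:@"sample.png"]; // illustrative image name

GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
pixellateFilter.fractionalWidthOfAPixel = 0.05; // pixel block width as a fraction of image width

// Runs the fragment shader above on the GPU and reads the result back as a UIImage.
UIImage *pixellatedImage = [pixellateFilter imageByFilteringImage:inputImage];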

Brad Larson
  • BL, regarding this all-time classic answer: a common need is to pixellate just an area. (Typically: (i) a face (ii) a car license plate (iii) a logo on a t-shirt or hat.) Perhaps you could include how to pixellate only a given rectangle in an image, cheers. – Fattie Dec 14 '13 at 06:37
  • @JoeBlow - There are two ways to approach this: render a quad with the original unfiltered image, then render a second, smaller quad where the above filtering is used, or create a fragment shader that uses a conditional statement of some sort to limit the filtered area. The former will be much faster than the latter, but will require a second rendering pass with a second set of vertices and texture coordinates. – Brad Larson Dec 17 '13 at 18:31
  • You know, I have a very quick question (surprisingly I've not been able to search the answer). If one is only using say one of the filters in GPUImage, in fact can one **build GPUImage with just that one class** (ie, delete the other 120+ filter classes)? Or is it impractical, or pointless? It would seem to be more efficient if one is only using the one filter. Thanks as always! – Fattie Jan 02 '14 at 11:35
  • @JoeBlow - You need some of the base classes, and there are dependencies between some of the filter classes, but you can generally strip out a number of the others if you don't need them. Some of the filters look simple, but actually contain multiple filters within them, so there might be more dependencies than you suspect for certain filters. You also need to edit the core project itself, which could put you out of sync with the repository as I update it. I know people who have done more extreme streamlining of the code in their projects, even tearing apart some of the base classes. – Brad Larson Jan 02 '14 at 16:24
  • thanks as always! it sounds like, really, the short answer here is for KISS just stick to the whole project. Thanks again – Fattie Jan 02 '14 at 17:17
  • The SimpleVideoFilter seems to use the Sepia filter. Which one is the pixellation version? – easythrees Jan 04 '14 at 01:28
4

How about using the Core Image filter named CIPixellate? Here is a code snippet showing how I implemented it. You can play with kCIInputScaleKey to get the intensity you want:

// initialize context and image
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *logo = [CIImage imageWithData:UIImagePNGRepresentation([UIImage imageNamed:@"test"])];

// set filter and properties
CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
[filter setValue:logo forKey:kCIInputImageKey];
[filter setValue:[[CIVector alloc] initWithX:150 Y:150] forKey:kCIInputCenterKey]; // default: 150, 150
[filter setValue:[NSNumber numberWithDouble:100.0] forKey:kCIInputScaleKey]; // default: 8.0

// render image
CIImage *result = (CIImage *) [filter valueForKey:kCIOutputImageKey];
CGRect extent = result.extent;
CGImageRef cgImage = [context createCGImage:result fromRect:extent];

// result (createCGImage:fromRect: follows the Create rule, so release the CGImage)
UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage);

Here is the official Apple Filter Tutorial and a List of available Filters.

Update #1

I just wrote a method to execute the rendering work in the background:

- (void) pixelateImage:(UIImage *) image withIntensity:(NSNumber *) intensity completionHandler:(void (^)(UIImage *pixelatedImage)) handler {

    // async task
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{

        // initialize context and image
        CIContext *context = [CIContext contextWithOptions:nil];
        CIImage *logo = [CIImage imageWithData:UIImagePNGRepresentation(image)];

        // set filter and properties
        CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
        [filter setValue:logo forKey:kCIInputImageKey];
        [filter setValue:[[CIVector alloc] initWithX:150 Y:150] forKey:kCIInputCenterKey]; // default: 150, 150
        [filter setValue:intensity forKey:kCIInputScaleKey]; // default: 8.0

        // render image
        CIImage *result = (CIImage *) [filter valueForKey:kCIOutputImageKey];
        CGRect extent = result.extent;
        CGImageRef cgImage = [context createCGImage:result fromRect:extent];

        // result (renamed to avoid shadowing the 'image' parameter; release the CGImage)
        UIImage *pixelatedImage = [[UIImage alloc] initWithCGImage:cgImage];
        CGImageRelease(cgImage);

        // dispatch to main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            handler(pixelatedImage);
        });
    });
}

Call it like this:

[self pixelateImage:[UIImage imageNamed:@"test"] withIntensity:[NSNumber numberWithDouble:100.0] completionHandler:^(UIImage *pixelatedImage) {
    self.logoImageView.image = pixelatedImage;
}];
Kai Burghardt
1

Converted @Kai Burghardt's answer to Swift 3

func pixelateImage(_ image: UIImage, withIntensity intensity: Int) -> UIImage {

    // initialize context and image
    let context = CIContext(options: nil)
    let logo = CIImage(data: UIImagePNGRepresentation(image)!)!

    // set filter and properties
    let filter = CIFilter(name: "CIPixellate")
    filter?.setValue(logo, forKey: kCIInputImageKey)
    filter?.setValue(CIVector(x: 150, y: 150), forKey: kCIInputCenterKey)
    filter?.setValue(intensity, forKey: kCIInputScaleKey)

    // render image
    let result = filter?.value(forKey: kCIOutputImageKey) as! CIImage
    let extent = result.extent
    let cgImage = context.createCGImage(result, from: extent)

    // result
    let processedImage = UIImage(cgImage: cgImage!)
    return processedImage
}

Call it like this:

self.myImageView.image = pixelateImage(UIImage(named: "test")!, withIntensity: 100)
anuraagdjain
1

The iPhone is not a great device for computationally intensive tasks like image manipulation. If you're looking to improve performance when displaying very high-resolution images (possibly while performing some image processing at the same time), look into CATiledLayer. It's designed to display content in tiled chunks, so you only display and process data for the tiles that are actually needed.
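
A minimal sketch of the usual pattern, assuming a UIView subclass backed by a CATiledLayer (the class name and drawing code here are illustrative, not from this answer):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledImageView : UIView
@end

@implementation TiledImageView

// Back this view with a CATiledLayer instead of a plain CALayer.
+ (Class)layerClass {
    return [CATiledLayer class];
}

// Called once per visible tile (on background threads), so only the
// tiles currently on screen get drawn and processed.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Draw (or pixellate) just the portion of the image covering 'rect' here.
    CGContextSetFillColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextFillRect(context, rect);
}

@end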

xan
1

Actually, it's as simple as this. A higher input scale value means more pixellation.

let filter = CIFilter(name: "CIPixellate")
filter?.setValue(inputImage, forKey: kCIInputImageKey)
filter?.setValue(30, forKey: kCIInputScaleKey)

let pixellatedCIImage = filter?.outputImage

The result is an optional CIImage; you can convert it to a UIImage using:

UIImage(ciImage: pixellatedCIImage!)
Ozgur Sahin
0

I agree with @Xorlev. The only thing I would add is that, if you are doing a lot of floating-point operations and building for armv6 with the Thumb ISA, try compiling without the -mthumb option; performance might improve.
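
In Xcode this is typically controlled per architecture via the "Compile for Thumb" build setting; a hedged xcconfig sketch, assuming the GCC_THUMB_SUPPORT setting name used by toolchains of that era:

// Disable Thumb only for armv6, where Thumb code cannot use the hardware FPU.
GCC_THUMB_SUPPORT = YES
GCC_THUMB_SUPPORT[arch=armv6] = NO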

MHC
  • I am building for armv6 and armv7 (Standard). Will take a look into the Thumb ISA and -mthumb thing. Thanks! – George Feb 19 '11 at 14:48