I want to take an image and invert the colors in iOS.
7 Answers
To expand on quixoto's answer, and because I have relevant source code from a project of my own: if you need to drop to on-CPU pixel manipulation, then the following, to which I've added exposition, should do the trick:
@implementation UIImage (NegativeImage)

- (UIImage *)negativeImage
{
    // get width and height as integers, since we'll be using them as
    // array subscripts, etc, and this'll save a whole lot of casting
    CGSize size = self.size;
    int width = size.width;
    int height = size.height;

    // create a suitable RGB+alpha bitmap context, in RGBA byte order
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // draw the current image to the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    // run through every pixel, a scan line at a time...
    for(int y = 0; y < height; y++)
    {
        // get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];

        // step through the pixels one by one...
        for(int x = 0; x < width; x++)
        {
            // get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get uninflected RGB. We
            // multiply by 255 to keep precision while still using
            // integers
            int r, g, b;
            if(linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;

            // perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;

            // multiply by alpha again, divide by 255 to undo the
            // scaling before, store the new values and advance
            // the pointer we're reading pixel data from
            linePointer[0] = r * linePointer[3] / 255;
            linePointer[1] = g * linePointer[3] / 255;
            linePointer[2] = b * linePointer[3] / 255;
            linePointer += 4;
        }
    }

    // get a CG image from the context, wrap that into a UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);

    // and return
    return returnImage;
}

@end
So that adds a category method to UIImage that:
- creates a clear CoreGraphics bitmap context that it can access the memory of
- draws the UIImage to it
- runs through every pixel, converting from premultiplied alpha to uninflected RGB, inverting each channel separately, multiplying by alpha again and storing back
- gets an image from the context and wraps it into a UIImage
- cleans up after itself, and returns the UIImage
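With the category imported, calling it is a one-liner; since that was asked in the comments below, here's a quick hypothetical example (the asset name is a placeholder):

UIImage *original = [UIImage imageNamed:@"photo"]; // placeholder asset name
UIImage *inverted = [original negativeImage];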

- Thanks for supplying this code. How could I get this code to work with Retina devices? e.g. `if (retinaScale > 0.0) { UIGraphicsBeginImageContextWithOptions(image.size, NO, retinaScale); } else { UIGraphicsBeginImageContext(image.size); }` – elprl Jul 12 '12 at 10:18
- @elprl there shouldn't be any difference between this on a Retina device and a non-Retina device — it operates directly on the UIImage, and the thing that has to deal with the actual practicalities of display is the UIImageView. – Tommy Jul 13 '12 at 18:50
- To make this code Retina-ready, use: `int width = size.width * self.scale; int height = size.height * self.scale;` – Vincent Tourraine Jul 26 '12 at 08:04
- Hi Tommy, I need to set an alpha colour for this. The code works well, but instead of white I want a transparent colour, so I guessed that setting an alpha colour might work. Can you please suggest how to do this? Thanks in advance @Tommy – Babul Oct 25 '12 at 10:55
- @Babul assuming I got the code right, the alpha value should be in `linePointer[3]` and you don't need to do any conversion on it or anything. The multiplies and divides on R, G and B are just because they're premultiplied by alpha — alpha itself isn't premultiplied by anything. – Tommy Oct 25 '12 at 20:16
- @VincentTourraine And also replace `UIImage *returnImage = [UIImage imageWithCGImage:cgImage]` with `[UIImage imageWithCGImage:cgImage scale:self.scale orientation:UIImageOrientationUp]` – Şafak Gezer Nov 24 '15 at 15:56
- @Tommy Could you provide this in Swift? – Khaled Annajar Oct 10 '16 at 13:35
- How do I call this? – Dec 24 '21 at 03:16
With CoreImage:
#import <CoreImage/CoreImage.h>

@implementation UIImage (ColorInverse)

+ (UIImage *)inverseColor:(UIImage *)image {
    CIImage *coreImage = [CIImage imageWithCGImage:image.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
    [filter setValue:coreImage forKey:kCIInputImageKey];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    return [UIImage imageWithCIImage:result];
}

@end
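Usage is a one-liner (`original` here stands for whatever UIImage you want inverted); note, as the comment below and later answers point out, that the result is CIImage-backed, which not every UIKit code path handles well:

UIImage *inverted = [UIImage inverseColor:original];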

- Last line would be better as: `[UIImage imageWithCIImage:result scale:image.scale orientation:image.imageOrientation];` so that scale and orientation of the original image are preserved. – prewett Jun 02 '15 at 01:04
Swift 3 update (from @BadPirate's answer):
extension UIImage {
    func inverseImage(cgResult: Bool) -> UIImage? {
        let coreImage = UIKit.CIImage(image: self)
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        guard let result = filter.value(forKey: kCIOutputImageKey) as? UIKit.CIImage else { return nil }
        if cgResult { // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
            return UIImage(cgImage: CIContext(options: nil).createCGImage(result, from: result.extent)!)
        }
        return UIImage(ciImage: result)
    }
}

- Yes, 100%. Without this CG fix, the image was not displaying when used as a button image. Thanks. – Paul Stevenson Oct 15 '22 at 03:53
Sure, it's possible: one way is using the "difference" blend mode (`kCGBlendModeDifference`). See this question (among others) for the outline of the code to set up the image processing. Use your image as the bottom (base) image, and then draw a pure white bitmap on top of it.
You can also do the per-pixel operation manually by getting the `CGImageRef`, drawing it into a bitmap context, and then looping over the pixels in the bitmap context.
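A minimal sketch of the blend-mode approach, assuming a UIKit image context is acceptable (the category and method names here are purely illustrative, not from any library):

@implementation UIImage (DifferenceBlendNegative)

- (UIImage *)negativeImageUsingDifferenceBlend
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);

    // bottom layer: the original image
    [self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height)];

    // top layer: pure white, composited with the difference blend mode,
    // so each channel becomes |255 - value|, i.e. an inversion
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeDifference);
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillRect(context, CGRectMake(0, 0, self.size.width, self.size.height));

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

@end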
- Please supply code that could replace @Tommy's answer (which I'm using now). – meaning-matters Feb 10 '14 at 01:08
Created a Swift extension to do just this. Also, because CIImage-based UIImages break down (most libraries assume a CGImage is set), I added an option to return a UIImage based on a CGImage rendered from the modified CIImage:
extension UIImage {
    func inverseImage(cgResult: Bool) -> UIImage? {
        let coreImage = UIKit.CIImage(image: self)
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        guard let result = filter.valueForKey(kCIOutputImageKey) as? UIKit.CIImage else { return nil }
        if cgResult { // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
            return UIImage(CGImage: CIContext(options: nil).createCGImage(result, fromRect: result.extent))
        }
        return UIImage(CIImage: result)
    }
}

Tommy's answer is THE answer, but I'd like to point out that this could be a really intensive and time-consuming task for bigger images. There are two frameworks that could help you in manipulating images:
- Core Image
- Accelerate
It's also really worth mentioning Brad Larson's amazing GPUImage framework. GPUImage makes the routines run on the GPU using custom fragment shaders in an OpenGL ES 2.0 environment, with a remarkable speed improvement. With Core Image, if a negative filter is available, you can choose CPU or GPU; using Accelerate, all routines run on the CPU, but use vector math for image processing.
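For instance, a minimal sketch using GPUImage's stock inversion filter (this assumes the GPUImage framework is already linked into the project; the exact import path may vary with how it's integrated):

#import <GPUImage/GPUImage.h>

// GPUImageColorInvertFilter runs the inversion as a fragment shader on the GPU
GPUImageColorInvertFilter *invertFilter = [[GPUImageColorInvertFilter alloc] init];
UIImage *inverted = [invertFilter imageByFilteringImage:sourceImage]; // sourceImage: the UIImage to invert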

- Please supply code that could replace @Tommy's answer (which I'm using now). – meaning-matters Feb 10 '14 at 01:06
- Please check here: https://developer.apple.com/library/mac/documentation/graphicsimaging/Conceptual/CoreImaging/ci_intro/ci_intro.html and here: https://developer.apple.com/library/ios/documentation/Performance/Conceptual/vImage/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001001 . On Apple's site you can also find some samples. – Andrea Feb 10 '14 at 08:32
Swift 5 update of @MLBDG's answer:
extension UIImage {
    func inverseImage(cgResult: Bool) -> UIImage? {
        // note: CIImage(image:) also handles CGImage-backed UIImages;
        // self.ciImage is nil unless the UIImage was created from a CIImage
        let coreImage = CIImage(image: self)
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        guard let result = filter.value(forKey: kCIOutputImageKey) as? CIImage else { return nil }
        if cgResult { // UIImages that are based on CIImages don't work properly with a lot of calls
            return UIImage(cgImage: CIContext(options: nil).createCGImage(result, from: result.extent)!)
        }
        return UIImage(ciImage: result)
    }
}
