28

For a given multi-color PNG UIImage (with transparency), what is the best/Swift-idiomatic way to:

  1. create a duplicate UIImage
  2. find all black pixels in the copy and change them to red
  3. (return the modified copy)

There are a few related questions on SO but I haven't been able to find something that works.

mjswensen
  • One thing I can suggest is going through every pixel and manually changing it if it's black. – Aggressor Jul 27 '15 at 18:55
  • 1
    Indeed... my question is _how_ to do that :) I am new to Swift and am so unfamiliar with the APIs that I don't even know what to Google. – mjswensen Jul 27 '15 at 18:57
  • @mjswensen wasn't processing speed a barrier for you? I'm using the same code below for exactly the same scenario you mentioned, but it's taking 4-5 secs – Raja Saad May 24 '21 at 08:55

2 Answers

62

You have to extract the pixel buffer of the image, at which point you can loop through, changing pixels as you see fit. At the end, create a new image from the buffer.

In Swift 3, this looks like:

func processPixels(in image: UIImage) -> UIImage? {
    guard let inputCGImage = image.cgImage else {
        print("unable to get cgImage")
        return nil
    }
    let colorSpace       = CGColorSpaceCreateDeviceRGB()
    let width            = inputCGImage.width
    let height           = inputCGImage.height
    let bytesPerPixel    = 4
    let bitsPerComponent = 8
    let bytesPerRow      = bytesPerPixel * width
    let bitmapInfo       = RGBA32.bitmapInfo

    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("unable to create context")
        return nil
    }
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let buffer = context.data else {
        print("unable to get context data")
        return nil
    }

    let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)

    for row in 0 ..< Int(height) {
        for column in 0 ..< Int(width) {
            let offset = row * width + column
            if pixelBuffer[offset] == .black {
                pixelBuffer[offset] = .red
            }
        }
    }

    let outputCGImage = context.makeImage()!
    let outputImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)

    return outputImage
}

struct RGBA32: Equatable {
    private var color: UInt32

    var redComponent: UInt8 {
        return UInt8((color >> 24) & 255)
    }

    var greenComponent: UInt8 {
        return UInt8((color >> 16) & 255)
    }

    var blueComponent: UInt8 {
        return UInt8((color >> 8) & 255)
    }

    var alphaComponent: UInt8 {
        return UInt8((color >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        let red   = UInt32(red)
        let green = UInt32(green)
        let blue  = UInt32(blue)
        let alpha = UInt32(alpha)
        color = (red << 24) | (green << 16) | (blue << 8) | (alpha << 0)
    }

    static let red     = RGBA32(red: 255, green: 0,   blue: 0,   alpha: 255)
    static let green   = RGBA32(red: 0,   green: 255, blue: 0,   alpha: 255)
    static let blue    = RGBA32(red: 0,   green: 0,   blue: 255, alpha: 255)
    static let white   = RGBA32(red: 255, green: 255, blue: 255, alpha: 255)
    static let black   = RGBA32(red: 0,   green: 0,   blue: 0,   alpha: 255)
    static let magenta = RGBA32(red: 255, green: 0,   blue: 255, alpha: 255)
    static let yellow  = RGBA32(red: 255, green: 255, blue: 0,   alpha: 255)
    static let cyan    = RGBA32(red: 0,   green: 255, blue: 255, alpha: 255)

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
        return lhs.color == rhs.color
    }
}
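As a quick sanity check on the packing scheme (my own illustration, not part of the original answer), the shifts in `init` and the component accessors are exact inverses of each other:

```swift
// Standalone sketch of the RGBA32 packing: red in the top byte, alpha in the bottom.
let (red, green, blue, alpha): (UInt32, UInt32, UInt32, UInt32) = (255, 0, 0, 255)
let packed = (red << 24) | (green << 16) | (blue << 8) | (alpha << 0)
assert(packed == 0xFF0000FF)                // .red packs to 0xFF0000FF
assert(UInt8((packed >> 24) & 255) == 255)  // redComponent
assert(UInt8((packed >> 8) & 255) == 0)     // blueComponent
print("packed =", String(packed, radix: 16))
```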

For Swift 2 rendition, see previous revision of this answer.
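For completeness, a minimal usage sketch of the function above (the asset name "star" is hypothetical, not from the original post):

```swift
import UIKit

// Hypothetical usage sketch: "star" is an example asset name.
if let original = UIImage(named: "star"),
   let recolored = processPixels(in: original) {
    // `recolored` is a copy of `original` with black pixels turned red;
    // assign it wherever the original image was displayed.
    _ = recolored
}
```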

Rob
  • Rob, thank you for your prompt and thorough response! I am running into the following runtime error when I try your code: `: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 24 bits/pixel; 3-component color space; kCGImageAlphaNone; 352 bytes/row. fatal error: unexpectedly found nil while unwrapping an Optional value` - Could it be the `colorSpace`? – mjswensen Jul 27 '15 at 19:39
  • `Resizable iPad / iOS 8.4 (12H141)`. Thank you again for your help. – mjswensen Jul 27 '15 at 19:48
  • 1
    Yeah, that simulator works fine for me. The `kCGImageAlphaNone` in your error message is highly suspect. It's almost like you're passing `0` for the last parameter of `CGBitmapContextCreate`. Double check that `bitmapInfo` parameter. Try simply `CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue)` for `bitmapInfo`. – Rob Jul 27 '15 at 19:55
  • Yep, that was it! It works great now. Thank you again for your time and expertise! – mjswensen Jul 27 '15 at 20:00
  • I am trying the code in this answer, but the image I get as a result seems to be magnified, as if there were a confusion between pixels and points. Am I missing something? – Michel Oct 24 '15 at 16:05
  • @Michel - I don't think this would really magnify it, though it might seem like it was if you provided it a retina image (e.g. if you provided an image with a scale of 2, you'd get an image whose dimensions were twice as great, but with a scale of 1, which is really the same set of pixels, but just a different scale factor applied). But I've modified the routine above to preserve the scale (and the orientation) to avoid that confusion. – Rob Oct 24 '15 at 17:04
  • This how I build theImage before feeding it to processPixelsInImage. Is something wrong? UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0); view.layer.renderInContext(UIGraphicsGetCurrentContext()!) let theImage:UIImage = UIGraphicsGetImageFromCurrentImageContext() – Michel Oct 24 '15 at 17:15
  • I have only one version of your code, the one on this post(answer). I don't know if this is the revised version or not. Beside if I do the same as you and do not get the same result, I must be making a mistake when setting the new image, I am actually not sure about the correct way to do that. I tried 2 ways, they both expand the image. How do you do it? – Michel Oct 26 '15 at 11:42
  • In response to your earlier comment, I changed the code in my answer. See its [revision history](http://stackoverflow.com/posts/31661519/revisions), where I create the `UIImage` from the `CGImage`, now preserving the `scale` (and `orientation`) of the original image. – Rob Oct 26 '15 at 14:24
  • Now that works as you say. Thanks a lot for your help. It will allow me to move forward in my project. – Michel Oct 27 '15 at 01:09
  • Great answer: Briefly explain what has to be done, then provide working code. – Nicolas Miari Jan 05 '16 at 02:06
  • @SNos - It works fine for me on Swift 2.2. I'd suggest you post a separate question illustrating your problem. BTW, I have added Swift 3.0 implementation, though. (And I also got rid of those global `red`, `green`, `blue`, etc., functions and wrapped it all in a `RGBA32` struct.) – Rob Jun 17 '16 at 00:27
  • Just used it as a class and it works fine. How did you convert it for Swift 3? – SNos Jun 17 '16 at 09:12
  • Usually you can let the Xcode converter do most of the heavy lifting. Here, I just went through the issues the compiler raised one by one. It's all fairly self explanatory. It just takes a few minutes. – Rob Jun 17 '16 at 10:22
  • Can we have the Objective-C version of the above? – Rahul Vyas Oct 01 '16 at 05:48
  • You can clean this up, but this illustrates the basic Objective-C concept, which is nearly identical to the above: https://gist.github.com/robertmryan/b89cf29a4b4e69abb02fcfd6640bef51 – Rob Oct 01 '16 at 07:03
  • Here is the Swift 3 for a transparent color: `let clear = RGBA32(red: 0, green: 0, blue: 0, alpha: 0)` – Micah Montoya Dec 14 '16 at 14:36
  • It appears that the image that this returns has all zero color values. Any ideas? – modesitt May 06 '17 at 21:36
  • RGBA32 gives the compiler an error "Expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions" for the color definition in Swift 4. – Chewie The Chorkie Feb 01 '18 at 17:30
  • 1
    @VagueExplanation - Whenever that happens, split the offending line into separate statements. See how I did that in `init` method in my revised answer, above. – Rob Feb 02 '18 at 06:35
  • @Rob and all others, it does take some time due to the nested loop over rows and columns (width and height). I am posting the Android code here playing with Bitmap, which works fine with only one loop and is fast as well. – Raja Saad May 24 '21 at 08:23
  • @Rob thanks for your comment, I was trying the same code for an image of size 375*375. And I was changing the pixels color of like 5% - to 15% of the image, and it was taking 3-4 seconds for me on Simulator. – Raja Saad May 24 '21 at 12:40
  • I don't get the second part of your comment @Rob The Interim Step...... – Raja Saad May 24 '21 at 12:45
  • @RajaSaad Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/232837/discussion-between-rob-and-raja-saad). – Rob May 25 '21 at 02:08
  • @Rob thanks a lot man, its working fine on Release build. Saved a lot of time. – Raja Saad May 25 '21 at 06:18
  • @Rob once again joining the comments section! I am not eligible for chat. So, commenting my concerns here. I have an image of 1080*1080 with a size of 1.3 MB, its not working efficiently even for the adhoc and release build. What I feel is using CGImage is causing this. What if I go for CIImage? would it effect the performance? – Raja Saad Sep 10 '21 at 08:15
  • @RajaSaad - On my iPhone 12 Pro Max, the processing of a 1080×1080 px image with the above color substitution took 0.03 sec in a release build. You can parallelize the routine, but there just isn't enough going on computationally (at least with my simple color substitution) to justify the overhead. I only started to see processing time improvements in parallel processing when images started to exceed 10,000×10,000. You should probably should just try both `CIImage` and `CGImage` approaches and benchmark both. Or post your example to https://codereview.stackexchange.com. – Rob Sep 10 '21 at 15:56
  • https://codereview.stackexchange.com/questions/267889/changing-the-color-of-some-pixels-in-an-image-in-swift @Rob – Raja Saad Sep 11 '21 at 12:41
  • 1
    @Rob thanks a lot for saving my time!! The way you described each and every issue and solution to that was more than amazing. Now my code is working like a charm after I followed your instructions and your code snippet. To every one facing this kind of issue, or want to work on changing image pixels color, please go through the question link in the above comment and enjoy the thoroughly explained answer by Rob #Respect – Raja Saad Sep 13 '21 at 08:15
2

For better results, we can match a range of colors in the image's pixels. Building on @Rob's answer, I made an update, and now the result is better.

func processByPixel(in image: UIImage) -> UIImage? {

    guard let inputCGImage = image.cgImage else { print("unable to get cgImage"); return nil }
    let colorSpace       = CGColorSpaceCreateDeviceRGB()
    let width            = inputCGImage.width
    let height           = inputCGImage.height
    let bytesPerPixel    = 4
    let bitsPerComponent = 8
    let bytesPerRow      = bytesPerPixel * width
    let bitmapInfo       = RGBA32.bitmapInfo

    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("Cannot create context!"); return nil
    }
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let buffer = context.data else { print("Cannot get context data!"); return nil }

    let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)

    for row in 0 ..< Int(height) {
        for column in 0 ..< Int(width) {
            let offset = row * width + column

            /*
             * Here I'm looking for the color RGBA32(red: 231, green: 239, blue: 247, alpha: 255)
             * and converting pixels whose color falls within ±1 of it to transparent, i.e. the
             * comparison is (pixelRed >= targetRed - 1 && pixelRed <= targetRed + 1),
             * and likewise for green and blue.
             */

            if pixelBuffer[offset].redComponent >= 230 && pixelBuffer[offset].redComponent <= 232 &&
                pixelBuffer[offset].greenComponent >= 238 && pixelBuffer[offset].greenComponent <= 240 &&
                pixelBuffer[offset].blueComponent >= 246 && pixelBuffer[offset].blueComponent <= 248 &&
                pixelBuffer[offset].alphaComponent == 255 {
                // `RGBA32` from the accepted answer has no `.transparent`; use an explicit clear color
                pixelBuffer[offset] = RGBA32(red: 0, green: 0, blue: 0, alpha: 0)
            }
        }
    }

    let outputCGImage = context.makeImage()!
    let outputImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)

    return outputImage
}
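The chained ±1 range checks above can be factored into a small helper. This is my own hypothetical sketch (the function name and tuple shape are not from the answer), shown standalone so it can be tried outside the loop:

```swift
// Hypothetical helper: true when each channel is within `tolerance` of the target channel.
func matchesColor(_ pixel: (r: UInt8, g: UInt8, b: UInt8),
                  target: (r: UInt8, g: UInt8, b: UInt8),
                  tolerance: UInt8) -> Bool {
    // Absolute difference on UInt8 without overflow.
    func near(_ a: UInt8, _ b: UInt8) -> Bool {
        return (a > b ? a - b : b - a) <= tolerance
    }
    return near(pixel.r, target.r) && near(pixel.g, target.g) && near(pixel.b, target.b)
}

// The answer's target color is (231, 239, 247) with a tolerance of 1.
assert(matchesColor((r: 232, g: 239, b: 246), target: (r: 231, g: 239, b: 247), tolerance: 1))
assert(!matchesColor((r: 200, g: 239, b: 247), target: (r: 231, g: 239, b: 247), tolerance: 1))
```

With such a helper, the loop body reduces to a single condition per pixel, and the target color and tolerance become parameters instead of hard-coded literals.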

I hope this helps someone.

Coder ACJHP
  • It does take some time due to the nested loop over rows and columns (width and height). I can post the Android code here playing with Bitmap, which works fine with only one loop and is fast as well. – Raja Saad May 24 '21 at 08:30