
I create a UIImage with a red background color:

let theimage: UIImage = imageWithColor(UIColor(red: 1, green: 0, blue: 0, alpha: 1))

func imageWithColor(color: UIColor) -> UIImage {
    let rect = CGRectMake(0.0, 0.0, 200.0, 200.0)
    UIGraphicsBeginImageContext(rect.size)
    let context = UIGraphicsGetCurrentContext()

    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillRect(context, rect)

    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return image
}

I am retrieving the color in the middle of the image as follows:

let h:CGFloat=theimage.size.height;
let w:CGFloat=theimage.size.width;


let test:UIColor=theimage.getPixelColor(CGPoint(x: 100, y: 100))

var rvalue:CGFloat = 0;
var gvalue:CGFloat = 0;
var bvalue:CGFloat = 0;
var alfaval:CGFloat = 0;
test.getRed(&rvalue, green: &gvalue, blue: &bvalue, alpha: &alfaval);


print("Blue Value : " + String(bvalue));
print("Red Value : " + String(rvalue));


extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {

        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4

        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}

As result I get :

Blue Value : 1.0
Red Value : 0.0

Why is this? I can't find the mistake.

mcfly soft
  • what does your getRed() do? – Satheesh Jan 04 '16 at 14:55
  • getRed() is swift standard functionality. This is not coded by me. – mcfly soft Jan 04 '16 at 14:57
  • ok gotcha, did you try printing the UIColor object that getPixelColor(), returns? – Satheesh Jan 04 '16 at 15:00
  • Yes . I can also get the info in XCode and the values are the same. Setting the Image to RED, returns BLUE. Setting the Image to GREEN, it return 0 to all R,G,B. I cannot understand. – mcfly soft Jan 04 '16 at 15:02
  • Does the image look correct when you display it? Have you tried returning the RGB values from your colour object rather than the image? There's always a chance that there's something odd going on with your img a rather than the getRed function – Russell Jan 04 '16 at 15:23
  • Yes the Image looks good. I made a lot of basic checks like this. I guess someone can only understand, when taking the code and test. I can simply switch blue and red and its ok, but this is really a mess and I would like to understand. – mcfly soft Jan 04 '16 at 15:54

1 Answer


The problem is not the built-in getRed function, but rather the function that builds the UIColor object from the individual color components in the provider data. Your code is assuming that the provider data is stored in RGBA format, but it apparently is not. It would appear to be in ARGB format. Also, I'm not sure you have the byte order right, either.

When you have an image, there are a variety of ways of packing those into the provider data. A few examples are shown in the Quartz 2D Programming Guide:

(figure: examples of 16- and 32-bit pixel formats, from the Quartz 2D Programming Guide)
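To see concretely how a red image can read back as blue, consider a single 32-bit ARGB pixel stored little-endian (a standalone sketch; the literal pixel value is just for illustration):

```swift
// A pure-red pixel as a 32-bit ARGB value: A=FF, R=FF, G=00, B=00.
let pixel: UInt32 = 0xFFFF0000

// Stored little-endian, its bytes land in memory as B, G, R, A.
withUnsafeBytes(of: pixel.littleEndian) { bytes in
    // Reading those bytes with an RGBA assumption swaps the channels:
    let r = bytes[0]  // actually the blue byte  → 0x00
    let b = bytes[2]  // actually the red byte   → 0xFF
    print(r, b)       // 0 255 — "red" comes back 0, "blue" comes back 1.0
}
```

That is exactly the symptom in the question: the bytes are fine, but they're being interpreted in the wrong order.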

If you're going to have a getPixelColor routine that is hard-coded for a particular format, I might check the alphaInfo and bitmapInfo like so (in Swift 4.2):

extension UIImage {
    func getPixelColor(point: CGPoint) -> UIColor? {
        guard let cgImage = cgImage,
            let pixelData = cgImage.dataProvider?.data
            else { return nil }

        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        let alphaInfo = cgImage.alphaInfo
        assert(alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst, "This routine expects alpha to be first component")

        let byteOrderInfo = cgImage.byteOrderInfo
        assert(byteOrderInfo == .order32Little || byteOrderInfo == .orderDefault, "This routine expects little-endian 32bit format")

        let bytesPerRow = cgImage.bytesPerRow
        let pixelInfo = Int(point.y) * bytesPerRow + Int(point.x) * 4

        let a: CGFloat = CGFloat(data[pixelInfo+3]) / 255
        let r: CGFloat = CGFloat(data[pixelInfo+2]) / 255
        let g: CGFloat = CGFloat(data[pixelInfo+1]) / 255
        let b: CGFloat = CGFloat(data[pixelInfo  ]) / 255

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
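If you're not sure which format a given image actually uses, you can inspect it directly before sampling (a quick diagnostic sketch; theImage is a placeholder for whatever image you're probing):

```swift
if let cgImage = theImage.cgImage {
    // Does alpha come first, as the asserts above require?
    print(cgImage.alphaInfo == .premultipliedFirst)
    // Is the data little-endian 32-bit?
    print(cgImage.byteOrderInfo == .order32Little)
    // Layout details; note bytesPerRow is not necessarily width * 4.
    print(cgImage.bitsPerPixel, cgImage.bytesPerRow)
}
```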

And if you're always building this image programmatically, with code that depends on the bitmap info, I'd explicitly specify those details when creating the image:

func image(with color: UIColor, size: CGSize) -> UIImage? {
    let rect = CGRect(origin: .zero, size: size)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard let context = CGContext(data: nil,
                                  width: Int(rect.width),
                                  height: Int(rect.height),
                                  bitsPerComponent: 8,
                                  bytesPerRow: Int(rect.width) * 4,
                                  space: colorSpace,
                                  bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue) else {
        return nil
    }
    context.setFillColor(color.cgColor)
    context.fill(rect)
    return context.makeImage().flatMap { UIImage(cgImage: $0) }
}
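As a quick sanity check, pairing this builder with the getPixelColor above should round-trip red correctly, because the builder produces exactly the little-endian ARGB format that routine asserts (this assumes both snippets are in scope):

```swift
// Build a 200×200 red image in a known format, then sample its center.
if let redImage = image(with: .red, size: CGSize(width: 200, height: 200)),
   let center = redImage.getPixelColor(point: CGPoint(x: 100, y: 100)) {
    var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
    center.getRed(&r, green: &g, blue: &b, alpha: &a)
    print("Red: \(r), Blue: \(b)")  // Red: 1.0, Blue: 0.0
}
```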

Perhaps even better, as shown in Technical Q&A 1509, you might have getPixelColor explicitly create its own context of a predetermined format and draw the image into that context. Then the code is no longer contingent upon the format of the original image you happen to apply it to.

extension UIImage {

    func getPixelColor(point: CGPoint) -> UIColor? {
        guard let cgImage = cgImage else { return nil }

        let width = Int(size.width)
        let height = Int(size.height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        guard let context = CGContext(data: nil,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: colorSpace,
                                      bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
            else {
                return nil
        }

        context.draw(cgImage, in: CGRect(origin: .zero, size: size))

        guard let pixelBuffer = context.data else { return nil }

        let pointer = pixelBuffer.bindMemory(to: UInt32.self, capacity: width * height)
        let pixel = pointer[Int(point.y) * width + Int(point.x)]

        let r: CGFloat = CGFloat(red(for: pixel))   / 255
        let g: CGFloat = CGFloat(green(for: pixel)) / 255
        let b: CGFloat = CGFloat(blue(for: pixel))  / 255
        let a: CGFloat = CGFloat(alpha(for: pixel)) / 255

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }

    private func alpha(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 24) & 255)
    }

    private func red(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 16) & 255)
    }

    private func green(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 8) & 255)
    }

    private func blue(for pixelData: UInt32) -> UInt8 {
        return UInt8((pixelData >> 0) & 255)
    }

    private func rgba(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) -> UInt32 {
        return (UInt32(alpha) << 24) | (UInt32(red) << 16) | (UInt32(green) << 8) | (UInt32(blue) << 0)
    }

}

Clearly, if you're going to check a bunch of pixels, you'll want to refactor this (decouple the creation of the standardized pixel buffer from the code that checks the color), but hopefully this illustrates the idea.
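For example, that decoupling might look something like this (a rough sketch; standardizedPixels is a hypothetical helper name, not part of the answer's API):

```swift
extension UIImage {
    /// Draws the image once into a context of known little-endian ARGB
    /// format and returns one UInt32 per pixel, row by row.
    func standardizedPixels() -> (pixels: [UInt32], width: Int, height: Int)? {
        guard let cgImage = cgImage else { return nil }
        let width = Int(size.width), height = Int(size.height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        guard let context = CGContext(data: nil,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: colorSpace,
                                      bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
            else { return nil }
        context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        guard let buffer = context.data else { return nil }
        let pointer = buffer.bindMemory(to: UInt32.self, capacity: width * height)
        return (Array(UnsafeBufferPointer(start: pointer, count: width * height)),
                width, height)
    }
}

// Usage: draw once, then sample as many pixels as you like.
// if let (pixels, width, _) = image.standardizedPixels() {
//     let pixel = pixels[y * width + x]  // ARGB; extract with the shift helpers above
// }
```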


For earlier versions of Swift, see previous revision of this answer.

Rob
  • Thank you very much for this effort to explain the issue ! – mcfly soft Jan 05 '16 at 18:42
  • Thank you for the bytesPerRow = CGImageGetBytesPerRow(cgImage) function! As it turned out, it returns 1248 for the iPhone simulators with scale = 3.0. Therefore my function offset = ((width * y) + x) * 4 didn't work properly; after the replacement to offset = y * bytesPerRow + x * 4, it began to work like a charm! – alc77 Aug 21 '18 at 13:40
  • @alc77 Yep, that `CGImageGetBytesPerRow` is pretty critical. Even if you considered the scale correctly, you can't assume that width times bytes-per-pixel has any bearing on the byte offset row by row. Notably, cropped images can keep the original image's bytes per row, as a speed optimization. `CGImageGetBytesPerRow` is the safe/right way to do that. – Rob Aug 21 '18 at 16:14
  • Good answer! One quick point (question). Since you're enforcing alpha first, shouldn't alpha be at 'let a: CGFloat = CGFloat(data[pixelInfo]) / 255'? Thanks – Jan-Michael Tressler Oct 27 '19 at 16:48
  • @Jan-MichaelTressler - The `CGImageAlphaInfo` dictates the `ARGB` format, but it’s the `CGBitmapInfo` that dictates the order in which these appear. If you consider the 32-bit `ARGB` value of a pixel, if that integer is represented in little endian format (which I explicitly set to avoid any ambiguity), the B is the first byte, then G, R, and A. If that’s confusing, you can use `byteOrder32Big`, rather than the `byteOrder32Little` used above, and then the bytes would appear in the order A, R, G, and then B. – Rob Oct 27 '19 at 18:41
  • @Rob Awe I forgot about the byte ordering! Sorry for the oversight, and glad you cleared it up. – Jan-Michael Tressler Oct 27 '19 at 18:46