I tried different solutions (e.g. this one), but the color I get back looks slightly different from the one in the real image. I suspect it's because the image is only RGB, not RGBA. Could that be the issue?
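To rule that out, I check the alpha info of the backing CGImage with something like this sketch (the helper name is mine):

import UIKit

// Sketch: does this UIImage's backing CGImage actually carry an alpha channel?
func hasAlphaChannel(_ image: UIImage) -> Bool {
    guard let alphaInfo = image.cgImage?.alphaInfo else { return false }
    switch alphaInfo {
    case .none, .noneSkipFirst, .noneSkipLast:
        return false        // RGB only, alpha byte absent or ignored
    default:
        return true         // some form of alpha is present
    }
}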
Related issue: if the UIImageView has contentMode = .scaleAspectFill, do I have to recalculate the image coordinates, or can I just use imageView.image?
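If a recalculation is needed, I imagine something like this sketch for mapping a point in the image view back to image coordinates (the function name is mine, and the math is my assumption about how .scaleAspectFill scales by the larger ratio and centers the result):

import UIKit

// Sketch: map a touch point in an aspect-filled UIImageView back to
// coordinates in the underlying image. Returns nil if there is no image.
func imagePoint(for viewPoint: CGPoint, in imageView: UIImageView) -> CGPoint? {
    guard let image = imageView.image else { return nil }
    // .scaleAspectFill uses the larger of the two ratios, then centers (and crops).
    let scale = max(imageView.bounds.width / image.size.width,
                    imageView.bounds.height / image.size.height)
    let offsetX = (imageView.bounds.width - image.size.width * scale) / 2
    let offsetY = (imageView.bounds.height - image.size.height * scale) / 2
    return CGPoint(x: (viewPoint.x - offsetX) / scale,
                   y: (viewPoint.y - offsetY) / scale)
}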
EDIT:
I tried with this extension:
import UIKit

extension CALayer {

    /// Renders the single pixel at `point` into a 1×1 RGBA bitmap and returns its color.
    func getPixelColor(point: CGPoint) -> CGColor {
        // One RGBA pixel: red, green, blue, alpha, 8 bits each.
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        let context = CGContext(data: &pixel,
                                width: 1,
                                height: 1,
                                bitsPerComponent: 8,
                                bytesPerRow: 4,
                                space: colorSpace,
                                bitmapInfo: bitmapInfo.rawValue)!
        // Shift the layer so the requested point lands on the context's single pixel.
        context.translateBy(x: -point.x, y: -point.y)
        self.render(in: context)
        let red   = CGFloat(pixel[0]) / 255.0
        let green = CGFloat(pixel[1]) / 255.0
        let blue  = CGFloat(pixel[2]) / 255.0
        let alpha = CGFloat(pixel[3]) / 255.0
        return UIColor(red: red, green: green, blue: blue, alpha: alpha).cgColor
    }
}
but for some images the coordinate system seems to be flipped vertically, and for others I get completely wrong values... What am I missing here?
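One alternative I am considering is reading the bytes of the CGImage directly instead of re-rendering the layer, which would sidestep any layer-rendering and flipping issues entirely. A sketch, assuming an 8-bit-per-channel RGBA-style layout (the actual byte order should be checked against cgImage.bitmapInfo, and the point here is in CGImage pixel coordinates, not view points):

import UIKit

// Sketch: sample one pixel straight from the image's backing bitmap.
func pixelColor(of image: UIImage, at point: CGPoint) -> UIColor? {
    guard let cgImage = image.cgImage,
          let data = cgImage.dataProvider?.data,
          let bytes = CFDataGetBytePtr(data) else { return nil }
    let x = Int(point.x), y = Int(point.y)
    guard x >= 0, x < cgImage.width, y >= 0, y < cgImage.height else { return nil }
    // Assumes 4 components of 8 bits each (e.g. RGBA); verify with cgImage.bitmapInfo.
    let bytesPerPixel = cgImage.bitsPerPixel / 8
    let offset = y * cgImage.bytesPerRow + x * bytesPerPixel
    let r = CGFloat(bytes[offset])     / 255.0
    let g = CGFloat(bytes[offset + 1]) / 255.0
    let b = CGFloat(bytes[offset + 2]) / 255.0
    let a = bytesPerPixel >= 4 ? CGFloat(bytes[offset + 3]) / 255.0 : 1.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}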
EDIT 2:
I am testing with these images:
https://dl.dropboxusercontent.com/u/119600/gradient.png https://dl.dropboxusercontent.com/u/119600/gradient@2x.png
but I get wrong values. They are displayed in a UIImageView, and I convert the tap coordinates like this:
private func convertScreenPointToImage(point: CGPoint) -> CGPoint {
    // Scale from screen points to image points (assumes the image view fills the screen).
    let widthMultiplier = gradientImage.size.width / UIScreen.main.bounds.width
    let heightMultiplier = gradientImage.size.height / UIScreen.main.bounds.height
    return CGPoint(x: point.x * widthMultiplier, y: point.y * heightMultiplier)
}
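I realize this assumes the image view covers the whole screen; if it doesn't, I would presumably need a variant that divides by the image view's own bounds instead (sketch, names mine):

import UIKit

// Sketch: same scaling idea, but relative to the image view rather than the screen.
private func convertViewPointToImage(point: CGPoint, in imageView: UIImageView) -> CGPoint {
    guard let image = imageView.image else { return point }
    let widthMultiplier = image.size.width / imageView.bounds.width
    let heightMultiplier = image.size.height / imageView.bounds.height
    return CGPoint(x: point.x * widthMultiplier, y: point.y * heightMultiplier)
}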
This one gives me === Optional((51, 76, 184, 255)) when running on the iPhone 7 simulator, which is not correct...