13

I was recently attempting to convert the code from here (http://www.bobbygeorgescu.com/2011/08/finding-average-color-of-uiimage/) into Swift. However, I keep getting a white color, no matter the image. Here's my code:

// Playground - noun: a place where people can play

import UIKit

extension UIImage {
    func averageColor() -> UIColor {
        var colorSpace = CGColorSpaceCreateDeviceRGB()
        var rgba: [CGFloat] = [0,0,0,0]
        var context = CGBitmapContextCreate(&rgba, 1, 1, 8, 4, colorSpace, CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedLast.toRaw())!)
        rgba

        CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), self.CGImage)

        if rgba[3] > 0 {
            var alpha = rgba[3] / 255
            var multiplier = alpha / 255
            return UIColor(red: rgba[0] * multiplier, green: rgba[1] * multiplier, blue: rgba[2] * multiplier, alpha: alpha)
        } else {
            return UIColor(red: rgba[0] / 255, green: rgba[1] / 255, blue: rgba[2] / 255, alpha: rgba[3] / 255)
        }
    }
}

var img = UIImage(data: NSData(contentsOfURL: NSURL(string: "http://upload.wikimedia.org/wikipedia/commons/c/c3/Aurora_as_seen_by_IMAGE.PNG")))

img.averageColor()

Thanks in advance.

Zoyt
  • You need to use var rgba: [UInt8] = [0,0,0,0] (see the sketch after these comments). – Epic Byte May 06 '15 at 12:47
  • Never, ever calculate this manually :) Use the GPU. It's very easy: https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage – Fattie Aug 13 '19 at 16:53
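
To make Epic Byte's comment concrete: with bitsPerComponent = 8, CGBitmapContextCreate writes one byte per channel into the buffer, so the buffer must be [UInt8]; a [CGFloat] buffer gets raw bytes scattered into it and reads back as garbage. A minimal sketch of the corrected setup, in the question's Swift 1 syntax (untested):

var rgba: [UInt8] = [0, 0, 0, 0]   // one byte per channel, matching bitsPerComponent = 8
let context = CGBitmapContextCreate(&rgba, 1, 1, 8, 4, CGColorSpaceCreateDeviceRGB(),
    CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedLast.toRaw())!)
CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), self.CGImage)
// rgba now holds the averaged pixel; divide each component by 255
// to build the UIColor.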

5 Answers

18

Core Image in iOS 9: use the CIAreaAverage filter and pass it the extent of your entire image to be averaged.

Plus, it's much faster since it'll either be running on the GPU or as a highly-optimized CPU CIKernel.
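
In later Swift, that approach reads roughly as follows (a sketch in Swift 5 syntax, along the lines of the snippet Fattie links in the comments on the question; the averageColor name is ours):

import UIKit
import CoreImage

extension UIImage {
    // Average the whole image with CIAreaAverage, then read back the
    // single resulting RGBA pixel.
    var averageColor: UIColor? {
        guard let inputImage = CIImage(image: self) else { return nil }
        let extent = inputImage.extent
        let extentVector = CIVector(x: extent.origin.x, y: extent.origin.y,
                                    z: extent.size.width, w: extent.size.height)
        guard let filter = CIFilter(name: "CIAreaAverage",
                                    parameters: [kCIInputImageKey: inputImage,
                                                 kCIInputExtentKey: extentVector]),
              let outputImage = filter.outputImage else { return nil }

        var bitmap = [UInt8](repeating: 0, count: 4)
        // Ideally create this context once and reuse it (see the comments below).
        let context = CIContext(options: [.workingColorSpace: NSNull()])
        context.render(outputImage, toBitmap: &bitmap, rowBytes: 4,
                       bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                       format: .RGBA8, colorSpace: nil)
        return UIColor(red: CGFloat(bitmap[0]) / 255,
                       green: CGFloat(bitmap[1]) / 255,
                       blue: CGFloat(bitmap[2]) / 255,
                       alpha: CGFloat(bitmap[3]) / 255)
    }
}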

David Sowsy
God of Biscuits
  • Have you actually profiled it @god-of-biscuits? I think you'll be surprised to see that Core Image is slower in this case, though the CIAreaAverage filter gives better results. – postmechanical Oct 30 '15 at 19:49
  • Are you sure that you're setting up CIContext only once and not tearing one down and bringing another up for each image run-through? – God of Biscuits Mar 18 '16 at 03:36
  • I agree here; if you set up CIContext only once, the performance of Core Image is impressive. The problem comes when you want to extract the colour of the image efficiently. I have yet to find an efficient way of doing so. – Danny Bravo Mar 23 '16 at 17:09
  • Look at the ImageIO APIs. They let you operate on very large images with very small memory footprint. Does all the tiling and slicing and dicing and julienning :). Very straightforward APIs (see the sketch after this comment thread). – God of Biscuits Mar 24 '16 at 01:04
  • Do you have some code as example @GodofBiscuits please? – Tulleb Aug 25 '16 at 11:09
  • FYI: This is a very slow operation. You may want to rescale images if you decide to take this approach. – Dan Loewenherz May 24 '17 at 00:16
  • Don't scale the images...that's compute-heavy as well. While you're still in a CIContext, just supply areas -- e.g., numCols & numRows and have a CIFilter do it for you. Core Image will concatenate (and in some cases, just reuse) Filters. And you won't ever be burdened with extra bitmaps if you never want to use the modified filter outside of the view. – God of Biscuits May 24 '17 at 06:10
  • I'm pretty sure that the CI Filters do the rescaling by smoothing and sampling, smoothing and sampling, so doing your own scaling is slower by dint of being redundant. The objectives in scaling images, besides making many ops faster, are to end up with an image with as similar a color profile as possible to the original, and to make it as viewable as possible at the smaller sizes (e.g., can you still tell it's a pic of a rose when scaled down to a super-small thumbnail?) – God of Biscuits Aug 20 '17 at 22:25
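
The advice in this thread comes down to two things: create one CIContext and reuse it across images, and let ImageIO hand you a bounded-size bitmap instead of decoding the full image. A minimal sketch of the ImageIO side (the function name and parameters here are illustrative, not from the thread):

import Foundation
import ImageIO

// Decode a thumbnail no larger than maxPixelSize on its longest side;
// memory stays proportional to the thumbnail, not the full image.
func downsampledImage(at url: URL, maxPixelSize: Int) -> CGImage? {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
        return nil
    }
    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary
    return CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions)
}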
15

Swift 2:

import UIKit

extension UIImage {
    func areaAverage() -> UIColor {
        var bitmap = [UInt8](count: 4, repeatedValue: 0)

        if #available(iOS 9.0, *) {
            // Get average color.
            let context = CIContext()
            let inputImage = CIImage ?? CoreImage.CIImage(CGImage: CGImage!)
            let extent = inputImage.extent
            let inputExtent = CIVector(x: extent.origin.x, y: extent.origin.y, z: extent.size.width, w: extent.size.height)
            let filter = CIFilter(name: "CIAreaAverage", withInputParameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: inputExtent])!
            let outputImage = filter.outputImage!
            let outputExtent = outputImage.extent
            assert(outputExtent.size.width == 1 && outputExtent.size.height == 1)

            // Render to bitmap.
            context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: kCIFormatRGBA8, colorSpace: CGColorSpaceCreateDeviceRGB())
        } else {
            // Create 1x1 context that interpolates pixels when drawing to it.
            let context = CGBitmapContextCreate(&bitmap, 1, 1, 8, 4, CGColorSpaceCreateDeviceRGB(), CGBitmapInfo.ByteOrderDefault.rawValue | CGImageAlphaInfo.PremultipliedLast.rawValue)!
            let inputImage = CGImage ?? CIContext().createCGImage(CIImage!, fromRect: CIImage!.extent)

            // Render to bitmap.
            CGContextDrawImage(context, CGRect(x: 0, y: 0, width: 1, height: 1), inputImage)
        }

        // Compute result.
        let result = UIColor(red: CGFloat(bitmap[0]) / 255.0, green: CGFloat(bitmap[1]) / 255.0, blue: CGFloat(bitmap[2]) / 255.0, alpha: CGFloat(bitmap[3]) / 255.0)
        return result
    }
}

Swift 3:

func areaAverage() -> UIColor {
    var bitmap = [UInt8](repeating: 0, count: 4)

    if #available(iOS 9.0, *) {
        // Get average color.
        let context = CIContext()
        let inputImage: CIImage = ciImage ?? CoreImage.CIImage(cgImage: cgImage!)
        let extent = inputImage.extent
        let inputExtent = CIVector(x: extent.origin.x, y: extent.origin.y, z: extent.size.width, w: extent.size.height)
        let filter = CIFilter(name: "CIAreaAverage", withInputParameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: inputExtent])!
        let outputImage = filter.outputImage!
        let outputExtent = outputImage.extent
        assert(outputExtent.size.width == 1 && outputExtent.size.height == 1)

        // Render to bitmap.
        context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: kCIFormatRGBA8, colorSpace: CGColorSpaceCreateDeviceRGB())
    } else {
        // Create 1x1 context that interpolates pixels when drawing to it.
        let context = CGContext(data: &bitmap, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
        let inputImage = cgImage ?? CIContext().createCGImage(ciImage!, from: ciImage!.extent)

        // Render to bitmap.
        context.draw(inputImage!, in: CGRect(x: 0, y: 0, width: 1, height: 1))
    }

    // Compute result.
    let result = UIColor(red: CGFloat(bitmap[0]) / 255.0, green: CGFloat(bitmap[1]) / 255.0, blue: CGFloat(bitmap[2]) / 255.0, alpha: CGFloat(bitmap[3]) / 255.0)
    return result
}
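
Usage is the same for both versions; a minimal sketch, assuming the function is declared in a UIImage extension as above:

let image = UIImage(named: "photo")     // any UIImage
let average = image?.areaAverage()      // UIColor holding the image's mean RGBA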
Arsonik
Etan
  • For some reason the iOS 9 version of this code gives me back a constant gray color for different images? – Danny Bravo Mar 23 '16 at 17:12
  • The iOS 9 / Swift 3 code is still the biggest CPU consumer in my app. It's so expensive; is there any other way of doing this? – Maryam Fekri Jun 27 '18 at 15:28
9

Here's a solution:

func averageColor() -> UIColor {

    let rgba = UnsafeMutablePointer<CUnsignedChar>.alloc(4)
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let info = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue)
    let context: CGContextRef = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace, info)

    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), self.CGImage)

    // Copy the channel bytes out, then release the manually allocated
    // buffer so it doesn't leak.
    let red = CGFloat(rgba[0])
    let green = CGFloat(rgba[1])
    let blue = CGFloat(rgba[2])
    let alphaChannel = CGFloat(rgba[3])
    rgba.dealloc(4)

    if alphaChannel > 0 {

        let alpha: CGFloat = alphaChannel / 255.0
        let multiplier: CGFloat = alpha / 255.0

        return UIColor(red: red * multiplier, green: green * multiplier, blue: blue * multiplier, alpha: alpha)

    } else {

        return UIColor(red: red / 255.0, green: green / 255.0, blue: blue / 255.0, alpha: alphaChannel / 255.0)
    }
}
Dan Loewenherz
Carmelo Gallo
8

Swift 3:

func areaAverage() -> UIColor {

    var bitmap = [UInt8](repeating: 0, count: 4)

    let context = CIContext(options: nil)
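    // Note: the next two lines round-trip self.cgImage through Core Image
    // and back again; building the input CIImage directly from self.cgImage
    // would also work and skips the extra render.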
    let cgImg = context.createCGImage(CoreImage.CIImage(cgImage: self.cgImage!), from: CoreImage.CIImage(cgImage: self.cgImage!).extent)

    let inputImage = CIImage(cgImage: cgImg!)
    let extent = inputImage.extent
    let inputExtent = CIVector(x: extent.origin.x, y: extent.origin.y, z: extent.size.width, w: extent.size.height)
    let filter = CIFilter(name: "CIAreaAverage", withInputParameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: inputExtent])!
    let outputImage = filter.outputImage!
    let outputExtent = outputImage.extent
    assert(outputExtent.size.width == 1 && outputExtent.size.height == 1)

    // Render to bitmap.
    context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: kCIFormatRGBA8, colorSpace: CGColorSpaceCreateDeviceRGB())

    // Compute result.
    let result = UIColor(red: CGFloat(bitmap[0]) / 255.0, green: CGFloat(bitmap[1]) / 255.0, blue: CGFloat(bitmap[2]) / 255.0, alpha: CGFloat(bitmap[3]) / 255.0)
    return result
}
Alessign
0

Are you setting up your context correctly? If I look at the documentation for the CGBitmapContext Reference:

https://developer.apple.com/library/ios/documentation/graphicsimaging/Reference/CGBitmapContext/index.html#//apple_ref/c/func/CGBitmapContextCreate

it looks like you are only allocating enough memory for a single pixel in the CGFloat array. It also looks like you are telling the compiler that your image is only going to be one pixel by one pixel.

It looks like that size is also being confirmed as one pixel by one pixel when you are setting your CGRect in CGContextDrawImage.

If the Playground is only creating an image one pixel by one pixel, that would explain why you are only seeing a white screen.

Janie Larson
  • I'm basically copying the code from the link I included (http://www.bobbygeorgescu.com/2011/08/finding-average-color-of-uiimage/) and converting it to Swift. What it's trying to do is have Quartz draw the entire image into one pixel, therefore smoothing the image and giving one pixel with the average color. It's clever, in my opinion, if it worked in Swift. I believe my issue either has to do with the way my array is initialized/referenced or, as you mentioned, the amount of allocated memory. Do you mind trying to give me a more exact answer on what's wrong? Thanks! – Zoyt Oct 13 '14 at 01:44
  • If you look at the console in the playground you will see that the array of floats representing the colors never changes from zero. I believe the reason you are not seeing any color is because the array never changes. You are never calling the first part of the statement because the alpha is never greater than zero. Additionally, everything would be multiplied by zero and nothing would change. I am noticing that you have a line in the code after you set the context where you just have "rgba" on its own. I am wondering why it is there. Were you going to do something to change the values? – Janie Larson Oct 13 '14 at 12:45
  • I was initially confused because in the past when I have worked with Quartz and you had white, white represents a value of one or greater. I would have expected zeros across the board to be black instead of white. – Janie Larson Oct 13 '14 at 12:47
  • Ah. Sorry, I was mistaken. The alpha is equal to 0, so it appeared as white. My bad. – Zoyt Oct 13 '14 at 19:16
  • And when I put "rgba" on its own that was so I could see the value there in the playground. – Zoyt Oct 13 '14 at 19:19
  • These days almost every framework having to do with graphics and computation on graphics is backed by Metal Performance Shaders, and likely the reductive CIFilters (average, stdev, etc.) are done on downsampled working images. In other words, the same basic rule that applied to UI and other programming now also applies to graphics primitives: use the highest-level/most abstract first, then go more concrete only if you need to. – God of Biscuits Oct 27 '18 at 06:45