
1. Introduction:

I want to develop a special filter method for UIImages. My idea is to turn every color in a picture black except one specific color, which should keep its appearance.

Images are always nice, so look at this one to see what I'd like to achieve:

img

2. Explanation:

I'd like to apply a filter (algorithm) that finds a specific color in an image and replaces every color that does not match the reference color with, e.g., black.

I've written some simple code that can replace specific colors (color ranges with a threshold) in any image, but to be honest this solution doesn't seem fast or efficient at all:


func colorFilter(image: UIImage, findcolor: String, threshold: Int) -> UIImage {
    let img: CGImage = image.cgImage!
    let context = CGContext(data: nil,
                            width: img.width,
                            height: img.height,
                            bitsPerComponent: 8,
                            bytesPerRow: 4 * img.width,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let binaryData = context.data!.assumingMemoryBound(to: UInt8.self)
    let referenceColor = HEXtoHSL(findcolor) // [h, s, l] integer array
    for i in 0..<img.height {
        for j in 0..<img.width {
            let pixel = 4 * (i * img.width + j)
            // [h, s, l] integer array
            let pixelColor = RGBtoHSL([Int(binaryData[pixel]), Int(binaryData[pixel+1]), Int(binaryData[pixel+2])])
            // value between 0 and 100
            let distance = calculateHSLDistance(pixelColor, referenceColor)
            if distance > threshold {
                let setValue: UInt8 = 255
                binaryData[pixel]   = setValue
                binaryData[pixel+1] = setValue
                binaryData[pixel+2] = setValue
                binaryData[pixel+3] = 255
            }
        }
    }
    let outputImg = context.makeImage()!
    return UIImage(cgImage: outputImg, scale: image.scale, orientation: image.imageOrientation)
}


3. Code Information: The code above works quite well but is absolutely inefficient. Because of all the computation (especially the color conversion, etc.) it takes a LONG (too long) time, so have a look at this screenshot:

img


  1. My question: I'm pretty sure there is a WAY simpler way of filtering a specific color (with a given threshold, #c6456f is similar to #C6476f, ...) instead of looping through EVERY single pixel to compare its color.

    • So what I was thinking about was something like a filter (a CIFilter-based method) as an alternative to the code above.
  2. Some Notes

    • Please do not post replies that suggest using the OpenCV library. I would like to develop this "algorithm" exclusively in Swift.

    • The image the timing screenshot was taken from has a resolution of 500 × 800 px.

  3. That's all

Did you really read this far? Congratulations! However, any help on how to speed up my code would be very much appreciated. (Maybe there's a better way to get the pixel colors than looping through every pixel.) Thanks a million in advance :)

  • Instead of converting every pixel to HSL, why don't you just make the target color in RGB. Also, can you post `calculateHSLDistance`? – mnistic Mar 17 '18 at 16:17
  • Because the RGB spectrum is way different from our human **color range**, while HSL is nearly the same! @mnistic –  Mar 17 '18 at 16:36
  • 1
    It could well be that your RGB to HSV conversion and distance calculation are slow. (I guess `RGBtoHSL` is the function you've shown in another question.) If you must make the comparison in HSV, you could make a coarse check in RGB first and only investigte the HSV distance in detail when the colours are close. How is the performance if you only match the exact RGB colour? – M Oehm Mar 17 '18 at 16:52
  • Nearly 2 times faster. But on larger images it's too slow anyway! @MOehm –  Mar 17 '18 at 17:22
  • 1
    HSL is not at all "nearly the same to our human color range" (whatever that means). Our eyes measure incoming light wavelengths in much the same was as an RGB camera does. Early vision converts this to tristimulus values (CIE Yxy tries to approximate this, CIE Lab does it better). HSL is an awkward color space invented to make it easier to input colors in a UI. It's not good for color analysis. Compare your colors in RGB, you'll get better results, and much faster. – Cris Luengo Mar 17 '18 at 22:01
  • @CrisLuengo +1 for that ... people have been confusing HSV/HSL with wavelength colors for a few decades now. The sad thing is the young do not even know they are incorrect and use them for computations, making a mess of physical data... see [RGB values of visible spectrum](https://stackoverflow.com/a/22681410/2521214). You cannot even find a real/correct spectral image on the internet anymore, which led me to the code and answer I linked, as I needed it for a simulation. The same goes for [black body color / B-V](https://stackoverflow.com/a/22630970/2521214) ... even the equations for those are wrong ... – Spektre Mar 18 '18 at 10:13
  • 1
    @Spektre: thanks for those links, interesting reads! – Cris Luengo Mar 18 '18 at 13:52

2 Answers


First thing to do - profile (measure the time consumption of different parts of your function). It often shows that time is spent in some unexpected place, and it always suggests where to direct your optimization effort. It doesn't mean you have to focus on the most time-consuming part, but it will show you where the time goes. Unfortunately I'm not familiar with Swift, so I cannot recommend a specific tool.

Regarding iterating through all pixels - it depends on the image structure and your assumptions about the input data. I see two cases where you can avoid this:

  1. When there is some optimized data structure built over your image (e.g. statistics about its areas). That usually makes sense when you process the same image with the same (or a similar) algorithm with different parameters. If you process every image only once, it will likely not help you.

  2. When you know that the green pixels always exist in groups, so there cannot be an isolated single pixel. In that case you can skip one or more pixels, and when you find a green pixel, analyze its neighbourhood.

maxim1000
  • Yeah, of course I know that I don't need to iterate through EVERY single pixel, but when I skip some pixels I really have no idea how to "analyze its neighbourhood" afterwards. –  Mar 17 '18 at 16:38
  • @tempi, do you already know which part of the function consumes most of the time? That's where it's best to start... – maxim1000 Mar 17 '18 at 17:59

I do not code on your platform but...

Well, I assume your masked areas (with the specific color) are continuous and large enough ... that means you've got groups of pixels together in big enough areas (not just stuff a few pixels thick). With this assumption you can create a density map for your color. What I mean is: if the minimum detail size of your specific-color stuff is 10 pixels, then you can inspect every 8th pixel along each axis, speeding up the initial scan ~64 times. Then use the full scan only for regions containing your color. Here is what you have to do:

  1. determine properties

    You need to set the step for each axis (how many pixels you can skip without missing your colored zone). Let's call these dx, dy.

  2. create density map

    Simply create a 2D array that holds whether the center pixel of each region matches your specific color. So if your image has resolution xs, ys then your map will be:

    int mx=xs/dx;
    int my=ys/dy;
    int map[mx][my],x,y,xx,yy;
    
    for (yy=0,y=dy>>1;y<ys;y+=dy,yy++)
     for (xx=0,x=dx>>1;x<xs;x+=dx,xx++)
      map[xx][yy]=compare(pixel(x,y) , specific_color)<threshold;
    
  3. enlarge map set areas

    Now you should enlarge the set areas in map[][] to the neighbouring cells, because step #2 could miss the edge of your color region.

  4. process all set regions

    for (yy=0;yy<my;yy++)
     for (xx=0;xx<mx;xx++)
      if (map[xx][yy])
       for (y=yy*dy;y<(yy+1)*dy;y++)
        for (x=xx*dx;x<(xx+1)*dx;x++)
         if (compare(pixel(x,y) , specific_color)>=threshold) pixel(x,y)=0x00000000;
    

If you want to speed this up even more, you need to detect the set map[][] cells that are on an edge (have at least one zero neighbour). You can distinguish the cells like this:

0 - no specific color is present
1 - inside of color area
2 - edge of color area

That can be done with a simple pass in O(mx*my). After that you need to check the color only in the edge regions, so:

for (yy=0;yy<my;yy++)
 for (xx=0;xx<mx;xx++)
  if (map[xx][yy]==2)
   {
    for (y=yy*dy;y<(yy+1)*dy;y++)
     for (x=xx*dx;x<(xx+1)*dx;x++)
     if (compare(pixel(x,y) , specific_color)>=threshold) pixel(x,y)=0x00000000;
   } else if (map[xx][yy]==0)
   {
    for (y=yy*dy;y<(yy+1)*dy;y++)
     for (x=xx*dx;x<(xx+1)*dx;x++)
     pixel(x,y)=0x00000000;
   }

This should be even faster. In case your image resolution xs,ys is not a multiple of the region size dx,dy, you should handle the outer edge of the image either by zero padding or by special loops for the missing part of the image...

BTW, how long does it take to read and set your whole image?

for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
  pixel(x,y)=pixel(x,y)^0x00FFFFFF;

If this alone is slow, then it means your pixel access is too slow and you should use a different API for it. That is a very common mistake on the Windows GDI platform, as people usually use Pixels[][], which is slower than a crawling snail. There are other ways, like bit-locking/blitting, ScanLine, etc., so in such a case you need to look for something fast on your platform. If you cannot speed up even this, then you cannot do anything else... BTW, what HW does this run on?

Spektre