
I have an application which requires that a solid black outline be drawn around a partly-transparent UIImage. Not around the frame of the image, but rather around all the opaque parts of the image itself. I.e., think of a transparent PNG with an opaque white "X" on it -- I need to outline the "X" in black.

To make matters trickier, AFTER the outline is drawn, the opacity of the original image will be adjusted, but the outline must remain opaque -- so the outline I generate has to include only the outline, and not the original image.

My current technique is this:

  • Create a new UIView which has the dimensions of the original image.
  • Duplicate the UIImage 4 times and add the duplicates as subviews of the UIView, with each UIImage offset diagonally from the original location by a couple pixels.
  • Turn that UIView into an image (via the typical UIGraphicsGetImageFromCurrentImageContext method).
  • Using CGImageMaskCreate and CGImageCreateWithMask, subtract the original image from this new image, so only the outline remains.

It works. Even with only the 4 offset images, the result looks quite good. However, it's horribly inefficient, and causes a good solid 4-second delay on an iPhone 4.
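Roughly, steps 1 through 3 look like this (a simplified sketch with illustrative names and offsets; the final mask-subtraction step is omitted):

```objc
#import <QuartzCore/QuartzCore.h> // for -renderInContext: (ARC assumed)

// Build a view containing four offset copies of the image.
UIView *canvas = [[UIView alloc] initWithFrame:CGRectMake(0, 0, image.size.width, image.size.height)];
CGFloat d = 2.0; // diagonal offset in points
CGPoint offsets[4] = { {d, d}, {d, -d}, {-d, d}, {-d, -d} };
for (int i = 0; i < 4; i++) {
    UIImageView *copy = [[UIImageView alloc] initWithImage:image];
    copy.frame = CGRectOffset(copy.frame, offsets[i].x, offsets[i].y);
    [canvas addSubview:copy];
}

// Flatten the view hierarchy into a single image.
UIGraphicsBeginImageContextWithOptions(canvas.bounds.size, NO, 0.0);
[canvas.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// ...then CGImageMaskCreate / CGImageCreateWithMask to subtract the original.
```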

So what I need is a nice, speedy, efficient way to achieve the same thing, which is fully supported by iOS 4.0.

Any great ideas? :)

DanM
  • You should be able to manipulate the image data directly, vs using UI/CG stuff. Then the steps should be fairly fast. It's not too hard to figure out the image format. The main trick is that when you add the images together you need to divide each pixel by 4 (shift right 2) to assure that no overflows occur. – Hot Licks Feb 04 '12 at 13:58
  • You might also want to study up on "edge detection" for other schemes to achieve your goal. – Hot Licks Feb 04 '12 at 13:59
  • [This thread](http://stackoverflow.com/questions/6052188/high-quality-scaling-of-uiimage) may give you some ideas on how to "crack" the UIImage. – Hot Licks Feb 04 '12 at 14:02

4 Answers


I would like to point out that whilst a few people have suggested edge detection, this is not an appropriate solution. Edge detection is for finding edges within image data where there is no obvious exact edge representation in the data.

For you, the edges are much better defined: you are looking for a well-defined outline. An edge in your case is simply any fully transparent pixel that sits next to a pixel which is not fully transparent, simple as that! Iterate through every pixel in the image and set it to black if it fulfils that condition.
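As a rough sketch of that pass (the names are mine; it assumes 32-bit RGBA pixel data and produces a 1-pixel outline, so widen the neighbourhood check if you want a thicker line):

```objc
#import <UIKit/UIKit.h>

// Returns a new image containing only a black outline around the opaque
// parts of the input. Sketch only: assumes RGBA, no error handling.
static UIImage *OutlineImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t w = CGImageGetWidth(cgImage), h = CGImageGetHeight(cgImage);

    // Redraw the source into a known RGBA layout so the alpha byte is at offset 3.
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    uint8_t *src = calloc(w * h * 4, 1);
    CGContextRef srcCtx = CGBitmapContextCreate(src, w, h, 8, w * 4, rgb,
                                                kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(srcCtx, CGRectMake(0, 0, w, h), cgImage);

    // Output starts fully transparent; we only ever write opaque black into it.
    uint8_t *dst = calloc(w * h * 4, 1);

    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            size_t i = (y * w + x) * 4;
            if (src[i + 3] != 0) continue;          // only transparent pixels can become outline
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    NSInteger nx = (NSInteger)x + dx, ny = (NSInteger)y + dy;
                    if (nx < 0 || ny < 0 || nx >= (NSInteger)w || ny >= (NSInteger)h) continue;
                    if (src[(ny * w + nx) * 4 + 3] != 0) {
                        dst[i + 3] = 255;           // opaque black (R, G, B stay 0)
                    }
                }
            }
        }
    }

    CGContextRef dstCtx = CGBitmapContextCreate(dst, w, h, 8, w * 4, rgb,
                                                kCGImageAlphaPremultipliedLast);
    CGImageRef outlineCG = CGBitmapContextCreateImage(dstCtx);
    UIImage *result = [UIImage imageWithCGImage:outlineCG
                                          scale:image.scale
                                    orientation:UIImageOrientationUp];

    CGImageRelease(outlineCG);
    CGContextRelease(srcCtx);
    CGContextRelease(dstCtx);
    CGColorSpaceRelease(rgb);
    free(src);
    free(dst);
    return result;
}
```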

Alternatively, for an anti-aliased result, get a boolean representation of the image, and pass over it a small anti-aliased circle kernel. I know you said custom filters are not supported, but if you have direct access to image data this wouldn't be too difficult to implement by hand...

Cheers, hope this helps.

Elias Vasylenko
  • This seems like the most straightforward approach. Will comment back if it works out. Thanks! – DanM Feb 11 '12 at 02:29
  • I made a small mistake with this answer, rather than a small blurred circle kernel it would be more appropriate to use a small anti-aliased circle kernel. An important distinction, sorry! It was after 5:35 on a friday when I posted that, and I leave work at 5:30 ;). – Elias Vasylenko Feb 14 '12 at 14:36
  • No problem -- my brain automatically swapped "blurred" for "antialiased" anyway. :) – DanM Feb 15 '12 at 15:46

For the sake of contributing new ideas:

A variant on your current implementation would use CALayer's support for shadows, which it calculates from the actual pixel contents of the layer rather than merely its bounding rectangle, and for which it uses the GPU. You can try amping up the shadowOpacity to some massive value to try to eliminate the feathering; failing that, you could render to a suitable CGContext, pull out just the alpha channel and process it manually, applying a threshold test that pushes each alpha value to either fully opaque or fully transparent.
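Getting the shadowed version to work from might look something like this (the values are illustrative, and the alpha-thresholding pass described above isn't included):

```objc
#import <QuartzCore/QuartzCore.h>

CGFloat pad = 4.0; // room around the image so the shadow isn't clipped
CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(pad, pad, image.size.width, image.size.height);
layer.contents = (id)image.CGImage; // (__bridge id) under ARC
layer.shadowColor = [UIColor blackColor].CGColor;
layer.shadowOffset = CGSizeZero;
layer.shadowRadius = 2.0;  // roughly the outline thickness you're after
layer.shadowOpacity = 1.0;

CGSize canvasSize = CGSizeMake(image.size.width + 2 * pad, image.size.height + 2 * pad);
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, 0.0);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *shadowed = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```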

You can achieve that final processing step on the GPU even under ES 1 in a variety of ways. You'd use the alpha test to apply the actual threshold; you could then, say, prime the depth buffer to 1.0, disable colour output and the depth test, draw the version with the shadow at a depth of 0.5 and the version without the shadow at a depth of 1.0, then re-enable colour output and depth testing and draw a solid black full-screen quad at a depth of 0.75. In effect you're using the depth buffer to emulate a stencil buffer (since the GPUs Apple used before its ES 2 capable devices didn't support one).

That, of course, assumes that CALayer shadows appear outside of the compositor, which I haven't checked.

Alternatively, if you're willing to limit your support to ES 2 devices (everything 3GS and later) then you could upload your image as a texture and do the entire process on the GPU. But that would technically leave some iOS 4 capable devices unsupported, so I assume it isn't an option.

Tommy
  • Actually, in this case, as long as I can test the user's device (i.e., make sure it's at least a 3GS, etc), and use an alternate method for older devices, that would be OK. Is there a good source for learning how I'd go about using that technique? – DanM Feb 07 '12 at 14:54

You just need to implement an edge detection algorithm, but instead of using brightness or color to determine where the edges are, use opacity. There are a number of different ways to go about that. For example, you can look at each pixel and its neighbors to identify areas where the opacity crosses whatever threshold you've set. Whenever you need to look at every pixel of an image in MacOS X or iOS, think Core Image. There's a helpful series of blog posts starting with this one that looks at implementing a custom Core Image filter -- I'd start there to build an edge detection filter.

Caleb
  • Good advice generally but I don't think custom filters are supported on the iOS implementation of Core Image at present. So the author would probably need to transform to a CG bitmap context for the purposes of getting CPU access to contents and run the filter there (with assistance from GCD, hopefully). – Tommy Feb 06 '12 at 13:33
  • This does seem like a viable way to go -- and I assume it's possible to adjust the algorithm to draw a thicker edge than 1px. However, before I spend a lot of time implementing this (doesn't seem trivial) how efficient do you expect it would be, compared to the method I'm using now? – DanM Feb 06 '12 at 13:49
  • @Tommy Doh! [It looks like you're right about custom filters](http://www.raywenderlich.com/5689/beginning-core-image-in-ios-5) not being supported on iOS. Too bad. – Caleb Feb 06 '12 at 14:24
  • @DanM A nifty thing about Core Image filters is that they can run on the GPU, which is designed for efficiently processing pixels. But Tommy is right: ["In particular, the key difference is that Core Image on iOS does not include the ability to create custom image filters."](https://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/CoreImaging/ci_intro/ci_intro.html#//apple_ref/doc/uid/TP30001185-CH201-TPXREF101) As he indicates, you can use a similar technique, but you'll have to run the filter yourself. – Caleb Feb 06 '12 at 14:30

Instead of using a UIView, I suggest just pushing a bitmap image context, like the following:

UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);
// Draw your image 4 times and mask it however you like; you can just copy and
// paste your current drawing code here.
....
UIImage *outlinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

This will be much faster than your UIView-based approach.
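For illustration, one way the elided drawing step might be filled in is with offset draws plus blend modes; the offsets and blend-mode choices here are my own, not necessarily what your existing code does:

```objc
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Build up the silhouette plus its fringe with four offset copies.
CGFloat d = 2.0;
CGPoint offsets[4] = { {d, d}, {d, -d}, {-d, d}, {-d, -d} };
for (int i = 0; i < 4; i++) {
    [image drawAtPoint:offsets[i]];
}

// Recolour everything drawn so far to solid black.
CGContextSetBlendMode(ctx, kCGBlendModeSourceIn);
CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
CGContextFillRect(ctx, CGRectMake(0, 0, image.size.width, image.size.height));

// Punch the original image back out so only the outline remains.
[image drawAtPoint:CGPointZero blendMode:kCGBlendModeDestinationOut alpha:1.0];

UIImage *outlinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```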

Allen