
I would like to detect an object that is in focus in the camera, cut out the unfocused background, and replace it with an image. Is this possible?

– Deivuh

1 Answer


It is marginally possible, but it requires some heavy computation, and there is no ready-made library for iOS that I know of. So if you ask, "Is it possible to do this easily?", I'd answer: I'm afraid not.

Tools that appear to do this usually employ a shortcut, such as:

  • leverage face detection (i.e. they work as long as the "object" is a human face)
  • leverage area analysis (i.e. they check whatever is in the middle of the picture, be it focused or not)

Face detection is available through Core Image (the CIDetector class). Anyway, check out Face Recognition on the iPhone.

In the general, messy case, you can usually recognize a focused area (assuming one exists) by analyzing either contrast or spatial frequency in the image, divided into small tiles (usually 16x16 or 8x8 pixels). Tiles with low contrast, or where high frequencies are missing, are background, while the presence of high frequencies (sharpness) indicates a focused area.

This will not give you the object's boundaries and will also produce several false positives (and possibly false negatives); but at the end of this phase you will have a sampling of the original image with a status of "focused", "unfocused" or "uncertain" for each tile of, say, 16 pixels per side.
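
For illustration, a rough sketch of this phase in Swift, assuming the frame has already been reduced to an 8-bit grayscale buffer; the tile size and the two variance thresholds are made-up values you would have to tune:

```swift
// Rough sketch: classify 16x16 tiles by luminance variance, a cheap stand-in
// for "contrast / high-frequency content". Thresholds are assumptions.

enum FocusState { case focused, uncertain, unfocused }

func focusMap(gray: [UInt8], width: Int, height: Int,
              tile: Int = 16,
              lowVariance: Double = 50, highVariance: Double = 300) -> [[FocusState]] {
    let cols = width / tile
    let rows = height / tile
    var map = [[FocusState]](repeating: [FocusState](repeating: .uncertain, count: cols),
                             count: rows)
    for r in 0..<rows {
        for c in 0..<cols {
            // Mean and variance of the luminance inside this tile; low variance
            // means low contrast, i.e. missing high frequencies.
            var sum = 0.0
            var sumSq = 0.0
            for y in (r * tile)..<((r + 1) * tile) {
                for x in (c * tile)..<((c + 1) * tile) {
                    let v = Double(gray[y * width + x])
                    sum += v
                    sumSq += v * v
                }
            }
            let n = Double(tile * tile)
            let variance = sumSq / n - (sum / n) * (sum / n)
            if variance < lowVariance {
                map[r][c] = .unfocused        // flat: almost certainly background
            } else if variance > highVariance {
                map[r][c] = .focused          // sharp detail present
            } else {
                map[r][c] = .uncertain
            }
        }
    }
    return map
}
```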

To this map you will have to apply heuristics, for example (a sketch in code follows the list):

  • image borders are usually part of the background, so border tiles should come out "unfocused"
  • a small, isolated "focused" area is probably a false detection and can be reclassified as background
  • a small "unfocused" area inside a larger focused region is probably part of the object
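
A minimal sketch of these heuristics, reusing the FocusState map from the snippet above; the minimum blob size is again an arbitrary tunable:

```swift
// Sketch of the refinement pass over the tile map. Uncertain tiles are
// simply left alone here; a real implementation would resolve them too.

func refine(map: inout [[FocusState]], minBlobTiles: Int = 4) {
    let rows = map.count
    let cols = map[0].count

    // Heuristic 1: the image border is almost always background.
    for c in 0..<cols { map[0][c] = .unfocused; map[rows - 1][c] = .unfocused }
    for r in 0..<rows { map[r][0] = .unfocused; map[r][cols - 1] = .unfocused }

    // Heuristics 2 and 3: small connected blobs of either state are probably
    // misclassified, so flip them. Components are found by a simple flood fill.
    var seen = [[Bool]](repeating: [Bool](repeating: false, count: cols), count: rows)
    for r in 0..<rows {
        for c in 0..<cols where !seen[r][c] && map[r][c] != .uncertain {
            let state = map[r][c]
            var component: [(Int, Int)] = []
            var stack = [(r, c)]
            seen[r][c] = true
            while let (y, x) = stack.popLast() {
                component.append((y, x))
                for (ny, nx) in [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)] {
                    if ny >= 0, ny < rows, nx >= 0, nx < cols,
                       !seen[ny][nx], map[ny][nx] == state {
                        seen[ny][nx] = true
                        stack.append((ny, nx))
                    }
                }
            }
            if component.count < minBlobTiles {
                // A tiny blob is probably mislabeled: invert its state.
                let flipped: FocusState = (state == .focused) ? .unfocused : .focused
                for (y, x) in component { map[y][x] = flipped }
            }
        }
    }
}
```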

At the end of this refinement phase you should have a few large, contiguous "blobs" of focused tiles. You then examine their borders, looking for sharp transitions in color or luminosity (usually scanning radially outward from the blob's center). This is another heuristic: we are assuming that you won't shoot a red apple in focus against a red background; or, if you do, the apple will have highlights showing yellow, white or very light red, which let you trace a "contour".
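
A rough sketch of that radial scan, back at full pixel resolution; the ray count and the luminance-jump threshold are assumptions, and a real implementation would smooth the image first:

```swift
import Foundation  // for cos/sin

// Walk outward from the blob's center along a fan of rays and stop at the
// first sharp luminance jump, which we take as the object's edge on that ray.

func traceContour(gray: [UInt8], width: Int, height: Int,
                  centerX: Int, centerY: Int, maxRadius: Int,
                  rays: Int = 64, jumpThreshold: Int = 40) -> [(x: Int, y: Int)] {
    var contour: [(x: Int, y: Int)] = []
    for i in 0..<rays {
        let angle = 2.0 * Double.pi * Double(i) / Double(rays)
        let (dx, dy) = (cos(angle), sin(angle))
        var previous = Int(gray[centerY * width + centerX])
        for r in 1...maxRadius {
            let x = centerX + Int(Double(r) * dx)
            let y = centerY + Int(Double(r) * dy)
            guard x >= 0, x < width, y >= 0, y < height else { break }
            let v = Int(gray[y * width + x])
            if abs(v - previous) > jumpThreshold {
                contour.append((x: x, y: y))   // likely object edge on this ray
                break
            }
            previous = v
        }
    }
    return contour
}
```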

Once each blob has its own contour, you can use it for a cutout.

You would have to do all this using tools such as Core Image

http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/CoreImaging/ci_intro/ci_intro.html

or, better suited to the task, vImage

http://developer.apple.com/library/ios/#documentation/Performance/Conceptual/vImage/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001001
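
For the final composite itself, once you have rendered the contour into a mask image (white where focused, black elsewhere), Core Image's built-in CIBlendWithMask filter will do. A minimal sketch:

```swift
import CoreImage

// Blend the original frame over the replacement background through the mask.

func composite(foreground: CIImage, newBackground: CIImage, mask: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIBlendWithMask") else { return nil }
    filter.setValue(foreground, forKey: kCIInputImageKey)
    filter.setValue(newBackground, forKey: kCIInputBackgroundImageKey)
    filter.setValue(mask, forKey: kCIInputMaskImageKey)
    return filter.outputImage
}
```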

– LSerni

It might be tricky to get Core Image to do this for you on iOS, with the lack of custom kernels there, but my hobby project might help: https://github.com/BradLarson/GPUImage . You might also be able to use image gradients to help identify sharp regions, because a lack of focus should suppress the strength of edges, and a Sobel kernel might be able to bring that out. – Brad Larson Sep 12 '12 at 16:02
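
For what it's worth, a minimal sketch of the gradient approach suggested in that comment, assuming the same grayscale buffer as the earlier snippets; its per-tile average could replace the variance measure above:

```swift
// Standard 3x3 Sobel gradient magnitude: out-of-focus regions suppress
// edge strength, so low magnitude is another usable "unfocused" signal.

func sobelMagnitude(gray: [UInt8], width: Int, height: Int) -> [Double] {
    var mag = [Double](repeating: 0, count: width * height)
    for y in 1..<(height - 1) {
        for x in 1..<(width - 1) {
            func p(_ dx: Int, _ dy: Int) -> Double {
                return Double(gray[(y + dy) * width + (x + dx)])
            }
            // Horizontal and vertical Sobel responses.
            let gx = (p(1, -1) + 2 * p(1, 0) + p(1, 1)) -
                     (p(-1, -1) + 2 * p(-1, 0) + p(-1, 1))
            let gy = (p(-1, 1) + 2 * p(0, 1) + p(1, 1)) -
                     (p(-1, -1) + 2 * p(0, -1) + p(1, -1))
            mag[y * width + x] = (gx * gx + gy * gy).squareRoot()
        }
    }
    return mag
}
```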