I am looking for a "very" simple way to check whether an image bitmap is blurred. I do not need an accurate or complicated algorithm involving FFT, wavelets, etc. Just a very simple idea, even if it is not accurate.

I've thought of computing the average Euclidean distance between pixel (x,y) and pixel (x+1,y), considering their RGB components, and then applying a threshold, but it works very badly. Any other ideas?

user2923045
  • You can perhaps use the average variance of a sliding window to get a rough idea of how much high-frequency content you have. Convert to greyscale first. The answers to [this question](http://stackoverflow.com/q/7765810/2065121) have many more options. – Roger Rowland Jan 14 '14 at 07:16
  • Possible duplicate of [Is there a way to detect if an image is blurry?](https://stackoverflow.com/questions/7765810/is-there-a-way-to-detect-if-an-image-is-blurry) – jtlz2 Jan 09 '19 at 10:34
  • I tried this and it works: https://medium.com/better-programming/blur-detection-via-metal-on-ios-16dd02cb1558 – Hope Aug 21 '20 at 09:05

3 Answers

Don't calculate the average differences between adjacent pixels.

Even when a photograph is perfectly in focus, it can still contain large areas of uniform colour, such as the sky. These will push down the average difference and mask the details you're interested in. What you really want to find is the maximum difference value.

Also, to speed things up, I wouldn't bother checking every pixel in the image. You should get reasonable results by checking along a grid of horizontal and vertical lines spaced, say, 10 pixels apart.
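
For illustration, here's a minimal sketch of this grid-sampling idea in Python with Pillow and NumPy (not the PHP/GD code used for the tests below; the file name and the 10-pixel step are just examples):

    import numpy as np
    from PIL import Image

    def max_edge_sharpness(path, step=10):
        """Maximum adjacent-pixel difference as a percentage of 255,
        sampled along horizontal and vertical lines `step` pixels apart."""
        # Convert to greyscale; int16 avoids uint8 wrap-around in the diffs
        grey = np.asarray(Image.open(path).convert("L"), dtype=np.int16)
        maxdiff = 0
        for row in grey[::step]:          # horizontal scan lines
            maxdiff = max(maxdiff, int(np.abs(np.diff(row)).max()))
        for col in grey[:, ::step].T:     # vertical scan lines
            maxdiff = max(maxdiff, int(np.abs(np.diff(col)).max()))
        return 100.0 * maxdiff / 255.0

    print(max_edge_sharpness("photo.jpg"))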

Here are the results of some tests with PHP's GD graphics functions using an image from Wikimedia Commons (Bokeh_Ipomea.jpg). The Sharpness values are simply the maximum pixel difference values as a percentage of 255 (I only looked in the green channel; you should probably convert to greyscale first). The numbers underneath show how long it took to process the image.

close-up of Ipomea flower, sharpness calculated as 71.0%

same image with slight blurring, sharpness is reduced to 36.1%

same image with severe blurring; sharpness is now 17.6%

If you want them, here are the source images I used:


Update:

There's a problem with this algorithm: it relies on the image having fairly high contrast as well as sharp, focused edges. It can be improved by finding the maximum pixel difference (maxdiff) and the overall range of pixel values in a small area centred on this location (range). The sharpness is then calculated as follows:

sharpness = (maxdiff / (offset + range)) * (1.0 + offset / 255) * 100%

where offset is a parameter that reduces the effects of very small edges so that background noise does not affect the results significantly. (I used a value of 15.)
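
As a sketch, the updated calculation might look like this in Python/NumPy; the 9×9 search window and offset of 15 come from this answer, while computing maxdiff against the right-hand neighbour is an assumption made here for illustration:

    import numpy as np

    def sharpness_at(grey, y, x, offset=15, radius=4):
        """Sharpness at (y, x), the location of the maximum
        adjacent-pixel difference found by the grid scan."""
        # maxdiff: difference between this pixel and its right-hand neighbour
        maxdiff = abs(int(grey[y, x + 1]) - int(grey[y, x]))
        # range: spread of values in a (2*radius+1)x(2*radius+1) window
        window = grey[max(0, y - radius):y + radius + 1,
                      max(0, x - radius):x + radius + 1]
        rng = int(window.max()) - int(window.min())
        return (maxdiff / (offset + rng)) * (1.0 + offset / 255.0) * 100.0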

This produces fairly good results. Anything with a sharpness of less than 40% is probably out of focus. Here are some examples (the locations of the maximum pixel difference and the 9×9 local search areas are also shown for reference):

"Pure Linen" by mystuart @ Flickr (source)

"Blurred Buty" by Ilya @ Flickr (source)

"Blurry Downtown" by Andy Arthur @ Flickr (source)

"blurry volcanic mound" by matt Dombrowski @ Flickr (source)

The results still aren't perfect, though. Subjects that are inherently blurry will always result in a low sharpness value:

"Clouds and sky" by William Warby @ Flickr (source)

Bokeh effects can produce sharp edges from point sources of light, even when they are completely out of focus:

"The Side" by HD41117 @ Flickr (source)

You commented that you want to be able to reject user-submitted photos that are out of focus. Since this technique isn't perfect, I would suggest that you notify the user when an image appears blurry rather than rejecting it altogether.

r3mainer
  • Thanks @squeamish-ossifrage for your reply, but could you please provide additional details? I would like to implement your method, but it is not clear to me how it works exactly. Do you compute the color distance between each pixel and its neighbour and then select the maximum? Is that correct? If so, suppose your image contains two adjacent pixels, one white and one black. This will give the maximum distance of 255 and a sharpness of 100%. Is that correct? – user2923045 Jan 15 '14 at 08:11
  • This surely works and is simple as required. Only concern is time. Choose a sparse grid for speedy results. – sepdek Jan 15 '14 at 15:52
  • It works fine, but unfortunately it is only useful when you have several instances of the same picture and want to determine which of them is the best (in terms of sharpness). My problem is different. In fact, I have a single instance of a picture, and I want to determine its quality (in terms of sharpness) and reject it if it is blurred... – user2923045 Jan 17 '14 at 08:27
  • @user2923045 I've updated my answer; please take a look. – r3mainer Jan 20 '14 at 23:53
  • @squeamishossifrage thanks for this detailed answer. Unfortunately, the pastebin link has now expired. Can you please share it again? It would be really helpful. – arvind.mohan Dec 15 '16 at 10:49
  • @arvind.mohan [No problem.](http://pastebin.com/8hc1sURE) (Note: I might have modified the code slightly, but it should still work. Unable to test at the moment.) – r3mainer Dec 15 '16 at 11:49
  • @squeamishossifrage I was really worried about whether this approach will work for my images, where only black and white colors are used. – arvind.mohan Dec 15 '16 at 12:11
  • @arvind.mohan If you mean greyscale images, then it should work fine. If your images are just black and white (i.e., 1 bit per pixel), then it won't work at all. I suggest you just try it for yourself. – r3mainer Dec 15 '16 at 13:32
  • @squeamishossifrage Link http://pastebin.com/5360KCfQ is not working. Can you please share your code again? – eazery Jul 12 '17 at 14:46
  • Hey, thanks for the effort you took on answering this question. One thing is unclear to me. In your answer you did put the formula for calculating the sharpness as `sharpness = (maxdiff / (offset + range)) * (1.0 + offset / 255) * 100%` and in the pastebin code it says `$sharpness = ($maxdiff / (15 + $maxv - $minv)) * 27000 / 255;` where 15 would be the offset but 27000 is nowhere near the result of `1.0 + 15 / 255`. Can you clarify on how you came to this formula and which is the one that led to the results you posted above? – floriankrueger Aug 15 '17 at 10:58
  • @floriankrueger Sorry, I didn't explain that well. The value was chosen so that the formula outputs 0.0 when `$maxv-$minv==0` and 100.0 when `$maxv-$minv==255`. – r3mainer Aug 21 '17 at 10:09
  • @r3mainer Can you please share the code or app you used to find the sharpness at different regions of the image? – PolarBear10 Feb 03 '21 at 14:48
  • @PolarBear10 It's still here: https://pastebin.com/8hc1sURE – r3mainer Feb 04 '21 at 12:56

I suppose that, philosophically speaking, all natural images are blurry... how blurry, and to what extent, depends on your application. Broadly speaking, the blurriness or sharpness of an image can be measured in various ways. As a first easy attempt, I would check the energy of the image, defined as the normalised sum of the squared pixel values:

    E = (1/N) * Σ I²,    where I is the image and N the number of pixels (defined for grayscale)

First you may apply a Laplacian of Gaussian (LoG) filter to detect the "energetic" areas of the image and then check the energy. The blurry image should show considerably lower energy.
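
As a rough sketch, the same measure could be computed in Python, with SciPy's gaussian_laplace standing in for the MATLAB LoG filter (the sigma value here is illustrative):

    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_laplace

    def log_energy(path, sigma=2.0):
        """Normalised energy of the LoG-filtered greyscale image."""
        grey = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        log_img = gaussian_laplace(grey, sigma=sigma)
        return (log_img ** 2).sum() / log_img.size

    # A blurred image should yield a much lower energy than a sharp one;
    # the decision threshold has to be tuned for the application.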

See an example in MATLAB using a typical grayscale lena image. Four images are shown: the original, a version blurred with a Gaussian filter, the LoG image of the original, and the LoG image of the blurred version.

If you just compute the energy of the two LoG images you get:

    E_original = 1265       E_blurred = 88

which is a huge difference...
Then you just have to select a threshold to decide how much energy is acceptable for your application...

sepdek
  • The original image has a lot of granular noise that was eliminated by your gaussian filter. A real photo would still contain the same amount of noise, even if it was incorrectly focused. – r3mainer Jan 15 '14 at 10:32
  • @squeamishossifrage I strongly disagree. To my understanding, there is no way an image can be blurry (or incorrectly focused) and still be "granular" enough to produce high energy... – sepdek Jan 15 '14 at 14:47
  • Let me try to explain, then. Look at the background of your original image, in the region above the hat. The image is clearly noisy here, even though it is already out of focus. This reflects the [granularity](http://en.wikipedia.org/wiki/Film_grain) of the film on which this photograph was taken. On the other hand, photographs taken on digital cameras are affected by [dark current](http://en.wikipedia.org/wiki/Dark_current_(physics)) and [thermal noise](http://en.wikipedia.org/wiki/Thermal_noise). In both cases, this noise will have a significant effect on the mean energy of the image. – r3mainer Jan 15 '14 at 16:32
  • @squeamishossifrage I can understand this but my argument was that this "granularity" will not contribute significantly to an increase in energy. I have tested the method I proposed with various images, also the images in your example and my approach still holds. – sepdek Jan 15 '14 at 19:44
  • @squeamishossifrage ...continuing... I have also implemented your method in MATLAB which is much easier and tried it using the whole image and not just samples on a grid. The results obtained by both approaches are consistent. But, clearly, my approach will be slower than yours, which uses simpler operations and surely could become even faster if your grid becomes sparser (enough to still represent the image). – sepdek Jan 15 '14 at 19:44
  • I have used sepdek's approach with limited success. The rub comes with "you just have to select a threshold". Selecting this threshold requires the problem to be bound more than it typically is. – don_q Sep 26 '14 at 15:26

Calculate the average L1 distance between adjacent pixels:

N1 = 1/(2*N_pixel) * sum( abs(p(x,y)-p(x-1,y)) + abs(p(x,y)-p(x,y-1)) )

then the average squared L2 distance:

N2 = 1/(2*N_pixel) * sum( (p(x,y)-p(x-1,y))^2 + (p(x,y)-p(x,y-1))^2 )

then the ratio N2 / (N1*N1) is a measure of blurriness. This is for grayscale images; for color, do this for each channel separately.
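
A minimal sketch of this measure in Python/NumPy, assuming a 2-D grayscale array (pixels without a left or upper neighbour are simply skipped, which slightly under-counts relative to the formulas above):

    import numpy as np

    def blurriness(grey):
        """Return N2 / N1^2 for a 2-D array of grayscale pixel values."""
        grey = grey.astype(np.float64)
        dx = np.abs(np.diff(grey, axis=1))   # |p(x,y) - p(x-1,y)|
        dy = np.abs(np.diff(grey, axis=0))   # |p(x,y) - p(x,y-1)|
        n = grey.size
        n1 = (dx.sum() + dy.sum()) / (2 * n)                 # average L1
        n2 = ((dx ** 2).sum() + (dy ** 2).sum()) / (2 * n)   # average squared L2
        return n2 / (n1 * n1)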

pentadecagon
  • Thanks @pentadecagon. Do you suggest converting the image to grayscale, or repeating the above task for each channel and then aggregating the results somehow? – user2923045 Jan 14 '14 at 08:06
  • Hello @pentadecagon, can you please provide the exact code to blur the image? Thanks so much. – Amit Thakur Dec 18 '18 at 11:57