
I am trying to replicate the algorithm given in a research paper for rating an image with a blur score.

Please find below the function I have created. I have added comments explaining what I was trying to do at each step.

import cv2
import numpy as np

def calculate_blur(image_name):
    img_1 = cv2.imread(image_name, cv2.IMREAD_GRAYSCALE)  # reading the image as a single-channel grayscale array
    img_2 = np.fft.fft2(img_1)  # performing the 2-dimensional FFT of the image (F)
    img_3 = np.fft.fftshift(img_2)  # finding Fc by shifting the origin of F to the centre
    img_4 = np.fft.ifftshift(img_3)  # shifting the origin back again
    af = np.abs(img_4)  # calculating the absolute value (magnitude) of the Fourier transform
    threshold = np.max(af) / 1000  # threshold = maximum of the magnitude spectrum divided by 1000
    Th = np.sum(af > threshold)  # total number of frequency components whose magnitude > threshold
    fm = Th / (img_1.shape[0] * img_1.shape[1])  # the image quality measure (fm)
    if fm > 0.05:  # assuming fm > 0.05 means 'Not Blur' (as I inferred from the results in the paper)
        value = 'Not Blur'
    else:
        value = 'Blur'
    return fm, value
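
For reference, this is how I am calling the function (the file names here are just placeholders):

results = {}
for name in ['closeup_face.jpg', 'normal_photo.jpg']:  # placeholder file names
    results[name] = calculate_blur(name)
    print(name, results[name])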

I am seeing that for close-up face pictures taken in good light, the IQM score is greater than 0.05 even when the images are blurry, while for normal pictures (taken at an appropriate distance from the camera) it gives good results.

I am sharing 2 pictures.

The first has a score of (0.2822434750792747, 'Not Blur'), i.e. blurry image detected: False.

The second has a score of (0.035472916666666666, 'Blur'), i.e. blurry image detected: True.

I am trying to understand how exactly this works behind the scenes, i.e. how it decides between the two, and how to improve my function and the detection.

  • add your code and data – seralouk Jun 20 '20 at 16:50
  • @CrisLuengo Apologies for not giving a detailed explanation. I have edited it. Please let me know if any other information is required. – Abir Pattnaik Jun 20 '20 at 17:23
  • Hi @CrisLuengo, I was not acquainted with numpy and its fft-related functions, which is why my progress has been slow. However, from my understanding of the paper, I was able to create it. It would be great if this could be verified. – Abir Pattnaik Jun 26 '20 at 20:52

1 Answer


Your code seems to replicate the work in the paper.

Unfortunately, it is not at all this easy to determine if a picture is blurry or not. One can use this to compare multiple images of the same scene, to see which one is sharper or more blurry. If the illumination changes, or the contents of the scene changes, the comparison can no longer be made.
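
For example, if you do have several shots of the same scene, you could use your own function to rank them (just a sketch, the file names are placeholders):

# rank several photos of the *same* scene from highest to lowest fm
# (placeholder file names)
candidates = ['scene_shot1.jpg', 'scene_shot2.jpg', 'scene_shot3.jpg']
ranked = sorted(candidates, key=lambda name: calculate_blur(name)[0], reverse=True)
print(ranked)  # the first entry has the most frequency content, i.e. is presumably the sharpest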

I am not aware of any fool-proof method to distinguish an out-of-focus image if there is no in-focus image to compare it to. All these methods will fail, telling you that a perfectly in-focus image of a white wall is out of focus.

The best one can do is compare the power (square of the magnitude of the frequency components) at higher frequencies to that at lower frequencies (using, for example, band-pass filters). This will tell you if the image contains any sharp edges or not. Of course, it will tell you the image is out of focus when the scene only contains smooth transitions and no sharp edges.
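
A rough sketch of that idea (not from the paper; it simply splits the centred power spectrum into a low-frequency disc and the remaining high frequencies, with an arbitrary cut-off radius, instead of using proper band-pass filters):

import numpy as np

def high_low_power_ratio(gray, radius_fraction=0.1):
    # power spectrum with the zero frequency shifted to the centre
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(spectrum) ** 2

    # distance of every frequency component from the centre of the spectrum
    h, w = gray.shape
    y, x = np.ogrid[:h, :w]
    dist = np.sqrt((y - h / 2) ** 2 + (x - w / 2) ** 2)

    # everything inside the disc counts as low frequency, the rest as high frequency
    cutoff = radius_fraction * min(h, w) / 2
    low = power[dist <= cutoff].sum()
    high = power[dist > cutoff].sum()
    return high / low

A larger ratio means relatively more energy in the high frequencies, i.e. more sharp detail; but, as said above, a smooth scene with no edges will still give a low ratio even when it is perfectly in focus.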

This other Q&A has some more ideas.


Nit pick:

img_4 = np.fft.ifftshift(img_3) undoes what img_3 = np.fft.fftshift(img_2) does, so that img_4 == img_2. Nonetheless, shifting the origin in the Fourier domain does not affect any of the subsequent processing, so it is irrelevant whether one uses img_2, img_3 or img_4 in the computations.
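
You can check this directly (a quick sanity check on a random 2D array standing in for a grayscale image):

import numpy as np

img_1 = np.random.rand(4, 6)   # stand-in for a grayscale image
img_2 = np.fft.fft2(img_1)
img_3 = np.fft.fftshift(img_2)
img_4 = np.fft.ifftshift(img_3)

print(np.allclose(img_2, img_4))                       # True: ifftshift undoes fftshift
print(np.max(np.abs(img_2)) == np.max(np.abs(img_3)))  # True: shifting only reorders the values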

Cris Luengo