I want to detect blurred images using the Laplacian operator. This is the code I am using:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

bool checkforblur(const Mat& img)
{
    bool is_blur = false;
    Mat gray, laplacianImage;
    Scalar lapMean, lapStddev, grayMean, grayStddev;

    // Convert to grayscale and apply the Laplacian operator
    cvtColor(img, gray, COLOR_BGR2GRAY);
    Laplacian(gray, laplacianImage, CV_64F);

    // Variance of the Laplacian response and of the grayscale image itself
    meanStdDev(laplacianImage, lapMean, lapStddev);
    meanStdDev(gray, grayMean, grayStddev);
    double laplacianVariance = lapStddev.val[0] * lapStddev.val[0];
    double grayVariance = grayStddev.val[0] * grayStddev.val[0];

    // Ratio of the two variances, compared against a fixed threshold
    double ratio = laplacianVariance / grayVariance;
    double blurThreshold = 90;
    cout << "Ratio is: " << ratio << "\n" << "Threshold used: " << blurThreshold << endl;

    if (ratio <= blurThreshold)
        is_blur = true;
    return is_blur;
}
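For reference, this is roughly how I call it (a minimal driver, assuming checkforblur above is in the same file; "test.jpg" is only a placeholder path):

int main()
{
    // "test.jpg" stands in for whichever image is being checked
    Mat img = imread("test.jpg");
    if (img.empty())
    {
        cerr << "Could not read the image" << endl;
        return 1;
    }
    cout << (checkforblur(img) ? "Blurred" : "Not blurred") << endl;
    return 0;
}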
This function takes an image as input and returns true or false depending on whether the image is blurred. As suggested, I edited the code to check the ratio of the Laplacian variance to the grayscale variance instead of the variance of the Laplacian image alone.
But the threshold still varies for images taken with different cameras.
Is this approach scene-dependent?
How should I change it?
Example:
For the first image, the ratio is 62.9, so it is detected as blurred.
For the second image, the ratio is 235, so it is wrongly detected as not blurred.
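To make the question concrete, this is the kind of change I have been considering, though I am not sure it is the right direction. It is only a sketch, assuming that resizing every image to a common width before measuring would reduce the dependence on camera resolution; the width of 500 is an arbitrary value:

bool checkforblurNormalized(const Mat& img)
{
    // Assumption: scaling all inputs to the same width makes the Laplacian
    // response comparable across cameras with different resolutions.
    const int targetWidth = 500; // arbitrary, for illustration only
    double scale = static_cast<double>(targetWidth) / img.cols;
    Mat resized;
    resize(img, resized, Size(), scale, scale, INTER_AREA);
    return checkforblur(resized);
}

Would this kind of normalization be enough, or does the ratio itself need to change?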