I am working on comparing prints that are 99% identical and differ by about 1%. I capture an image of each print with a vision camera mounted on a fixed stand. I have tried the usual image comparison approaches: OpenCV, ImageMagick, and skimage, but the results were only about 80 to 90 percent accurate.
link : “Diff” an image using ImageMagick
link : How can I quantify difference between two images?
I implemented the solutions from the questions above to find the differences, but the problem is that they all work pixel by pixel; none of them offers a smarter approach to image comparison.
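For reference, the OpenCV-style pixel-by-pixel check I tried looked roughly like this (a minimal sketch, not my exact code; the file names and the threshold of 50 are placeholders):

import cv2

# Sketch of a plain pixel-by-pixel comparison (the kind that does not work well
# here): absolute difference of the two aligned crops, then count changed pixels.
img1 = cv2.imread('./Image_1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('./Image_2.png', cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(img1, img2)
_, mask = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)   # 50 is an arbitrary cut-off
changed = cv2.countNonZero(mask)
total = mask.shape[0] * mask.shape[1]
print("changed pixels: {} of {} ({:.2f}%)".format(changed, total, 100.0 * changed / total))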
After capturing images of two different prints of the same type, I do the following steps for the comparison:
My code for overlapping the misplaced images for maximum similarity, i.e. for image alignment, is:
import cv2
import numpy as np

# load image and convert to grayscale
img = cv2.imread('./photo/image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# binarize (inverted) and find all contours
retval, thresh_gray = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(thresh_gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# keep the contour whose rotated bounding rectangle has the largest area
mx_rect = (0, 0, 0, 0)
mx_area = 0
mx_cnt = None
for cnt in contours:
    arect = cv2.minAreaRect(cnt)
    area = arect[1][0] * arect[1][1]
    if area > mx_area:
        mx_rect, mx_area, mx_cnt = arect, area, cnt

# axis-aligned crop of the largest contour (for reference / debugging)
x, y, w, h = cv2.boundingRect(mx_cnt)
# cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 8)
roi_1 = img[y:y + h, x:x + w]
cv2.imwrite('./test/Image_rec.jpg', roi_1)

print("shape of cnt: {}".format(mx_cnt.shape))
rect = mx_rect   # rotated rectangle of the largest contour (computed above)
print("rect: {}".format(rect))

# corners of the rotated rectangle
box = cv2.boxPoints(rect)
box = np.int0(box)
width = int(rect[1][0])
height = int(rect[1][1])

# perspective-warp the rotated rectangle to an upright, cropped image
src_pts = box.astype("float32")
dst_pts = np.array([[0, height - 1],
                    [0, 0],
                    [width - 1, 0],
                    [width - 1, height - 1]], dtype="float32")
M = cv2.getPerspectiveTransform(src_pts, dst_pts)
warped = cv2.warpPerspective(img, M, (width, height))
cv2.imwrite('./crop_Image_rortate.jpg', warped)
The code above gives the required image: it tries to align the print and crops the region of interest, but it still fails sometimes (about 2 out of 10 cases).
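For completeness, this is roughly what a feature-based alignment (ORB keypoints plus a RANSAC homography) could look like as an alternative for this step; it is only a sketch, and the file names and parameter values are placeholders:

import cv2
import numpy as np

# Sketch: align img2 onto img1 using ORB feature matches and a homography.
img1 = cv2.imread('./Image_1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('./Image_2.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance for ORB's binary descriptors, keep the best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC discards bad matches; H maps img2 coordinates into img1's frame
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(img2, H, (img1.shape[1], img1.shape[0]))
cv2.imwrite('./test/aligned.png', aligned)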
Once the image is cropped, I compare the two crops to find the differences using a clustering-style approach. My comparison code is as follows:
from PIL import Image
import numpy as np
import cv2

f1 = './Image_1.png'
f2 = './Image_2.png'

im1 = Image.open(f1)
im2 = Image.open(f2)
img1 = cv2.imread(f1)
img2 = cv2.imread(f2)

# note: shape[0] is the number of rows (image height), shape[1] the columns (width)
h_1 = img1.shape[0]
w_1 = img1.shape[1]
H_1 = h_1 - 1
W_1 = w_1 - 1

c = 0
X = []   # x (column) coordinates of differing pixels
Y = []   # y (row) coordinates of differing pixels
R = []
G = []
B = []

rgb = im1.convert('RGB')
rgb2 = im2.convert('RGB')

for x in range(W_1):          # PIL getpixel takes (x, y) = (column, row)
    for y in range(H_1):
        r1, g1, b1 = rgb.getpixel((x, y))
        t1 = r1 + g1 + b1
        r2, g2, b2 = rgb2.getpixel((x, y))
        t2 = r2 + g2 + b2
        d = t1 - t2
        if -150 <= d < 150:
            # channel sums are close enough: treat as "same"
            pass
        else:
            c = c + 1
            if c == 1:
                # skip the very first differing pixel, only remember its row
                z = y
            elif y == z + 1:
                # two consecutive differing pixels: re-check the next diagonal
                # neighbour before accepting the difference (groups of 2)
                r2, g2, b2 = rgb2.getpixel((x + 1, y + 1))
                t2 = r2 + g2 + b2
                d = t1 - t2
                if -150 <= d < 150:
                    pass
                else:
                    X.append(x)
                    Y.append(y)
                    R.append(r1)
                    G.append(g1)
                    B.append(b1)
                z = y
            z1 = y   # to make a group of 2

try:
    # build a difference image: differing pixels keep their colour from image 1
    data = np.zeros((h_1, w_1, 3), dtype=np.uint8)
    length = len(X)
    print("total pixel difference:", length)
    for i in range(length):
        data[Y[i], X[i]] = [R[i], G[i], B[i]]   # rows = y, columns = x
    img = Image.fromarray(data, 'RGB')
    img.save('./test/new.png')
    img.show()
except Exception as e:
    print("Error during image creation:", e)
The code above tries to implement a clustering-based comparison, and it is also slow. In every row it skips the first differing pixel even when it is a genuine difference, because it only looks for major (grouped) differences.
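What I am really trying to get at is grouping the differing pixels into regions instead of treating them one by one. A minimal sketch of that idea, clustering the X and Y lists collected above with scikit-learn's DBSCAN (the eps and min_samples values are only guesses):

import numpy as np
from sklearn.cluster import DBSCAN

# Sketch: cluster the (x, y) coordinates of differing pixels into regions.
# X and Y are the lists collected by the comparison code above.
points = np.column_stack([X, Y])

# eps = max pixel distance within a cluster, min_samples filters lone pixels;
# both values would need tuning for real prints.
labels = DBSCAN(eps=5, min_samples=4).fit(points).labels_

n_regions = len(set(labels)) - (1 if -1 in labels else 0)
print("difference regions found:", n_regions)

# bounding box of each region, ignoring noise points (label -1)
for label in set(labels):
    if label == -1:
        continue
    region = points[labels == label]
    x_min, y_min = region.min(axis=0)
    x_max, y_max = region.max(axis=0)
    print("region", label, "box:", (x_min, y_min, x_max, y_max))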
But the fundamental problem remains: it is still a pixel-to-pixel comparison.
Is there a proper clustering technique that will target the real differences?
I do not want to do a pixel-to-pixel comparison, as it gives me incorrect results.
I am also open to other image comparison techniques, if there are any I have not listed.
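To make the kind of answer I am hoping for concrete: something region-based along these lines, where differences are reported as grouped boxes rather than individual pixels (a sketch combining skimage's SSIM map with OpenCV contours; the thresholds and minimum area are guesses):

import cv2
import numpy as np
from skimage.metrics import structural_similarity

# Sketch: region-level difference report using the SSIM map + contours.
# Assumes the two crops are already aligned and the same size.
img1 = cv2.imread('./Image_1.png')
img2 = cv2.imread('./Image_2.png')
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# full=True also returns the per-pixel similarity map
score, diff = structural_similarity(gray1, gray2, full=True)
print("SSIM score:", score)
diff = (diff * 255).astype("uint8")

# low similarity -> white in the mask (Otsu picks the threshold automatically)
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# each sufficiently large contour is one difference region
for cnt in contours:
    if cv2.contourArea(cnt) < 40:   # ignore tiny noise blobs; value is a guess
        continue
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(img2, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite('./test/diff_regions.png', img2)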
Image samples:
Image 1: (image attached)
Image 2: (image attached)
Accepted output: (image attached)
Output after difference: (image attached)
Thanks.