
I am trying to detect whether there is any shift in the x or y direction between two images: one is a reference image and the other is a live image coming from a camera.

The idea is to use the ORB detector to extract keypoints in the two images and then use BFMatcher to find good matches. After that, do further analysis by checking whether the coordinates of the matched keypoints in image1 and image2 agree; if they do, we assume there is no shift. If there is an offset of, for example, 3px in the x direction across the whole set of good matches, then the image is shifted by 3px (maybe there is a better way of doing it?).

So far I am able to get keypoints for the two images; however, I am not sure how to check the coordinates of those good matches in image1 and image2.

import cv2
import numpy as np
import matplotlib.pyplot as plt
import os.path
import helpers

referenceImage = None
liveImage = None
lowe_ratio = 0.75

orb = cv2.ORB_create()
bfMatcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = False)
cap = cv2.VideoCapture(1)


def compareUsingOrb():
    kp1, des1 = orb.detectAndCompute(liveImage, None)
    print("For Live Image it detected %d keypoints" % len(kp1))
    if des1 is None or des2 is None:
        return  # nothing to match against yet
    matches = bfMatcher.knnMatch(des1, des2, k=2)
    goodMatches = []
    for pair in matches:
        if len(pair) < 2:
            continue  # knnMatch can return fewer than k neighbours
        m, n = pair
        if m.distance < lowe_ratio * n.distance:  # Lowe's ratio test
            goodMatches.append([m])
    # TODO: check the x, y coordinates of the good matches
    img4 = cv2.drawKeypoints(referenceImage, kp2, outImage=np.array([]), color=(0, 0, 255))
    cv2.imshow("Keypoints in reference image", img4)
    img3 = cv2.drawMatchesKnn(liveImage, kp1, referenceImage, kp2, goodMatches, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    print("Found %d good matches" % len(goodMatches))
    cv2.imshow("Matches", img3)

# Load the reference image, capturing one frame first if none is stored yet
if not helpers.doesFileExist():
    ret, frame = cap.read()
    cv2.imwrite('referenceImage.png', frame)

referenceImage = cv2.imread('referenceImage.png')
kp2, des2 = orb.detectAndCompute(referenceImage, None)
print("For Reference Image it detected %d keypoints" % len(kp2))

while True:
    ret, liveImage = cap.read()
    cv2.imshow("LiveImage", liveImage)
    compareUsingOrb()
    if(cv2.waitKey(1) & 0xFF == ord('q')):
        break
cap.release()
cv2.destroyAllWindows()

The goal is to detect whether there is a shift between the two images and, if there is, to align them and do an image comparison. Any tips on how to achieve this using OpenCV would be appreciated.
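For the align-then-compare part, here is a minimal sketch of what I have in mind, assuming a whole-pixel (dx, dy) shift has already been estimated (the helper names are mine; `cv2.warpAffine` with the matrix `[[1, 0, -dx], [0, 1, -dy]]` would handle subpixel shifts and avoid wrap-around):

```python
import numpy as np

def align_by_shift(live, dx, dy):
    """Roll the live image back by the detected (dx, dy) shift.

    Whole pixels only; pixels rolled off one edge wrap around to the other,
    so the border should be masked out in a real comparison.
    """
    return np.roll(live, (-int(round(dy)), -int(round(dx))), axis=(0, 1))

def mean_abs_diff(ref, aligned):
    """Mean absolute difference as a crude comparison score."""
    return float(np.mean(np.abs(ref.astype(np.float32) - aligned.astype(np.float32))))
```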

thug_

1 Answer


Basically, you want to know How to get pixel coordinates from Feature Matching in OpenCV Python. Then you need some way to filter outliers. If the only difference between your images is a translation (shift) of the live image, this should be straightforward. But I'd suspect your live image might also be affected by rotation, or by a 3D transformation to some extent. If ORB finds enough features, finding the right transformation with OpenCV isn't hard.
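To make that concrete, here is a sketch (names are illustrative) of pulling the matched pixel coordinates out of the `DMatch` objects and estimating the shift. `m.queryIdx` indexes the keypoints of the first image passed to `knnMatch` (the live image in your code) and `m.trainIdx` the second; note it expects the matches flat (`m`), not wrapped in one-element lists (`[m]`):

```python
import numpy as np

def estimate_shift(kp_live, kp_ref, good_matches):
    """Median (dx, dy) over all good matches.

    kp.pt is the (x, y) pixel coordinate of a keypoint; taking the
    median of the per-match offsets tolerates a few outlier matches.
    """
    pts_live = np.float32([kp_live[m.queryIdx].pt for m in good_matches])
    pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in good_matches])
    dx, dy = np.median(pts_ref - pts_live, axis=0)
    return float(dx), float(dy)
```

If rotation or perspective is also in play, feed the same two point arrays to `cv2.estimateAffinePartial2D(pts_live, pts_ref, method=cv2.RANSAC)` (or `cv2.findHomography`), which estimate the full transform with built-in outlier rejection.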

zteffi
  • "If the only difference between your images is a translation (shift) of the live image, this should be straightforward." Could you please elaborate? :( – Brambor Sep 10 '20 at 19:53
  • I meant that unless they work with 2D graphics, objects will never simply shift; you have to take other 3D transformations and the camera projection into account. – zteffi Sep 14 '20 at 07:56
  • Ok, so I actually did it using two solutions. The regular (3D) one did improve things and seemed to shift it right, but there was a small blur. Then I finally found `from scipy.ndimage import shift` for sub-pixel-precision shifting and `from skimage.feature import register_translation` for sub-pixel shift detection. The results were amazing, better than by hand! – Brambor Sep 15 '20 at 09:12
  • Btw I wasn't asking to elaborate on the problem but on the solution. – Brambor Sep 15 '20 at 09:14
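The `register_translation` approach mentioned in the comments is phase correlation; its whole-pixel core can be sketched in plain NumPy (skimage, now as `skimage.registration.phase_cross_correlation`, adds the subpixel refinement on top of this):

```python
import numpy as np

def phase_correlation_shift(ref, live):
    """Whole-pixel (dy, dx) shift of `live` relative to `ref` via phase correlation."""
    # cross-power spectrum, normalised so only phase differences remain
    F = np.fft.fft2(live) * np.conj(np.fft.fft2(ref))
    corr = np.real(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    # the correlation peak sits at the shift
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # peaks past the midpoint correspond to negative shifts (FFT wrap-around)
    dims = np.array(ref.shape, dtype=float)
    peak[peak > dims / 2] -= dims[peak > dims / 2]
    return peak  # (dy, dx)
```

Unlike feature matching, this uses every pixel, so it needs no keypoints and no outlier filtering, but it only recovers a pure translation.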