
I am using SURF descriptors for image matching. I am planning to match a given image to a database of images.

import cv2
import numpy as np
surf = cv2.xfeatures2d.SURF_create(400)

img1 = cv2.imread('box.png',0)
img2 = cv2.imread('box_in_scene.png',0)

kp1,des1 = surf.detectAndCompute(img1,None)
kp2,des2 = surf.detectAndCompute(img2,None)


bf = cv2.BFMatcher(cv2.NORM_L1,crossCheck=True)
#I am planning to add more descriptors
bf.add(des1)

bf.train()

#This is my test descriptor
bf.match(des2)

The issue with bf.match is that I am getting the following error:

OpenCV Error: Assertion failed (type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U)) in batchDistance, file /build/opencv/src/opencv-3.1.0/modules/core/src/stat.cpp, line 3749
Traceback (most recent call last):
  File "image_match4.py", line 16, in <module>
    bf.match(des2)
cv2.error: /build/opencv/src/opencv-3.1.0/modules/core/src/stat.cpp:3749: error: (-215) type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U) in function batchDistance

The error is similar to this post. The explanation given there is incomplete and inadequate. I want to know how to resolve this issue. I have also used ORB descriptors with a BFMatcher using NORM_HAMMING distance, and the error resurfaces. Any help will be appreciated.

The two images that I have used for this are:

box.png

box_in_scene.png

I am using Python 3.5.2 and OpenCV 3.1.x on Linux.

motiur

5 Answers


To search between descriptors of two images use:

img1 = cv2.imread('box.png',0)
img2 = cv2.imread('box_in_scene.png',0)

kp1,des1 = surf.detectAndCompute(img1,None)
kp2,des2 = surf.detectAndCompute(img2,None)


bf = cv2.BFMatcher(cv2.NORM_L1,crossCheck=False)
matches = bf.match(des1,des2)

To search among multiple images

The add method is used to add descriptors of multiple train images. Once all descriptors are indexed, you run the train method to build an underlying data structure (for example, a KdTree, which is used for searching in the case of FlannBasedMatcher). You can then run match to find which train image is the closest match to a given query image. You can check K-d_tree to see how it can be used to search multidimensional vectors (SURF gives a 64-dimensional vector).

Note: BruteForceMatcher, as the name implies, has no internal search-optimizing data structure, and thus has an empty train method.

Code Sample for Multiple Image search

import cv2
import numpy as np
surf = cv2.xfeatures2d.SURF_create(400)

# Read Images
train = cv2.imread('box.png',0)
test = cv2.imread('box_in_scene.png',0)

# Find Descriptors    
kp1,trainDes1 = surf.detectAndCompute(train, None)
kp2,testDes2  = surf.detectAndCompute(test, None)

# Create BFMatcher and add cluster of training images. One for now.
bf = cv2.BFMatcher(cv2.NORM_L1,crossCheck=False) # crossCheck must be False when matching against an added descriptor collection
clusters = np.array([trainDes1])
bf.add(clusters)

# Train: Does nothing for BruteForceMatcher though.
bf.train()

matches = bf.match(testDes2)
matches = sorted(matches, key = lambda x:x.distance)

# Since, we have index of only one training image, 
# all matches will have imgIdx set to 0.
for i in range(len(matches)):
    print(matches[i].imgIdx)

For DMatch output of bf.match, see docs.

See full example for this here: Opencv3.0 docs.

Other Info

OS: Mac.
Python: 2.7.10.
Opencv: 3.0.0-dev [If I remember correctly, installed using brew].

saurabheights
  • I am using it for multiple images. The above code is the simplest version. The example code you have given works fine for two images. I want to compare descriptor of one image to a list of descriptor of multiple images. The problem happens there. – motiur Oct 09 '16 at 17:05
  • Sorry, i missed that. Give me sometime to see the issue. – saurabheights Oct 09 '16 at 17:10
  • I think I am asking this similar question: http://stackoverflow.com/questions/37731908/opencv2-batchdistance-error-215-when-looping-through-images-while-individual-co?rq=1 – motiur Oct 09 '16 at 17:34
  • Hi, I am facing the same error. Everything seems fine. By the way, BruteForceMatcher is not really happening, as it does no training. http://docs.opencv.org/2.4/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html#descriptormatcher-train – saurabheights Oct 09 '16 at 17:39
  • I also tried FlannMatcher(since my project used FlannMatcher/C++), but faced this opencv bug: https://github.com/opencv/opencv/issues/5667 – saurabheights Oct 09 '16 at 17:39
  • I have tried FlannMatcher too - its just a mess with that. – motiur Oct 09 '16 at 17:41
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/125292/discussion-between-saurabheights-and-motiur). – saurabheights Oct 09 '16 at 17:41
  • I think it still has the same issues when descriptors of multiple images are added as training set. – motiur Oct 09 '16 at 20:00

I found I was getting the same error. It took a while to figure out: some of my images were somewhat featureless, so no keypoints were found, and detectAndCompute returned None for the descriptors. It might be worth checking the list of descriptors for None elements prior to passing it to BFMatcher.add().
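A minimal sketch of that None check, using dummy NumPy arrays in place of real detectAndCompute output (the shapes below are placeholders, not values from the question):

```python
import numpy as np

# Stand-ins for per-image descriptor arrays; a featureless image yields
# None from detectAndCompute instead of an array.
all_descriptors = [
    np.zeros((10, 64), np.float32),  # image with 10 keypoints
    None,                            # featureless image: no descriptors
    np.zeros((5, 64), np.float32),   # image with 5 keypoints
]

# Drop the None entries before handing the list to BFMatcher.add()
valid_descriptors = [d for d in all_descriptors if d is not None]
print(len(valid_descriptors))  # 2
```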

eldorz

I was getting the same error. In my case it was because I was using SIFT with the cv2.NORM_HAMMING metric in cv2.BFMatcher_create. Changing the metric to cv2.NORM_L1 solved the issue.

Citing docs for BFMatcher:

normType – One of NORM_L1, NORM_L2, NORM_HAMMING, NORM_HAMMING2. L1 and L2 norms are preferable choices for SIFT and SURF descriptors, NORM_HAMMING should be used with ORB, BRISK and BRIEF, NORM_HAMMING2 should be used with ORB when WTA_K==3 or 4 (see ORB::ORB constructor description).

Georgy

Edit: versions used are Python 3.6 and OpenCV 3.4.1.

I struggled a lot while preparing a program that uses SIFT or ORB depending on the user's choice. Finally, I found the correct BFMatcher parameters for SIFT and ORB.

import cv2
import numpy as np

# ask user whether to use SIFT or ORB
detect_by = input("sift or orb: ")

  1. Creating the matcher object

    # Compare strings with ==, not "is" (identity comparison is unreliable)
    if detect_by == "sift":
        matcher = cv2.BFMatcher(normType=cv2.NORM_L2, crossCheck=False)

    elif detect_by == "orb":
        matcher = cv2.BFMatcher(normType=cv2.NORM_HAMMING, crossCheck=False)

  2. While capturing and processing frames

    while there_is_frame_to_process:
        if detect_by == "sift":
            matches = matcher.knnMatch(np.asarray(gray_des, np.float32), np.asarray(target_des, np.float32), k=2)

        elif detect_by == "orb":
            matches = matcher.knnMatch(np.asarray(gray_des, np.uint8), np.asarray(target_des, np.uint8), k=2)
Ali Eren Çelik

In my case, using ORB, the issue was that it could not find features in the frame; checking whether the descriptors were empty fixed it.

qImageKeypoints, qImageDescriptors = orb.detectAndCompute(query_img_bw, None)
trainKeypoints, trainDescriptors = orb.detectAndCompute(train_img_bw, None)

if trainDescriptors is None:
    return False
else:
    # check some matching of the two images
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING,crossCheck=False)
    matches = matcher.match(qImageDescriptors, trainDescriptors)
wanjiku