I want to implement BRISK using Python and OpenCV for Feature Detection and Description in drone images. Since BRISK is also a descriptor, I want to use its description features to match two images. How do I do it?
You can perform Feature Detection and Description with the Local Binary Descriptor BRISK, and then use the Brute-Force or FLANN matchers to do Feature Matching with Python and OpenCV. In this example, I will show you Feature Detection and Matching with BRISK through the Brute-Force matcher.
First, load the input image and the image that will be used for training. In this example, we are using these images:

image1:

image2:
# Imports
import cv2 as cv
import matplotlib.pyplot as plt
# Open and convert the input and training-set image from BGR to GRAYSCALE
image1 = cv.imread(filename = 'image1.jpg',
flags = cv.IMREAD_GRAYSCALE)
image2 = cv.imread(filename = 'image2.jpg',
flags = cv.IMREAD_GRAYSCALE)
Note that when loading the images we pass the flags = cv.IMREAD_GRAYSCALE parameter, because OpenCV's default color mode is BGR, while descriptors work on single-channel images. Reading in grayscale directly saves a separate BGR-to-grayscale conversion step.
Now we will use the BRISK algorithm:
# Initiate BRISK descriptor
BRISK = cv.BRISK_create()
# Find the keypoints and compute the descriptors for input and training-set image
keypoints1, descriptors1 = BRISK.detectAndCompute(image1, None)
keypoints2, descriptors2 = BRISK.detectAndCompute(image2, None)
The features detected by the BRISK algorithm can be matched between different images to find objects or patterns they have in common.
Now we will use the Brute-Force matcher:
# create BFMatcher object
BFMatcher = cv.BFMatcher(normType = cv.NORM_HAMMING,
crossCheck = True)
# Matching descriptor vectors using Brute Force Matcher
matches = BFMatcher.match(queryDescriptors = descriptors1,
trainDescriptors = descriptors2)
# Sort them in the order of their distance
matches = sorted(matches, key = lambda x: x.distance)
# Draw first 15 matches
output = cv.drawMatches(img1 = image1,
keypoints1 = keypoints1,
img2 = image2,
keypoints2 = keypoints2,
matches1to2 = matches[:15],
outImg = None,
flags = cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.imshow(output)
plt.show()
And the output will be:
This technique is widely used in image retrieval, motion tracking, object detection, recognition and tracking, and 3D object reconstruction, among other applications. And since you can easily change how the images are loaded, the technique can be applied directly to your problem.
To learn more about Detection, Description, and Feature Matching techniques, Local Feature Descriptors, Local Binary Descriptors, and algorithms for Feature Matching, I recommend the following repositories on GitHub: