
I have two images that are similar but differ in orientation and size. One example can be seen below:

[two example images: the same subject at different orientations and scales]

Is there a way to match the two images?

I have used Procrustes shape analysis, but are there other ways?

  • You need to use a log polar comparison... no idea if it's already implemented in MATLAB or in a MATLAB toolbox... – Alexxx Mar 18 '15 at 16:21
  • I would use image registration. Find a set of keypoints that match between the two images, then find a homography / transformation matrix that aligns the two together. BTW, before I write an answer, do you have the Computer Vision Toolbox? – rayryeng Mar 18 '15 at 16:24
  • This might give you some ideas: [Algorithm improvement for Coca-Cola can shape recognition](http://stackoverflow.com/q/10168686/2545927) – kkuilla Mar 18 '15 at 21:18
  • @kkuilla - Great link. I've favourited that one for quite some time. – rayryeng Mar 18 '15 at 21:25

4 Answers


Here's something to get you started. What you are asking about is a classic problem known as image registration. Image registration seeks to find the homography that takes one image and aligns it with another. This involves finding interest points (keypoints) that are common between the two images and determining which keypoints match up between them. Once you have these pairs of points, you estimate a homography matrix and use it to warp one of the images so that it aligns with the other.
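To make "homography" concrete: in homogeneous coordinates, it is a 3 x 3 matrix H, defined up to scale, that maps each point in one image to its match in the other:

```latex
% x = (u, v, 1)^T is a point in the first image; x' is its match in the second.
% H has 8 degrees of freedom, so 4 point correspondences are enough to determine it.
\mathbf{x}' \sim H\,\mathbf{x}, \qquad
H = \begin{bmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{bmatrix}
```

In practice you collect many noisy correspondences and fit H robustly, which is exactly what the pipeline below does.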

I'm going to assume that you have the Computer Vision and Image Processing Toolboxes that are part of MATLAB. If you don't, then the answer that Maurits gave is a good alternative, and the VLFeat Toolbox is one that I have also used.

First, let's read in the images directly from StackOverflow:

im = imread('https://i.stack.imgur.com/vXqe8.png');
im2 = imread('https://i.stack.imgur.com/Pd7pt.png');

im_gray = rgb2gray(im);
im2_gray = rgb2gray(im2);

We also need to convert to grayscale, as the keypoint detection algorithms require a grayscale image. Next, we can use any feature detection algorithm that's part of MATLAB's Computer Vision System Toolbox. I'm going to use SURF as it is essentially the same as SIFT, but with some minor yet key differences. You can use the detectSURFFeatures function that's part of the toolbox; it accepts grayscale images. The output is a SURFPoints object that contains information about each feature point the algorithm detected in the image. Let's apply that to both of the images (in grayscale).

points = detectSURFFeatures(im_gray);
points2 = detectSURFFeatures(im2_gray);

Once we detect the features, it's now time to extract the descriptors that describe these keypoints. That can be done with extractFeatures. This takes in a grayscale image and the corresponding SURFPoints object output from detectSURFFeatures. The output is a set of features and valid keypoints after some post-processing.

[features1, validPoints1] = extractFeatures(im_gray, points);
[features2, validPoints2] = extractFeatures(im2_gray, points2);

Now it's time to match the features between the two images. That can be done with matchFeatures and it takes in the features between the two images:

indexPairs = matchFeatures(features1, features2);

indexPairs is an M x 2 matrix where each row is one match: the first column is the index of a feature from the first image, and the second column is the index of the feature it matched in the second image. We use this to index into our valid points and keep only those that actually matched.

matchedPoints1 = validPoints1(indexPairs(:, 1), :);
matchedPoints2 = validPoints2(indexPairs(:, 2), :);

We can then show which points matched by using showMatchedFeatures like so. We can put both images side by side and draw lines between the matching keypoints to see which matched.

figure;
showMatchedFeatures(im, im2, matchedPoints1, matchedPoints2, 'montage');

This is what I get:

[montage of the two images with lines drawn between the matching keypoints]

It isn't perfect, but it most certainly finds consistent matches between the two images.

Now what we need to do next is find the homography matrix and warp the images. I'm going to use estimateGeometricTransform so that we can find a transformation that warps one set of points to another. As Dima noted in his comments to me below, this robustly determines the best homography matrix via RANSAC. We can call estimateGeometricTransform like so:

tform = estimateGeometricTransform(matchedPoints1.Location,...
     matchedPoints2.Location, 'projective');

The first input takes in a set of input points, which are the points that you want to transform. The second input takes in a set of base points, which are the reference points that we want to match up to.

In our case, we want to warp the points from the first image (the person standing up) to match the second image (the person leaning to the side), so the first input is the points from the first image, and the second input is the points from the second image.

For the matched points, we want to reference the Location field because this contains the coordinates where the points actually matched between the two images. We also use 'projective' to account for scale, rotation, shearing and perspective distortion. The output is a projective2d object containing our transformation.

What we will do next is use imwarp to warp the first image so that it lines up with the second.

out = imwarp(im, tform);

out will contain our warped image. If we showed the second image and this output image side by side:

figure;
subplot(1,2,1);
imshow(im2);
subplot(1,2,2);
imshow(out);

This is what we get:

[the second image and the warped first image shown side by side]

I'd say that's pretty good, don't you think?


For your copying and pasting pleasure, here's what the full code looks like:

im = imread('https://i.stack.imgur.com/vXqe8.png');
im2 = imread('https://i.stack.imgur.com/Pd7pt.png');

im_gray = rgb2gray(im);
im2_gray = rgb2gray(im2);

points = detectSURFFeatures(im_gray);
points2 = detectSURFFeatures(im2_gray);

[features1, validPoints1] = extractFeatures(im_gray, points);
[features2, validPoints2] = extractFeatures(im2_gray, points2);

indexPairs = matchFeatures(features1, features2);

matchedPoints1 = validPoints1(indexPairs(:, 1), :);
matchedPoints2 = validPoints2(indexPairs(:, 2), :);

figure;
showMatchedFeatures(im, im2, matchedPoints1, matchedPoints2, 'montage');

tform = estimateGeometricTransform(matchedPoints1.Location,...
   matchedPoints2.Location, 'projective');

out = imwarp(im, tform);

figure;
subplot(1,2,1);
imshow(im2);
subplot(1,2,2);
imshow(out);

Aside

Bear in mind that I used the default parameters for everything... so detectSURFFeatures, matchFeatures, etc. You may have to play around with the parameters to get consistent results across different pairs of images. I'll leave that to you as an exercise. Take a look at the documentation for each of the functions above so that you can tune the parameters to suit your needs.
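As a sketch of the kinds of knobs available (the values below are illustrative only, not tuned for these images), the detection, matching, and estimation steps all accept name-value options:

```matlab
% Illustrative parameter values -- tune per image pair.
points = detectSURFFeatures(im_gray, 'MetricThreshold', 500); % lower threshold -> more (weaker) keypoints

indexPairs = matchFeatures(features1, features2, ...
    'MaxRatio', 0.7, ...        % looser Lowe ratio test -> more tentative matches
    'MatchThreshold', 10);      % accept weaker descriptor matches

tform = estimateGeometricTransform(matchedPoints1.Location, ...
    matchedPoints2.Location, 'projective', ...
    'MaxDistance', 1.5);        % RANSAC inlier reprojection tolerance in pixels
```

Loosening the matcher gives RANSAC more candidate pairs to work with, at the cost of more outliers for it to reject.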


Have fun, and good luck!

rayryeng
  • `cp2tform` has been deprecated. Use `fitgeotform` to do a linear least-squares fit, or `estimateGeometricTransform` to do a robust fit using RANSAC. Also, `imtransform` has been deprecated in favor of `imwarp`. – Dima Mar 18 '15 at 17:19
  • @Dima - `fitgeotform` I don't have, which is why I used `cp2tform`. I'm using MATLAB R2013a. If I had a more recent version of MATLAB, I would most certainly use `fitgeotform`. I do have `imwarp` though. FWIW, thank you very much for the tip. – rayryeng Mar 18 '15 at 17:21
  • @Dima - Nice. Looks like I have `estimateGeometricTransform`. I'm going to modify my answer. – rayryeng Mar 18 '15 at 17:24
  • 1
    for feature matching `estimateGeometricTransform` is better, because it is robust to outliers. If you have an older release, you can use the `vision.GeometricTransformEstimator` object. – Dima Mar 18 '15 at 17:24
  • @Dima - Yup. I'll experiment and modify my answer accordingly. Thanks! – rayryeng Mar 18 '15 at 17:25
  • Correction: it's `fitgeotrans`. – Dima Mar 18 '15 at 17:39
  • @Dima Thank you! I still don't have that function though :(. – rayryeng Mar 18 '15 at 17:40
  • @rayryeng +1 here is one similar using OpenCV in MATLAB (with [mexopencv](https://github.com/kyamagu/mexopencv/) as wrapper): http://stackoverflow.com/a/20481266/97160 – Amro Jul 15 '15 at 20:07
  • @Amro - Great way to showcase what `mexopencv` does. +1 for you as well. – rayryeng Jul 15 '15 at 20:15

Check out the Find Image Rotation and Scale Using Automated Feature Matching example in the Computer Vision System Toolbox.

[screenshot from the example: two images with matched features]

It shows how to detect interest points, extract and match feature descriptors, and compute the transformation between the two images.

Dima
  • @rayryeng, there are a lot of them. You should check them out :) http://www.mathworks.com/help/vision/examples.html – Dima Mar 18 '15 at 17:34

You can get a reasonable result doing the following:

  • Detect SIFT points in both images. For example with the vlfeat lib.
  • Match SIFT descriptors using Lowe's rule, implemented by vlfeat with vl_ubcmatch
  • Use RANSAC to find a subset of matching points that adhere to some homography H. An example using Harris feature points can be seen on the site of Peter Kovesi, next to RANSAC itself and other useful functions.
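The steps above could be sketched in MATLAB roughly as follows. This is an untested sketch: it assumes VLFeat is on the path (via vl_setup), the file names are placeholders, and I've substituted estimateGeometricTransform from the Computer Vision System Toolbox for the RANSAC step rather than Kovesi's code.

```matlab
% Read and prepare images; vl_sift requires single-precision grayscale input.
im1 = single(rgb2gray(imread('image1.png')));  % placeholder file names
im2 = single(rgb2gray(imread('image2.png')));

% 1) Detect SIFT keypoints (f: frames, rows 1:2 are x,y) and descriptors (d)
[f1, d1] = vl_sift(im1);
[f2, d2] = vl_sift(im2);

% 2) Match descriptors using Lowe's ratio rule (1.5 is VLFeat's default threshold)
matches = vl_ubcmatch(d1, d2, 1.5);

% Coordinates of the matched keypoints as M x 2 arrays
pts1 = f1(1:2, matches(1, :))';
pts2 = f2(1:2, matches(2, :))';

% 3) Robustly fit a homography to the matches with RANSAC
tform = estimateGeometricTransform(pts1, pts2, 'projective');
```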

The initial result:

[initial registration result: the two images aligned]

Maurits

This is a rather different approach from the rest. Feature-based registration methods are more robust, but this approach may still be useful for your application, so I'm writing it down here.

  • Load the reference image (I'll call it model) and the image you want to compare with the model
  • Calculate the histograms of the two images
  • Compare the two histograms. There are many ways of doing this. Here I'll be using intersection and correlation
  • Histogram Intersection : Calculate the histogram intersection and normalize it by dividing it by the number of pixels in model histogram. This will give you a value between 0 and 1.

    image = imread('Pd7pt.png');        % image to compare
    model = imread('vXqe8.png');        % reference image
    grImage = rgb2gray(image);
    grModel = rgb2gray(model);
    hImage = imhist(grImage);           % 256-bin grayscale histograms
    hModel = imhist(grModel);
    
    % Intersection, normalized by the model's pixel count (range [0, 1])
    normhistInterMeasure = sum(min(hImage, hModel))/sum(hModel)
    % 2-D correlation coefficient between the two histograms
    corrMeasure = corr2(hImage, hModel)
    
  • For intersection and correlation I get 0.2492 and 0.9999 respectively.

dhanushka