
Here's what I would like to do:

I have two similar images that may differ in position, so I used the SURF feature detector, matched the features between the two images, and obtained a transformation matrix. I then warped the first image with that transformation matrix. In the result there is still a minor shift relative to the second image, so I can't simply subtract the images to find the differences. How can I detect the differences and mark them by drawing circles around them?

I'm working in MATLAB and Python.

Here is my MATLAB code.

%% Step 1: Read Images
% Read the reference image containing the object of interest.
oimg1 = imread('test3_im1.jpg');
img1 = imresize(rgb2gray(oimg1),0.2);
figure;
imshow(img1);
title('First Image');

%%
% Read the target image containing a cluttered scene.
oimg2 = imread('test3_im2.jpg');
img2 = imresize(rgb2gray(oimg2),0.2);
figure; 
imshow(img2);
title('Second Image');

%% Step 2: Detect Feature Points
% Detect feature points in both images.
points1 = detectSURFFeatures(img1);
points2 = detectSURFFeatures(img2);

%% 
% Visualize the strongest feature points found in the reference image.
figure; 
imshow(img1);
title('500 Strongest Feature Points from Box Image');
hold on;
plot(selectStrongest(points1, 500));

%% 
% Visualize the strongest feature points found in the target image.
figure; 
imshow(img2);
title('500 Strongest Feature Points from Scene Image');
hold on;
plot(selectStrongest(points2, 500));

%% Step 3: Extract Feature Descriptors
% Extract feature descriptors at the interest points in both images.
[features1, points1] = extractFeatures(img1, points1);
[features2, points2] = extractFeatures(img2, points2);

%% Step 4: Find Putative Point Matches
% Match the features using their descriptors. 
pairs = matchFeatures(features1, features2);

%% 
% Display putatively matched features. 
matchedPoints1 = points1(pairs(:, 1), :);
matchedPoints2 = points2(pairs(:, 2), :);
figure;
showMatchedFeatures(img1, img2, matchedPoints1, matchedPoints2, 'montage');
title('Putatively Matched Points (Including Outliers)');

%% Step 5: Locate the Object in the Scene Using Putative Matches
% |estimateGeometricTransform| calculates the transformation relating the
% matched points, while eliminating outliers. This transformation allows us
% to localize the object in the scene.
[tform, inlierPoints1, inlierPoints2] = ...
    estimateGeometricTransform(matchedPoints1, matchedPoints2, 'affine');
% tform_m = cp2tform(inlierPoints1,inlierPoints2,'piecewise linear');
% TFORM = cp2tform(movingPoints,fixedPoints,'piecewise linear')
%%
% Display the matching point pairs with the outliers removed
showMatchedFeatures(img1, img2, inlierPoints1, inlierPoints2, 'montage');
title('Matched Points (Inliers Only)');

%% detect difference
% Warp into the coordinate frame of the second image so that both
% arrays have the same size and origin. Note that tform was estimated
% on the 0.2-scale images, so it must be applied to img1, not to the
% full-size oimg1 (or be re-estimated at full resolution).
imgw = imwarp(img1, tform, 'OutputView', imref2d(size(img2)));
% imabsdiff avoids the uint8 saturation of abs(gim1 - gim2), which
% silently clips negative differences to zero.
sub = imabsdiff(imgw, img2);
imshow(sub);
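Because a small residual shift survives the warp, a plain per-pixel subtraction flags every edge in the scene. One shift-tolerant alternative, sketched here in Python/NumPy (the function name `shift_tolerant_diff` is illustrative; it assumes two grayscale arrays of equal size that are already roughly aligned, with a residual misalignment of at most a few pixels), is to take, for each pixel, the minimum absolute difference over a small window of integer shifts:

```python
import numpy as np

def shift_tolerant_diff(a, b, max_shift=2):
    """Per-pixel minimum absolute difference between a and every
    integer shift of b within [-max_shift, max_shift], so that a
    residual misalignment of up to max_shift pixels is not flagged.
    Note: np.roll wraps around at the borders, so results within
    max_shift pixels of the image edge are unreliable."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    best = np.full(a.shape, np.inf)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            best = np.minimum(best, np.abs(a - shifted))
    return best
```

Thresholding the result gives a change mask; circles can then be drawn around its connected regions. One caveat: this one-directional comparison can miss changes smaller than the shift window, so for small blobs it is safer to also compute `shift_tolerant_diff(b, a)` and take the element-wise maximum.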
discover
  • You need to put some code to show what you have done – Dark Matter Mar 16 '16 at 06:46
  • Can we see the source and difference images ? There can be different reasons. –  Mar 16 '16 at 08:36
  • Can't you first register the images using MATLAB? To do registration you need to select some keypoints with matched correspondences in the two images. Then I believe you can use the RANSAC algorithm in MATLAB to get the transformation matrix, which will help you recover the original image. Then follow JCKaz's advice. – roni Mar 17 '16 at 08:48
  • You can follow something like I did in http://stackoverflow.com/questions/35909833/align-already-captured-rgb-and-depth-images, which registers an RGB image and a depth image. – roni Mar 17 '16 at 08:49
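roni's RANSAC suggestion can be sketched for the simplest model, a pure translation, in NumPy alone (the function below is an illustrative sketch, not MATLAB's `estimateGeometricTransform`; `p1` and `p2` are assumed to be N×2 arrays of matched keypoint coordinates):

```python
import numpy as np

def ransac_translation(p1, p2, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation mapping points p1 -> p2 by RANSAC:
    repeatedly pick one correspondence, propose its displacement as
    the model, and keep the model with the most inliers. Returns the
    translation refined as the mean displacement of the inliers,
    plus the inlier mask."""
    rng = np.random.default_rng(seed)
    d = p2 - p1                      # per-match displacement vectors
    best_inliers = np.zeros(len(d), dtype=bool)
    for _ in range(iters):
        t = d[rng.integers(len(d))]  # model from one random match
        inliers = np.linalg.norm(d - t, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return d[best_inliers].mean(axis=0), best_inliers
```

The same one-sample-per-iteration scheme extends to affine models by sampling three correspondences and solving the 6-parameter system, which is essentially what `estimateGeometricTransform` does internally.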

2 Answers


Match the position, then run:

I1 = imread('image1.jpg');
I2 = imread('image2.jpg');
% uint8 saturation clips differences below 40 to zero, discarding
% small (noise-level) differences...
Idif = uint8(abs(double(I1)-double(I2)))-40;
% ...then the surviving differences are amplified by 20 (again
% saturating at 255).
Idif = uint8(20*Idif);
imshow(Idif)
hold on
himage = imshow(I1);
set(himage, 'AlphaData', 0.4);   % overlay the original at 40% opacity

Then just add the circles if necessary. This code will find and highlight the differences. I hope it helps.
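Since the question also mentions Python, the same threshold-and-amplify trick can be written in NumPy (the function name `enhance_diff` is illustrative; it assumes two aligned arrays of the same shape, and `np.maximum`/`np.clip` stand in for the uint8 saturation of the MATLAB original):

```python
import numpy as np

def enhance_diff(i1, i2, floor=40, gain=20):
    """Absolute difference, with values below `floor` cut to zero
    (the role of uint8 saturation in the MATLAB snippet) and the
    rest amplified by `gain`, clipped back to the 0-255 range."""
    d = np.abs(i1.astype(np.int32) - i2.astype(np.int32)) - floor
    return np.clip(gain * np.maximum(d, 0), 0, 255).astype(np.uint8)
```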

jkazan
  • Thanks, JCKaz. In my case, the locations of the two images are not the same, so I cannot use a plain subtraction; the second image is shifted relative to the first. – discover Mar 16 '16 at 12:51
  • Is it not possible for you to first match/sync the position? – jkazan Mar 16 '16 at 13:06
  • That is the problem I have. Do you have any idea? – discover Mar 16 '16 at 13:16
  • Could you make your files available to me? Either post a link or send to: johannes.kazantzidis13@alumni.imperial.ac.uk – jkazan Mar 16 '16 at 13:37
  • Sorry. The images are private. You can imagine yourself like that. – discover Mar 16 '16 at 17:10
  • Of course, I understand. Must the solution be fully automated, or could you edit the images prior to running the code? That way you could match the position in e.g. Photoshop. If you like, you could take two arbitrary images and send them so that I can try to solve it. That way I won't need to ask 20 questions about the disposition, e.g. whether the number of pixels is the same but the images are shifted, or whether the sizes of the images don't match, etc. – jkazan Mar 17 '16 at 06:39

I'm not entirely sure it will solve your problem, but you might want to look at Template Matching from scikit-image to locate the apparent sub-image. From your description it seems you have already done something like this but still have some residual positional difference. If the differences are small, consider allowing a tolerance and testing the average difference at every offset in a window. Say your sub-image is at position i,j: testing the average difference at every offset in [i-10, i+10], [j-10, j+10] will give you the one position where that number is smallest, and odds are that is the correct position (note, however, that this can be computationally intensive). From that point, just do as you yourself suggested to highlight the differences.
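The window search described above can be sketched in NumPy (the function name `best_offset` is illustrative; it assumes two grayscale arrays of equal shape and searches pure integer translations only, scoring each offset by the mean absolute difference over the overlapping region):

```python
import numpy as np

def best_offset(ref, img, max_shift=10):
    """Try every integer offset (dy, dx) in [-max_shift, max_shift]^2
    and return the one whose mean absolute difference over the region
    where ref and the shifted img overlap is smallest, together with
    that difference."""
    ref = ref.astype(np.float64)
    img = img.astype(np.float64)
    h, w = ref.shape
    best = (None, np.inf)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping region of ref and img shifted by (dy, dx)
            ry0, ry1 = max(0, dy), min(h, h + dy)
            rx0, rx1 = max(0, dx), min(w, w + dx)
            r = ref[ry0:ry1, rx0:rx1]
            m = img[ry0 - dy:ry1 - dy, rx0 - dx:rx1 - dx]
            mad = np.abs(r - m).mean()
            if mad < best[1]:
                best = ((dy, dx), mad)
    return best
```

Once the best offset is known, the two images can be cropped to their overlap at that offset and the differences contrasted as suggested in the other answer. The search is O((2·max_shift+1)²) image comparisons, which is the "computationally intensive" part; `skimage.feature.match_template` does an equivalent normalized cross-correlation search far more efficiently.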

armatita