
I am totally new to OpenCV and have just started to dive into it, but I could use a little help.

So I want to combine these 2 images:

[image: the two input image stripes]

I would like the 2 images to match along their edges (ignoring the far-right part of the image for now).

Can anyone please point me in the right direction? I have tried using the findTransformECC function. Here's my implementation:

cv::Mat im1 = [imageArray[1] CVMat3];
cv::Mat im2 = [imageArray[0] CVMat3];

// Convert images to grayscale (COLOR_BGR2GRAY is the modern constant;
// CV_BGR2GRAY is the legacy C name)
cv::Mat im1_gray, im2_gray;
cv::cvtColor(im1, im1_gray, cv::COLOR_BGR2GRAY);
cv::cvtColor(im2, im2_gray, cv::COLOR_BGR2GRAY);

// Define the motion model
const int warp_mode = cv::MOTION_AFFINE;

// Set a 2x3 or 3x3 warp matrix depending on the motion model.
cv::Mat warp_matrix;

// Initialize the matrix to identity
if ( warp_mode == cv::MOTION_HOMOGRAPHY )
    warp_matrix = cv::Mat::eye(3, 3, CV_32F);
else
    warp_matrix = cv::Mat::eye(2, 3, CV_32F);

// Specify the number of iterations.
int number_of_iterations = 50;

// Specify the threshold of the increment
// in the correlation coefficient between two iterations
double termination_eps = 1e-10;

// Define termination criteria
cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, number_of_iterations, termination_eps);

// Run the ECC algorithm. The results are stored in warp_matrix.
cv::findTransformECC(
                 im1_gray,
                 im2_gray,
                 warp_matrix,
                 warp_mode,
                 criteria
                 );

// Storage for warped image.
cv::Mat im2_aligned;

if (warp_mode != cv::MOTION_HOMOGRAPHY)
    // Use warpAffine for Translation, Euclidean and Affine
    cv::warpAffine(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
else
    // Use warpPerspective for Homography
    cv::warpPerspective(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);


UIImage* result =  [UIImage imageWithCVMat:im2_aligned];
return result;

I have tried playing around with termination_eps and number_of_iterations, increasing and decreasing those values, but it didn't really make a big difference.

So here's the result:

[image: result of the ECC alignment]

What can I do to improve my result?

EDIT: I have marked the problematic edges with red circles. The goal is to warp the bottom image so that it matches the lines of the image above:

[image: problematic edges marked with red circles]

I did a little bit of research and I'm afraid the findTransformECC function won't give me the result I'd like to have :-(

Something important to add: I actually have an array of those image "stripes" (8 in this case). They all look similar to the images shown here, and they all need to be processed so that the lines match. I have tried experimenting with the stitch function of OpenCV, but the results were horrible.

EDIT:

Here are the 3 source images:

[source image 1]

[source image 2]

[source image 3]

The result should be something like this:

[image: desired result]

I transformed every image along the lines that should match. Lines that are too far away from each other can be ignored (the shadow and the piece of road in the right portion of the image).

gasparuff
  • Can you show us an example of what you want your result to look like? – Abdulrahman Alhadhrami Jan 12 '17 at 09:37
  • @alhadhrami Of course. I have added some details to the question. – gasparuff Jan 12 '17 at 18:00
  • You misunderstood me. I was asking to see an example of a correct output, an output that you would like your code to create. @gasparuff – Abdulrahman Alhadhrami Jan 15 '17 at 07:07
  • In addition, can you provide the two images so that I can run tests of my own? – Abdulrahman Alhadhrami Jan 15 '17 at 07:09
  • Hi @alhadhrami. Sorry for my late response, lots of things were going on. I'll update my question tonight and add the information you asked for. Thank you – gasparuff Jan 19 '17 at 14:53
  • Question is updated :-) – gasparuff Jan 19 '17 at 18:14
  • in your desired result, are you sure about the discontinuities at that blue "spikes" near the middle of the images and the parts on the far-right? It looks like you want "some" parts to be continuous and others not, is there any rule behind this? – Micka Jan 26 '17 at 10:30
  • You could try a Hough Algorithm for deskewing pictures to find "irritations" in your aligned images and repeat that algorithm until all your desired lines are ... well lined up :-) – Flocke Jan 26 '17 at 18:50
  • @Micka yes, those can be ignored. If they exceed a certain distance, they shouldn't have any impact on the transformation matrix. – gasparuff Jan 27 '17 at 20:36
  • @Flocke Yeah, I stumbled across the hough algorithm yesterday and started thinking about how I could use that to solve my problem. Unfortunately I have no clue how to do that – gasparuff Jan 27 '17 at 20:39

2 Answers


From your images, it seems that they overlap. Since you said the stitch function didn't give you the desired results, implement your own stitching. I'm trying to do something close to that too. Here is a tutorial on how to implement it in C++: https://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/

  • Thanks for that input. Indeed I have checked that out but the results aren't really useful yet. I'm still figuring out. – gasparuff Jan 25 '17 at 19:42

You can use the Hough algorithm with a high threshold on the two images and then compare the vertical lines in both of them - most of them should be shifted a bit, but keep the same angle.

This is what I've got from running this algorithm on one of the pictures:

[image: Hough lines detected on the first example]

Filtering out the horizontal lines should be easy (as they are represented as Vec4i), and then you can align the remaining lines together.

Here is the example of using it in OpenCV's documentation.

UPDATE: another thought. Aligning the lines together can be done with a concept similar to how the cross-correlation function works. It doesn't matter if picture 1 has 10 lines and picture 2 has 100 lines; the shift position with the most lines aligned (which is, mostly, the maximum of the CCF) should be pretty close to the answer, though this might require some tweaking - for example, giving weight to every line based on its length, angle, etc. Computer vision never has a direct way, huh :)

UPDATE 2: I actually wonder if taking the bottom pixel row of the top image as array 1 and the top pixel row of the bottom image as array 2, running a general CCF over them, and then using its maximum as the shift could work too... But I think it would be a known method if it worked well.

Leontyev Georgiy
  • Thanks for your valuable input. Although I don't really know how to put this theory into code. Your UPDATE 2 is something I have been thinking about as well. But I need to find out how CCF works. – gasparuff Jan 27 '17 at 21:09
  • @gasparuff CCF(A, B) == REV_FFT(FFT(A)*CONJ(FFT(B))), where A, B - input vectors of power of 2 length, fill remainders with zeroes, FFT is fast Fourier transform, and REV_FFT is a reverse one, CONJ is complex conjugate. A and B's imaginary parts are zeroes initially, real parts are greyscale values of corresponding pixel lines. CCF's result is complex too, gotta transform it to (re^2)+(im^2) for every element. – Leontyev Georgiy Jan 27 '17 at 22:02
  • Maximum element of result will deliver the shift you need. – Leontyev Georgiy Jan 27 '17 at 22:04
  • Can you show me an example of how to implement that? I'm still looking for an answer :-( – gasparuff Feb 01 '17 at 15:32