I'm working on an image stitching project using OpenCV 2.3.1 in Visual Studio 2010.
I'm currently having 2 problems.
(My reputation is below 10, so I can only post 2 hyperlinks in this post. I'll post the other 2 in the comments.)
I followed the steps described in the following link: Stitching 2 images in opencv
- Finding SURF features in both images and match them
- Removing outliers with RANSAC
- Computing Homography
- Warping the target image to the reference image
and the picture below is the result I currently have:
The two images were taken with the camera at the same position but pointing in different directions (I used a tripod).
Then I tried another test. This time I still took 2 images with the same camera, but I moved the camera a bit from its original position before taking the second picture. The result is rather terrible, as shown:
Problem 1: Does this mean that if the 2 cameras are at different positions, the standard panorama stitching technique (based on a homography / camera rotational model) won't work?
I tried to stitch images taken at different positions because in the future I would like to run the stitching algorithm on 2 cameras at different positions so as to widen the FOV, sort of like this: (I'll post the picture in the comment, please check "Widen FOV")
but now it looks like I'm going down the wrong path :(
I also found out that, within the algorithm, feature detection and matching take most of the running time.
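To check where the time goes, I timed the feature step roughly like this (a simplified sketch wrapped around the detection/description calls shown further down, not my exact measurement code):

double t = (double)getTickCount();
// ... SURF keypoint detection + descriptor computation (see code below) ...
t = ((double)getTickCount() - t) / getTickFrequency();
cout << "feature detection + description: " << t << " s" << endl;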
Problem 2: Can I compute features only in a certain part (the overlap area) of the 2 images and still compute the homography and perform the transformation, i.e. NOT process the whole image?
I'm thinking this way because it shouldn't be necessary to compute features over the whole image if I specify how much the 2 images overlap. If I can compute and match features only in the overlap area, it should greatly increase the speed (see the second code fragment below).
The first code fragment below is the original version, which computes features across the whole of both images.
//-- Detect SURF keypoints in both frames
int minHessian = 3000;
SurfFeatureDetector detector( minHessian );
vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( frm1, keypoints_1 );
detector.detect( frm2, keypoints_2 );

//-- Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( frm1, keypoints_1, descriptors_1 );
extractor.compute( frm2, keypoints_2, descriptors_2 );
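For completeness, the matching / homography / warping part of my pipeline follows the approach from the linked answer and looks roughly like the sketch below (simplified; the distance threshold and the output-canvas size here are just illustrative, not my exact code):

// Simplified sketch of steps 2-4: matching, RANSAC homography, warping.
// (findHomography is in opencv2/calib3d, warpPerspective in opencv2/imgproc)
FlannBasedMatcher matcher;
vector<DMatch> matches;
matcher.match( descriptors_1, descriptors_2, matches );

//-- Keep only reasonably good matches (simple distance heuristic)
double min_dist = 100;
for( size_t i = 0; i < matches.size(); i++ )
    if( matches[i].distance < min_dist ) min_dist = matches[i].distance;

vector<Point2f> pts1, pts2;
for( size_t i = 0; i < matches.size(); i++ )
{
    if( matches[i].distance < 3 * min_dist )
    {
        pts1.push_back( keypoints_1[ matches[i].queryIdx ].pt );
        pts2.push_back( keypoints_2[ matches[i].trainIdx ].pt );
    }
}

//-- RANSAC rejects the remaining outliers while estimating the homography
Mat H = findHomography( Mat(pts2), Mat(pts1), CV_RANSAC );

//-- Warp the target frame (frm2) into the reference frame (frm1)
Mat result;
warpPerspective( frm2, result, H, Size( frm1.cols + frm2.cols, frm1.rows ) );
Mat roi( result, Rect(0, 0, frm1.cols, frm1.rows) );
frm1.copyTo( roi );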
To reduce the running time, I changed it to compute features only on the (expected) overlap regions of the two images:
//-- Detect SURF keypoints only in the expected overlap regions
//   (right half of frm1, left 60% of frm2)
detector.detect( frm1(Rect(0.5*frm1.cols,0,0.5*frm1.cols,frm1.rows)), keypoints_1 );
detector.detect( frm2(Rect(0,0,0.6*frm2.cols,frm2.rows)), keypoints_2 );

//-- Calculate descriptors (feature vectors) on the same sub-images
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( frm1(Rect(0.5*frm1.cols,0,0.5*frm1.cols,frm1.rows)), keypoints_1, descriptors_1 );
extractor.compute( frm2(Rect(0,0,0.6*frm2.cols,frm2.rows)), keypoints_2, descriptors_2 );
With the code above, the computation time decreases significantly, but the result is bad: (I'll post the picture in the comment, please check "Bad Result")
I'm currently stuck and have no idea what to do next. I'd really appreciate any help. Thanks.