
I found a very useful example of image stitching, but my problem is with this type of images. Here is an example: First Image

and here is another image: Second Image

When I use the OpenCV Stitcher, the result image gets smaller, like this one: Small Result

Is there any method to apply a transform to the input images so that the result will look like this one? Desired Result

Here is the code:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include<opencv2/opencv.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/stitching/stitcher.hpp>
#include<vector>
using namespace cv;
using namespace std;
cv::vector<cv::Mat> ImagesList;
string result_name ="/TopViewsHorizantale/1.bmp";
int main()
{
      // Load the images

 Mat image1= imread("current_00000.bmp" );
 Mat image2= imread("current_00001.bmp" );
 cv::resize(image1, image1, image2.size());
 Mat gray_image1;
 Mat gray_image2;
 Mat Matrix = Mat(3,3,CV_32FC1);

 // Convert to Grayscale
 cvtColor( image1, gray_image1, CV_RGB2GRAY );
 cvtColor( image2, gray_image2, CV_RGB2GRAY );
 namedWindow("first image",WINDOW_AUTOSIZE);
 namedWindow("second image",WINDOW_AUTOSIZE);
 imshow("first image",image2);
 imshow("second image",image1);

if( !gray_image1.data || !gray_image2.data )
 { std::cout<< " --(!) Error reading images " << std::endl; return -1; }

//-- Step 1: Detect the keypoints using SURF Detector
 int minHessian = 400;

SurfFeatureDetector detector( minHessian );

std::vector< KeyPoint > keypoints_object, keypoints_scene;

detector.detect( gray_image1, keypoints_object );
detector.detect( gray_image2, keypoints_scene );

//-- Step 2: Calculate descriptors (feature vectors)
 SurfDescriptorExtractor extractor;

Mat descriptors_object, descriptors_scene;

extractor.compute( gray_image1, keypoints_object, descriptors_object );
extractor.compute( gray_image2, keypoints_scene, descriptors_scene );

//-- Step 3: Matching descriptor vectors using FLANN matcher
 FlannBasedMatcher matcher;
 std::vector< DMatch > matches;
 matcher.match( descriptors_object, descriptors_scene, matches );

double max_dist = 0; double min_dist = 100;

//-- Quick calculation of max and min distances between keypoints
 for( int i = 0; i < descriptors_object.rows; i++ )
 { double dist = matches[i].distance;
 if( dist < min_dist ) min_dist = dist;
 if( dist > max_dist ) max_dist = dist;
 }

printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );

//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
 std::vector< DMatch > good_matches;

for( int i = 0; i < descriptors_object.rows; i++ )
 { if( matches[i].distance < 3*min_dist )
 { good_matches.push_back( matches[i]); }
 }
 std::vector< Point2f > obj;
 std::vector< Point2f > scene;

for( int i = 0; i < good_matches.size(); i++ )
 {
 //-- Get the keypoints from the good matches
 obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
 scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
 }

// Find the Homography Matrix
 Mat H = findHomography( obj, scene, CV_RANSAC );
 // Use the Homography Matrix to warp the images
 cv::Mat result;
      int N = image1.rows + image2.rows;
 int M = image1.cols+image2.cols;
 warpPerspective(image1,result,H,cv::Size(N,M));
 cv::Mat half(result,cv::Rect(0,0,image2.rows,image2.cols));
 result.copyTo(half);
 namedWindow("Result",WINDOW_AUTOSIZE);
 imshow( "Result", result);

 imwrite(result_name, result);

 waitKey(0);
 return 0;
}

Also, here is a link to some images: https://www.dropbox.com/sh/ovzkqomxvzw8rww/AAB2DDCrCF6NlCFre7V1Gb6La?dl=0 Thank you so much, Lafi

  • Please explain your question in more detail. How is this video recorded? Why is the camera moving diagonally, etc.? – saurabheights Oct 09 '16 at 09:49
  • The camera can rotate and record video; it can be rotated at any angle, and we have to stitch all the images. – lafi raed Oct 09 '16 at 11:47
  • The problem is that when stitching those images, the result image gets smaller and smaller. – lafi raed Oct 09 '16 at 12:47
  • Do you have access to the homography matrix between the current frame and the previous frame? Are you using a stream of images and stitching the stitched frame and the current frame? Or are you feeding all images at once to the OpenCV Stitcher? – saurabheights Oct 09 '16 at 13:20
  • I can get the homography matrix by using findHomography between the current frame and the next frame. I have tested both the OpenCV Stitcher on all frames stored in a vector, and matching with SURF between the current frame and the next frame, then between the result of the previous frames and the new next frame. But the result image gets smaller when I use matching and then warpPerspective, and when using the OpenCV Stitcher the same detail was hidden. – lafi raed Oct 09 '16 at 13:38
  • There are 2 things I can think of. 1 - Highly unlikely, but OpenCV might be reducing the size to do stitching faster; there should be a way to override it. 2 - OpenCV is not making them smaller, but the pixels from the previous and next frame are stitched together and written over a very large black base image (if the original image is 640x480, the output image might be 3200x2400). – saurabheights Oct 09 '16 at 13:46
  • Also: If you can get the transformation matrix (without much extra computation), you will need to keep track of the transformation matrix to reach from the first frame to the last frame (inverse of the product of the transformation matrices between each frame t and t+1). Multiply the transformation matrix with the four corners of the image dimensions ({0,0}, {0,640}, {640,0}, {640,480}). Now you can get the ROI. If this doesn't solve the problem, please provide a video and some sample code to test on. – saurabheights Oct 09 '16 at 13:55
  • Keep track of the transformation matrix: I have H01 for frame 0 and frame 1, then H12 for frame 1 and frame 2, then H is H01 * H12, and then I multiply H by the four corners. And then what do I do? – lafi raed Oct 09 '16 at 13:55
  • Yes - Note that it's H10 and H21, to transform Frame 1 to Frame 0 and Frame 2 to Frame 1 respectively. – saurabheights Oct 09 '16 at 13:58
  • Okay, and then what? What do you mean, I will get the ROI? Please can you explain more? – lafi raed Oct 09 '16 at 14:04
  • Please post a small video and code to reproduce this problem. That will be easier for both of us. Second, ROI - Region of Interest. If you know where to crop in the third image posted above, you can display the ROI only. – saurabheights Oct 09 '16 at 14:10
  • I will post the code. – lafi raed Oct 09 '16 at 14:26

1 Answer


Problem: Output Image is too large.

Original code:

int N = image1.rows + image2.rows;
int M = image1.cols+image2.cols;
warpPerspective(image1,result,H,cv::Size(N,M)); // Too big size.
cv::Mat half(result,cv::Rect(0,0,image2.rows,image2.cols));
result.copyTo(half);
namedWindow("Result",WINDOW_AUTOSIZE);
imshow( "Result", result);

The generated result image allocates as many rows (and columns) as image1 and image2 combined. However, the output image only needs to be as large as the dimensions of image1 plus image2 minus the dimensions of the overlapping area.

Another problem: why are you warping image1? Compute H' (the inverse matrix of H) and warp image2 using H'. You should be registering image2 onto image1.
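
As a concrete illustration, here is a minimal C++ sketch of that idea. The function name and the assumption about which way H maps are mine, based on the question's findHomography(obj, scene) call, where obj comes from image1 and scene from image2:

#include <opencv2/opencv.hpp>

// Sketch: register image2 onto image1's coordinate frame.
// H is assumed to map image1 -> image2 (as computed in the question),
// so its inverse H' maps image2 -> image1.
cv::Mat registerSecondImage(const cv::Mat& image2, const cv::Mat& H, const cv::Size& outputSize)
{
    cv::Mat Hinv = H.inv();
    // Alternatively, swap the point sets: findHomography(scene, obj, CV_RANSAC)
    // gives the inverse directly, as the Java sample below does.
    cv::Mat registered;
    cv::warpPerspective(image2, registered, Hinv, outputSize);
    return registered;
}

How to choose outputSize is covered below.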

Also, study how warpPerspective works. It finds the ROI in the result to which image2 will be warped. Then, for each pixel (x, y) in this ROI of the result, it finds the corresponding location (x', y') in image2. Note: (x', y') can have real-valued coordinates, like (4.5, 5.4).

Some form of interpolation (bilinear, by default) is then used to compute the pixel value at (x, y) in the result image.
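
Concretely (this is the standard formulation, not something specific to this answer): if M is the matrix passed to warpPerspective, then for each result pixel (x, y) the source location is found from the inverse mapping

    [x', y', w']^T = M^-1 * [x, y, 1]^T,   and the position sampled in image2 is (x'/w', y'/w').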

Next, how do you find the size of the result matrix? Don't use N and M. Use the matrix H' to warp the four image corners and find where they end up.
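
For example, a minimal C++ sketch of this corner-warping idea (the function name and structure are mine, not from the answer; it assumes H' maps image2 into image1's frame and ignores negative coordinates for simplicity):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Sketch: warp the four corners of image2 with H' (image2 -> image1) and
// derive an output size that contains image1 plus the warped image2.
cv::Size computeOutputSize(const cv::Mat& Hinv, const cv::Size& size1, const cv::Size& size2)
{
    std::vector<cv::Point2f> corners = {
        {0.f, 0.f},
        {(float)size2.width, 0.f},
        {(float)size2.width, (float)size2.height},
        {0.f, (float)size2.height}
    };
    std::vector<cv::Point2f> warped;
    cv::perspectiveTransform(corners, warped, Hinv);   // divides by w internally

    float maxX = (float)size1.width;   // image1 already spans [0, size1.width)
    float maxY = (float)size1.height;  // and [0, size1.height)
    for (const cv::Point2f& p : warped) {
        maxX = std::max(maxX, p.x);
        maxY = std::max(maxY, p.y);
    }
    return cv::Size(cvCeil(maxX), cvCeil(maxY));
}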

For transformation matrices, see this wiki and http://planning.cs.uiuc.edu/node99.html. Know the difference between rotation, translation, affine, and perspective transformation matrices. Then read the OpenCV docs here.

You can also read an earlier answer of mine. That answer shows simple algebra to find a crop area. You need to adjust the code for the four corners of both images. Note that pixels of the new image can map to negative pixel locations as well.
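
For illustration, here is one way to handle those negative locations (my own sketch, not code from the linked answer): compose H' with a translation so the minimum warped coordinate lands at zero, then draw image1 at the same offset.

#include <opencv2/opencv.hpp>

// Sketch: shift the warp so negative coordinates fit inside the output image.
// minX/minY are the smallest x and y among image1's corners and image2's
// warped corners (<= 0); the names are illustrative.
cv::Mat shiftHomography(const cv::Mat& Hinv, double minX, double minY)
{
    cv::Mat T = (cv::Mat_<double>(3, 3) <<
                 1, 0, -minX,
                 0, 1, -minY,
                 0, 0, 1);
    return T * Hinv;   // warp image2 with this; draw image1 at offset (-minX, -minY)
}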

Sample code (in Java):

import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.DMatch;
import org.opencv.core.KeyPoint;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;
import org.opencv.features2d.Features2d;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class Driver {

    public static void stitchImages() {
        // Read as grayscale
        Mat grayImage1 = Imgcodecs.imread("current_00000.bmp", 0);
        Mat grayImage2 = Imgcodecs.imread("current_00001.bmp", 0);

        if (grayImage1.dataAddr() == 0 || grayImage2.dataAddr() == 0) {
            System.out.println("Images read unsuccessful.");
            return;
        }

        // -- Step 1: Detect the keypoints using the AKAZE detector
        MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
        MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
        FeatureDetector detector = FeatureDetector.create(FeatureDetector.AKAZE);
        detector.detect(grayImage1, keypoints1);
        detector.detect(grayImage2, keypoints2);

        // -- Step 2: Calculate descriptors (feature vectors)
        DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.AKAZE);
        Mat descriptors1 = new Mat();
        Mat descriptors2 = new Mat();
        extractor.compute(grayImage1, keypoints1, descriptors1);
        extractor.compute(grayImage2, keypoints2, descriptors2);

        // -- Step 3: Match the keypoints
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(descriptors1, descriptors2, matches);
        List<DMatch> myList = new LinkedList<>(matches.toList());

        // Filter good matches
        double min_dist = Double.MAX_VALUE;
        Iterator<DMatch> itr = myList.iterator();
        while (itr.hasNext()) {
            DMatch element = itr.next();
            min_dist = Math.min(element.distance, min_dist);
        }

        LinkedList<Point> img1GoodPointsList = new LinkedList<Point>();
        LinkedList<Point> img2GoodPointsList = new LinkedList<Point>();

        List<KeyPoint> keypoints1List = keypoints1.toList();
        List<KeyPoint> keypoints2List = keypoints2.toList();

        itr = myList.iterator();
        while (itr.hasNext()) {
            DMatch dMatch = itr.next();
            if (dMatch.distance < 5 * min_dist) {
                // Keep matches whose distance is close to the best (smallest) distance.
                img1GoodPointsList.addLast(keypoints1List.get(dMatch.queryIdx).pt);
                img2GoodPointsList.addLast(keypoints2List.get(dMatch.trainIdx).pt);
            } else {
                itr.remove();
            }
        }

        matches.fromList(myList);
        Mat outputMid = new Mat();
        System.out.println("best matches size: " + matches.size());
        Features2d.drawMatches(grayImage1, keypoints1, grayImage2, keypoints2, matches, outputMid);
        Imgcodecs.imwrite("outputMid - A - A.jpg", outputMid);

        MatOfPoint2f img1Locations = new MatOfPoint2f();
        img1Locations.fromList(img1GoodPointsList);

        MatOfPoint2f img2Locations = new MatOfPoint2f();
        img2Locations.fromList(img2GoodPointsList);

        // Find the Homography Matrix - Note img2Locations is give first to get
        // inverse directly.
        Mat hg = Calib3d.findHomography(img2Locations, img1Locations, Calib3d.RANSAC, 3);
        System.out.println("hg is: " + hg.dump());

        // Find the location of two corners to which Image2 will warp.
        Size img1Size = grayImage1.size();
        Size img2Size = grayImage2.size();
        System.out.println("Sizes are: " + img1Size + ", " + img2Size);

        // Store location x,y,z for 4 corners
        Mat img2Corners = new Mat(3, 4, CvType.CV_64FC1, new Scalar(0));
        Mat img2CornersWarped = new Mat(3, 4, CvType.CV_64FC1);

        img2Corners.put(0, 0, 0, img2Size.width, 0, img2Size.width);   // x
        img2Corners.put(1, 0, 0, 0, img2Size.height, img2Size.height); // y
        img2Corners.put(2, 0, 1, 1, 1, 1); // z - all 1

        System.out.println("Homography is \n" + hg.dump());
        System.out.println("Corners matrix is \n" + img2Corners.dump());
        Core.gemm(hg, img2Corners, 1, new Mat(), 0, img2CornersWarped);
        // Normalize homogeneous coordinates: divide x and y by w for each corner.
        for (int c = 0; c < 4; c++) {
            double w = img2CornersWarped.get(2, c)[0];
            img2CornersWarped.put(0, c, img2CornersWarped.get(0, c)[0] / w);
            img2CornersWarped.put(1, c, img2CornersWarped.get(1, c)[0] / w);
        }
        System.out.println("img2CornersWarped: " + img2CornersWarped.dump());

        // Find the new size to use
        int minX = 0, minY = 0;       // grayImage1 already has its minimum location at (0, 0)
        int maxX = 1500, maxY = 1500; // grayImage1 already extends to 1500 (possibly 1499, but one pixel won't matter)
        double[] xCoordinates = new double[4];
        img2CornersWarped.get(0, 0, xCoordinates);
        double[] yCoordinates = new double[4];
        img2CornersWarped.get(1, 0, yCoordinates);
        for (int c = 0; c < 4; c++) {
            minX = Math.min((int) xCoordinates[c], minX);
            maxX = Math.max((int) xCoordinates[c], maxX);
            minY = Math.min((int) yCoordinates[c], minY);
            maxY = Math.max((int) yCoordinates[c], maxY);
        }
        int rows = (maxY - minY + 1);
        int cols = (maxX - minX + 1);

        // Warp to produce the final output
        Mat output1 = new Mat(new Size(cols, rows), CvType.CV_8U, new Scalar(0));
        Mat output2 = new Mat(new Size(cols, rows), CvType.CV_8U, new Scalar(0));
        Imgproc.warpPerspective(grayImage1, output1, Mat.eye(new Size(3, 3), CvType.CV_32F), new Size(cols, rows));
        Imgproc.warpPerspective(grayImage2, output2, hg, new Size(cols, rows));
        Mat output = new Mat(new Size(cols, rows), CvType.CV_8U);
        Core.addWeighted(output1, 0.5, output2, 0.5, 0, output);
        Imgcodecs.imwrite("output.jpg", output);
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        stitchImages();
    }
}

Change Descriptor

Move from SURF to AKAZE. I have seen perfect image registration just from this change.
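
Since you mentioned converting the sample back to C++, the AKAZE detect/describe/match step in OpenCV 3 C++ might look roughly like this (a sketch; the function and variable names are mine):

#include <opencv2/opencv.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Sketch of AKAZE detection, description and matching in C++ (OpenCV 3).
void detectAndMatchAkaze(const cv::Mat& gray1, const cv::Mat& gray2,
                         std::vector<cv::KeyPoint>& kp1, std::vector<cv::KeyPoint>& kp2,
                         std::vector<cv::DMatch>& matches)
{
    cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create();
    cv::Mat desc1, desc2;
    akaze->detectAndCompute(gray1, cv::noArray(), kp1, desc1);
    akaze->detectAndCompute(gray2, cv::noArray(), kp2, desc2);

    // AKAZE's default MLDB descriptor is binary, so Hamming distance is the usual choice.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    matcher.match(desc1, desc2, matches);
}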

Output Image

This output uses less space, and the change of descriptor shows perfect registration.

OutputImage

P.S.: IMHO, coding is awesome, but the real treasure is the fundamental knowledge/concepts.

saurabheights
  • I cannot right away. Currently on mac and Xcode is giving some unknown error. Also, you haven't provided a video, just a few frames. Also, your code uses current_00000.bmp, which is not in the GoogleDrive folder you provided. – saurabheights Oct 09 '16 at 18:33
  • Here is the other drive link: https://www.dropbox.com/sh/270yb3heeqjv8sk/AADqtRceyjKmwkY1uSP5Iu5ka?dl=0 I just have frames; I don't have the original video. – lafi raed Oct 09 '16 at 21:00
  • @lafiraed: I won't be able to provide code quickly. The whole code is giving a bunch of errors on Mac OS X 10.12. – saurabheights Oct 09 '16 at 21:29
  • Okay, but please adjust the path to your images; it's not the same as mine. Thank you. – lafi raed Oct 09 '16 at 21:34
  • @lafiraed: There is one problem. Can you warp image1 to output1, copy image2 to output2, then use addWeighted(output1, 0.5, output2, 0.5, 0, output) to see if the warped image1 fits perfectly over image2? – saurabheights Oct 11 '16 at 21:26
  • So I warp image1 to output using H or H'? – lafi raed Oct 11 '16 at 22:06
  • Also, output1 and output2 should just be a Mat, right? – lafi raed Oct 11 '16 at 22:08
  • Here is the resulting output image: https://drive.google.com/file/d/0B1D_FX2T2QR7RTl0MmlLOU1abzQ/view?usp=sharing – lafi raed Oct 11 '16 at 22:12
  • And here is the code: Mat H = findHomography( obj, scene, CV_RANSAC ); invert(H, h); // Use the Homography Matrix to warp the images cv::Mat result, output1, output2, output; warpPerspective(image1, output1, H, cv::Size(1500,1500)); image2.copyTo(output2); addWeighted(output1, 0.5, output2, 0.5, 0, output); – lafi raed Oct 11 '16 at 22:13
  • Your registered image looks good. Since I don't have SURF (patented) in OpenCV 3, I had to move to ORB, which didn't give a good image registration result. – saurabheights Oct 11 '16 at 23:13
  • @lafiraed: Check the updated answer. First move from SURF to AKAZE and see the improvement over the image you uploaded a few hours ago. Then make changes accordingly for reducing the output image size. Also, OpenCV & C++ had a fight on my Mac, so I talked to Java, who is a good friend of OpenCV, and asked him to get this problem solved. Thus the sample code is in Java. Also, check the wiki link I added from the edit history of the answer. – saurabheights Oct 12 '16 at 00:27
  • I'm converting it to C++; I will let you know the result. – lafi raed Oct 12 '16 at 15:27
  • Might be worth/interesting looking into other implementations of SURF (like Pan-o-Matic if you like C++ or BoofCV in Java) and comparing them to AKAZE. I did a performance study of several implementations a bit ago. OpenCV was about 18% worse than the reference for descriptor stability. Filed a bug report back then too. [Website](http://boofcv.org/index.php?title=Performance:SURF). – lessthanoptimal Jan 15 '17 at 18:42
  • @PeterAbeles: Thank you for the informative link. Please, can you provide more information on the filed bug (I think you meant a bug filed against OpenCV)? Its link should suffice. Again, my sincere gratitude. – saurabheights Jan 15 '17 at 19:34
  • @saurabheights submitted a bug report would be a clearer way to say that. Tried to find the bug report itself but instead found a conversation I had with an OpenCV maintainer. One problem I found was that it was flooring instead of rounding but had trouble tracking down other issues quickly. I also see that I never submitted a patch like I said I would for at least the one issue I found :p – lessthanoptimal Jan 16 '17 at 02:26
  • Found the [bug report](http://code.opencv.org/issues/2640). It's for a bug that was introduced in 2.4 which made the performance worse. I might have used the 2.3 results which were better under the assumption that someone would fix it. – lessthanoptimal Jan 16 '17 at 02:30
  • @PeterAbeles: Surf/SIFT are one of my top favorite techniques, and your work is a real treasure for me. Thank you so much. It will take me a few days to go through your paper, OpenCV bug fixes, but this will be a good learning. :) – saurabheights Jan 16 '17 at 09:29