
[Image: output of the code]

As shown above, I have keypoints detected on the images, but the output after warpPerspective leaves out the first image on the left side, and I cannot figure out why!

    import numpy as np
    import imutils
    import cv2

    class Stitcher:
        def __init__(self):
            # determine if we are using OpenCV v3.X
            self.isv3 = imutils.is_cv3()

        def stitch(self, imageA, imageB, ratio=0.75, reprojThresh=10.0,
                   showMatches=False):
            # detect keypoints and extract local invariant
            # descriptors from the two images
            (kpsA, featuresA) = self.detectAndDescribe(imageA)
            (kpsB, featuresB) = self.detectAndDescribe(imageB)

            # match features between the two images
            M = self.matchKeypoints(kpsA, kpsB,
                featuresA, featuresB, ratio, reprojThresh)

            # if the match is None, then there aren't enough matched
            # keypoints to create a panorama
            if M is None:
                return None

            # otherwise, apply a perspective warp to stitch the images
            # together
            (matches, H, status) = M
            result = cv2.warpPerspective(imageA, H,
                (imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
            result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

            # check to see if the keypoint matches should be visualized
            if showMatches:
                vis = self.drawMatches(imageA, imageB, kpsA, kpsB, matches,
                    status)

                # return a tuple of the stitched image and the
                # visualization
                return (result, vis)

            # return the stitched image
            return result

        def detectAndDescribe(self, image):
            # convert the image to grayscale
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

            # check to see if we are using OpenCV 3.X
            if self.isv3:
                # detect and extract features from the image (SIFT)
                descriptor = cv2.xfeatures2d.SIFT_create()
                # SURF alternative (400 is the Hessian threshold; optimum
                # values are usually around 300-500):
                # descriptor = cv2.xfeatures2d.SURF_create(400)
                # upright SURF is faster and can be used for panorama
                # stitching, i.e. our case:
                # descriptor.upright = True
                print(descriptor.descriptorSize())
                (kps, features) = descriptor.detectAndCompute(image, None)
                print(len(kps), features.shape)

            # otherwise, we are using OpenCV 2.4.X
            else:
                # detect keypoints in the image
                detector = cv2.FeatureDetector_create("SIFT")
                kps = detector.detect(gray)

                # extract features from the image
                extractor = cv2.DescriptorExtractor_create("SIFT")
                (kps, features) = extractor.compute(gray, kps)

            # convert the keypoints from KeyPoint objects to a NumPy array
            kps = np.float32([kp.pt for kp in kps])

            # return a tuple of keypoints and features
            return (kps, features)

        def matchKeypoints(self, kpsA, kpsB, featuresA, featuresB,
                           ratio, reprojThresh):
            # compute the raw matches and initialize the list of actual
            # matches
            matcher = cv2.DescriptorMatcher_create("BruteForce")
            rawMatches = matcher.knnMatch(featuresA, featuresB, 2)
            matches = []

            # loop over the raw matches
            for m in rawMatches:
                # ensure the distance is within a certain ratio of each
                # other (i.e. Lowe's ratio test)
                if len(m) == 2 and m[0].distance < m[1].distance * ratio:
                    matches.append((m[0].trainIdx, m[0].queryIdx))
            print(len(matches))

            # computing a homography requires at least 4 matches
            if len(matches) > 4:
                # construct the two sets of points
                ptsA = np.float32([kpsA[i] for (_, i) in matches])
                ptsB = np.float32([kpsB[i] for (i, _) in matches])

                # compute the homography between the two sets of points
                (H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC,
                    reprojThresh)

                # return the matches along with the homography matrix
                # and status of each matched point
                return (matches, H, status)

            # otherwise, no homography could be computed
            return None

        def drawMatches(self, imageA, imageB, kpsA, kpsB, matches, status):
            # initialize the output visualization image
            (hA, wA) = imageA.shape[:2]
            (hB, wB) = imageB.shape[:2]
            vis = np.zeros((max(hA, hB), wA + wB, 3), dtype="uint8")
            vis[0:hA, 0:wA] = imageA
            vis[0:hB, wA:] = imageB

            # loop over the matches
            for ((trainIdx, queryIdx), s) in zip(matches, status):
                # only process the match if the keypoint was successfully
                # matched
                if s == 1:
                    # draw the match
                    ptA = (int(kpsA[queryIdx][0]), int(kpsA[queryIdx][1]))
                    ptB = (int(kpsB[trainIdx][0]) + wA, int(kpsB[trainIdx][1]))
                    cv2.line(vis, ptA, ptB, (0, 255, 0), 1)

            # return the visualization
            return vis
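A minimal driver for the class above looks roughly like this (the image paths are just placeholders):

    # load the two input images and resize them to a manageable width
    imageA = cv2.imread("left.jpg")
    imageB = cv2.imread("right.jpg")
    imageA = imutils.resize(imageA, width=400)
    imageB = imutils.resize(imageB, width=400)

    # stitch the images together and also get the match visualization
    stitcher = Stitcher()
    ret = stitcher.stitch(imageA, imageB, showMatches=True)
    if ret is None:
        raise RuntimeError("not enough matched keypoints to stitch")
    (result, vis) = ret

    # show the matched keypoints and the stitched output
    cv2.imshow("Keypoint Matches", vis)
    cv2.imshow("Result", result)
    cv2.waitKey(0)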

Above is the code used for keypoint detection and stitching.

One more question: can someone help me with vertical image stitching, other than rotating the images and performing horizontal stitching?

Thanks a lot!


Edit: I changed my code and used @Alexander's padtransf.warpPerspectivePadded function to perform the warping and blending. Can you help me with getting the lighting uniform in the output image?

Anand Sonawane
  • This is a little confusing since the images are vertical. Are you trying to stitch these vertically together? Or can you rotate them and stitch them horizontally? – alkasm Sep 12 '17 at 12:54
  • In the code I am trying to stitch them horizontally. What I did was first rotate them, and now I am trying the horizontal approach. – Anand Sonawane Sep 12 '17 at 13:57
  • Are you getting only imageB, with black on the right side? – I.Newton Sep 12 '17 at 14:01
  • @AlexanderReynolds Can you please take a look at the question? I edited it a bit. – Anand Sonawane Sep 13 '17 at 11:00
  • @AnandSonawane I would suggest creating a new question for that and editing your post again to keep it down to the original question. Stack Overflow is really well suited for a single-question-and-answer format. But just to give some insight, you'll note my function warps images into blank/black images. You may want to either edit the functions to warp onto existing images, *or*, create a mask and use that to layer the images. – alkasm Sep 13 '17 at 20:30

1 Answer


I had this issue myself. If I am not mistaken, you are using this blog as a reference.

The issue is with warpPerspective, in particular these lines:

    result = cv2.warpPerspective(imageA, H,
        (imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
    result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

The problem is that this approach only works in one direction: it warps imageA onto a canvas that starts at (0, 0) and extends only to the right and down, then pastes imageB over the top-left region based on the width and height given by .shape[1] and .shape[0]. Anything in imageA that the homography maps to negative coordinates (to the left of, or above, the origin) is simply cropped away, which is why your left image disappears. I solved this in C++, so the snippets below are C++, but I can give you a rundown of what must be done.
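To see this for yourself, here is a minimal Python sketch (assuming imageA and the homography H computed by your matchKeypoints; this is only a diagnostic, not the fix) that checks where imageA's corners land after warping:

    import numpy as np
    import cv2

    # four corners of imageA in (x, y) order, shaped (4, 1, 2) as OpenCV expects
    h, w = imageA.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)

    # map the corners through the homography
    warped_corners = cv2.perspectiveTransform(corners, H)
    print(warped_corners.reshape(-1, 2))

    # any negative x or y printed here is content that cv2.warpPerspective
    # silently drops, because its output canvas always starts at (0, 0)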

  1. Get the four corners of each of the images you are using.

  2. Get the min and max corners for each image found in step 1.

  3. Create a Mat "Htr" to be used to map image one into line with the already warped image two. Htr.at<double>(0,2) is the x-translation entry of the 3x3 matrix (in Python, NumPy is probably what you need here; see the Python sketch after these steps).

         Mat Htr = Mat::eye(3,3,CV_64F);
         if (min_x < 0){
             max_x = image2.size().width - min_x;
             Htr.at<double>(0,2) = -min_x;
         }
         if (min_y < 0){
             max_y = image2.size().height - min_y;
             Htr.at<double>(1,2) = -min_y;
         }

  4. Perform a perspective transform on the four corners of each image to see where they will end up in space (fourPointImage1/2 and image1dst/image2dst are vector<Point2f>):

         perspectiveTransform(fourPointImage1, image1dst, Htr * homography);
         perspectiveTransform(fourPointImage2, image2dst, Htr);

  5. Get the min and max values from the four corners in image1dst and image2dst.

  6. Use those min and max values to create a new blank image of the correct size to hold the final stitched result.

  7. Repeat the step 3 process, this time to determine the translation needed to move the four corners of each image into the virtual space of the blank image.

  8. Finally, warp the actual images in with all the homographies you have found/made:

         warpPerspective(image1, blankImage, (translation * homography), result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));
         warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));
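Putting those steps together in Python/NumPy, a rough sketch of the same idea (assuming imageA, imageB and H come straight from your stitch method; imageB simply overwrites the overlap, no blending):

    import numpy as np
    import cv2

    hA, wA = imageA.shape[:2]
    hB, wB = imageB.shape[:2]

    # corners of both images; push imageA's corners through the homography
    cornersA = np.float32([[0, 0], [wA, 0], [wA, hA], [0, hA]]).reshape(-1, 1, 2)
    cornersB = np.float32([[0, 0], [wB, 0], [wB, hB], [0, hB]]).reshape(-1, 1, 2)
    warpedA = cv2.perspectiveTransform(cornersA, H)

    # bounding box of everything we want to keep
    all_corners = np.concatenate((warpedA, cornersB), axis=0)
    x_min, y_min = np.floor(all_corners.min(axis=0).ravel()).astype(int)
    x_max, y_max = np.ceil(all_corners.max(axis=0).ravel()).astype(int)

    # translation that shifts any negative coordinates back into view
    # (this plays the role of Htr / "translation" in the C++ snippets above)
    T = np.array([[1, 0, -x_min],
                  [0, 1, -y_min],
                  [0, 0, 1]], dtype=np.float64)

    # warp imageA with the combined transform into the padded canvas
    size = (x_max - x_min, y_max - y_min)
    result = cv2.warpPerspective(imageA, T.dot(H), size)

    # paste imageB at its translated location
    result[-y_min:hB - y_min, -x_min:wB - x_min] = imageB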

The end goal is to determine where the images will be warped to, so you can create a blank image large enough to hold the entire stitched result with nothing cropped out. Only once all this pre-processing is done do you actually stitch the images together. I hope this helps; if you have questions, just holler.

C.Radford
  • +1. For a deep dive with the related Python code, see my answer [here](https://stackoverflow.com/a/44459869/5087436). If you're interested, I created [Python](https://github.com/alkasm/padded-transformations) and [C++](https://github.com/alkasm/padded-transformations-cpp) modules for these padded functions. Also someone trying to create panoramas once asked a series of related questions that you might be interested in for the next steps: [1](https://stackoverflow.com/q/45162021/5087436), [2](https://stackoverflow.com/q/45315541/5087436), [3](https://stackoverflow.com/q/45453306/5087436). – alkasm Sep 12 '17 at 19:09
  • I thought your name looked familiar! Cool to see you go from asking a similar question a few months ago to providing a great answer! By the way, I did end up writing the C++ module out for this function. I know you did the same, I'd be interested if you took a look at yours and mine for comparison and if you have any suggestions, leave them at GitHub! – alkasm Sep 12 '17 at 19:21
  • 1
    @AlexanderReynolds Its a great feeling to be able on the other end of stack and answer questions for the community. I will take a look at your module as soon as I can. – C.Radford Sep 14 '17 at 13:28