
I am using OpenCV and want to stick with it.

I have 5 images that share some common areas pairwise, and I want to merge them into a single image. I have successfully joined two images, since they were of the same resolution (a small tweak brought them to the same resolution without distorting the contents significantly). But this first stage of merging gives me a highly inflated image: the resolution goes up significantly, roughly the sum of the two input sizes. To merge the first two images I brought their resolutions to the same value, and that didn't cause much distortion. But now I have an image of about double the width. If I resize it down to the level of the next image in line for stitching, it will badly distort the content of the first stage, and hence every result from there on. How do I fix this, given that I need to go through 5-6 iterations of stitching where the resolution keeps increasing? Also, is there a text that goes into the details of image processing like this, with examples?
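To make the iteration concrete, the loop I have in mind is roughly this (a sketch only, not my exact code; Stitcher is the class listed below, and each stage's panorama is fed back in as one half of the next pair):

# sketch of the pairwise scheme: the running panorama is re-stitched
# with each remaining image, so its width roughly doubles per stage
panorama = images[0]
for nxt in images[1:]:
    panorama = stitcher.stitch([nxt, panorama])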

Stitcher.py

# -*- coding: utf-8 -*-
"""
Spyder Editor

This is a temporary script file.
"""
# import the necessary packages
import numpy as np
import imutils
import cv2

class Stitcher:
    def __init__(self):
        # determine if we are using OpenCV v3.X
        self.isv3 = imutils.is_cv3()

    def stitch(self, images, ratio=0.75, reprojThresh=4.0,
        showMatches=False):
        # unpack the images, then detect keypoints and extract
        # local invariant descriptors from them
        (imageB, imageA) = images


        #(b, g, r) = cv2.split(imageA)
        #imageA = cv2.merge([r,g,b])
        #(b, g, r) = cv2.split(imageB)
        #imageB = cv2.merge([r,g,b])

        (kpsA, featuresA) = self.detectAndDescribe(imageA)
        (kpsB, featuresB) = self.detectAndDescribe(imageB)

        # match features between the two images
        M = self.matchKeypoints(kpsA, kpsB,
            featuresA, featuresB, ratio, reprojThresh)

        # if the match is None, then there aren't enough matched
        # keypoints to create a panorama
        if M is None:
            return None

        # otherwise, apply a perspective warp to stitch the images
        # together; the output canvas is as wide as both inputs combined
        # (numpy images expose their dimensions via .shape, not .size)
        (matches, H, status) = M
        result = cv2.warpPerspective(imageA, H,
            (imageA.shape[1] + imageB.shape[1], imageA.shape[0]))

        result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

        # check to see if the keypoint matches should be visualized
        if showMatches:
            vis = self.drawMatches(imageA, imageB, kpsA, kpsB, matches,
                status)

            # return a tuple of the stitched image and the
            # visualization
            return (result, vis)

        # return the stitched image
        return result

    def detectAndDescribe(self, image):
        # convert the image to grayscale
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # check to see if we are using OpenCV 3.X
        if self.isv3:
            # detect and extract features from the image
            descriptor = cv2.xfeatures2d.SIFT_create()
            (kps, features) = descriptor.detectAndCompute(gray, None)

        # otherwise, we are using OpenCV 2.4.X
        else:
            # detect keypoints in the image
            detector = cv2.FeatureDetector_create("SIFT")
            kps = detector.detect(gray)

            # extract features from the image
            extractor = cv2.DescriptorExtractor_create("SIFT")
            (kps, features) = extractor.compute(gray, kps)

        # convert the keypoints from KeyPoint objects to NumPy
        # arrays
        kps = np.float32([kp.pt for kp in kps])

        # return a tuple of keypoints and features
        return (kps, features)

    def matchKeypoints(self, kpsA, kpsB, featuresA, featuresB,
        ratio, reprojThresh):
        # compute the raw matches and initialize the list of actual
        # matches
        matcher = cv2.DescriptorMatcher_create("BruteForce")
        rawMatches = matcher.knnMatch(featuresA, featuresB, 2)
        matches = []

        # loop over the raw matches
        for m in rawMatches:
            # ensure the distance is within a certain ratio of each
            # other (i.e. Lowe's ratio test)
            if len(m) == 2 and m[0].distance < m[1].distance * ratio:
                matches.append((m[0].trainIdx, m[0].queryIdx))
        # computing a homography requires at least 4 matches
        if len(matches) >= 4:
            # construct the two sets of points
            ptsA = np.float32([kpsA[i] for (_, i) in matches])
            ptsB = np.float32([kpsB[i] for (i, _) in matches])

            # compute the homography between the two sets of points
            (H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC,
                reprojThresh)

            # return the matches along with the homography matrix
            # and status of each matched point
            return (matches, H, status)

        # otherwise, no homography could be computed
        return None

    def drawMatches(self, imageA, imageB, kpsA, kpsB, matches, status):
        # initialize the output visualization image
        (hA, wA) = imageA.shape[:2]
        (hB, wB) = imageB.shape[:2]
        vis = np.zeros((max(hA, hB), wA + wB, 3), dtype="uint8")
        vis[0:hA, 0:wA] = imageA
        vis[0:hB, wA:] = imageB

        # loop over the matches
        for ((trainIdx, queryIdx), s) in zip(matches, status):
            # only process the match if the keypoint was successfully
            # matched
            if s == 1:
                # draw the match
                ptA = (int(kpsA[queryIdx][0]), int(kpsA[queryIdx][1]))
                ptB = (int(kpsB[trainIdx][0]) + wA, int(kpsB[trainIdx][1]))
                cv2.line(vis, ptA, ptB, (0, 255, 0), 1)

        # return the visualization
        return vis  

run.py

# -*- coding: utf-8 -*-
"""
Created on Mon Dec 18 11:13:23 2017

@author: user
"""
# import the necessary packages
import os
os.chdir('/home/user/Desktop/stitcher')


from Stitcher import Stitcher
import argparse
import imutils
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--first", required=True,
    help="path to the first image")
ap.add_argument("-s", "--second", required=True,
    help="path to the second image")
args = vars(ap.parse_args())
# load the two images and bring them to a common resolution

#from PIL import Image
#imageA = Image.open(args['first']).convert('RGB')
#imageB = Image.open(args['second']).convert('RGB')

imageA = cv2.imread(args["first"])
imageB = cv2.imread(args["second"])

#imageA = imutils.resize(imageA, width=400)
#imageB = imutils.resize(imageB, width=400)

imageA = cv2.resize(imageA, (2464, 832))  # hardcoded (width, height)
imageB = cv2.resize(imageB, (2464, 832))  # hardcoded (width, height)

# stitch the images together to create a panorama
stitcher = Stitcher()
(result, vis) = stitcher.stitch([imageA, imageB], showMatches=True)
cv2.imwrite('stage1.png',result) 


# show the images
cv2.imshow("Image A", imageA)
cv2.imshow("Image B", imageB)
cv2.imshow("Keypoint Matches", vis)
cv2.imshow("Result", result)
cv2.waitKey(0)

As you can see, I have resized the images so that they have the same height and width, using hardcoded values. I could instead have taken the minimum of the two sizes and used that as the common width and height.

When I bring in the third image, I can't inflate it to match the resolution of the stage-1 result, nor can I shrink the stage-1 result to match the third image without distorting it.
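The only non-destructive shrink I can think of is cropping away the unused black canvas that warpPerspective leaves around the stage-1 result, something like this (an untested sketch):

import numpy as np
import cv2

def crop_to_content(stitched):
    # keep only the bounding box of the non-black pixels, so the
    # canvas stops carrying the empty area added by warpPerspective
    gray = cv2.cvtColor(stitched, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > 0)
    return stitched[ys.min():ys.max() + 1, xs.min():xs.max() + 1]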

P.S.: imutils didn't give me a way to choose both width and height.
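That is why I fell back on cv2.resize, whose dsize argument takes both at once. Picking the minimum of the two input sizes instead of hardcoding would look roughly like this:

# resize both inputs to the smaller of the two sizes instead of
# hardcoding 2464x832; cv2.resize expects dsize as (width, height)
h = min(imageA.shape[0], imageB.shape[0])
w = min(imageA.shape[1], imageB.shape[1])
imageA = cv2.resize(imageA, (w, h))
imageB = cv2.resize(imageB, (w, h))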

  • What about https://docs.opencv.org/2.4/modules/stitching/doc/high_level.html ? – Dmitrii Z. Dec 18 '17 at 22:06
  • I am not really versed with C++. Is there an equivalent Python reference for stitching in OpenCV? – piepi Dec 18 '17 at 22:51
  • I mean, if there was a way to put "opencv python stitching" into Google... – Dmitrii Z. Dec 18 '17 at 22:54
  • I did google around and most of the references are for C++. https://www.pyimagesearch.com/2016/01/11/opencv-panorama-stitching/ is what I found and where I started from. The above two scripts are from that page. It works for two images but fails on the next iteration. Come to think of it, I could make the script work for an even number of images, but an odd number of images would still be an issue. – piepi Dec 18 '17 at 22:56
  • I don't know man, I've put 3 words into Google and that is the second link: https://stackoverflow.com/questions/34362922/how-to-use-opencv-stitcher-class-with-python – Dmitrii Z. Dec 18 '17 at 23:30
  • I don't understand what you mean by an "inflated" image. Can you post what your output looks like? What I imagine is happening is that you're taking images that are not heads on/planar but that are shifted with some perspective distortion, and this means the matched image will need to scale to fit into the scene. If that's correct, then what exactly is the output you expect? See for e.g. [here](https://stackoverflow.com/q/45162021/5087436) and [here](https://stackoverflow.com/q/45315541/5087436). – alkasm Dec 19 '17 at 05:10
  • By 'inflated' I mean that each iteration produces an output with the combined resolution of its inputs, while the individual images are always of the same resolution. It becomes an issue when you append an individual third image to the combined image, from the second iteration onwards to be precise. – piepi Dec 19 '17 at 05:19
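For reference, the high-level stitching API linked in the comments above is exposed in Python as well. A minimal sketch, assuming OpenCV 3.x (which spells the constructor cv2.createStitcher; OpenCV 4.x renames it to cv2.Stitcher_create) and placeholder file names:

import cv2

# high-level Stitcher API: it handles ordering, seam blending and
# canvas sizing for all 5 images in one call
paths = ["img1.png", "img2.png", "img3.png", "img4.png", "img5.png"]
images = [cv2.imread(p) for p in paths]

stitcher = cv2.createStitcher()  # cv2.Stitcher_create() on OpenCV 4.x
(status, pano) = stitcher.stitch(images)

if status == 0:  # 0 == Stitcher::OK
    cv2.imwrite("panorama.png", pano)
else:
    print("stitching failed, status code:", status)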
