
I'm building a "circle view" (bird's-eye view) system for a long truck. Because the truck is long, one camera per side is not enough, so I decided to mount two cameras and stitch their images together. The problem is that the stitching is not quite even and the seam is visible. How can I change the code so that the seam is less visible and the images are stitched better? There is a 4x6 chessboard standing midway between the two cameras. Maybe there is a way to stitch using the chessboard? Here's my stitching result: enter image description here

Here are the two images to be stitched:

Image 1: enter image description here

Image 2: enter image description here

The code I have now:

import cv2 as cv
import numpy as np


def FindHomography(Matches, BaseImage_kp, SecImage_kp):
    # If less than 4 matches found, exit the code.
    if len(Matches) < 4:
        print("\nNot enough matches found between the images.\n")
        exit(0)

    # Storing coordinates of points corresponding to the matches found in both the images
    BaseImage_pts = []
    SecImage_pts = []
    for Match in Matches:
        BaseImage_pts.append(BaseImage_kp[Match[0].queryIdx].pt)
        SecImage_pts.append(SecImage_kp[Match[0].trainIdx].pt)

    # Changing the datatype to "float32" for finding homography
    BaseImage_pts = np.float32(BaseImage_pts)
    SecImage_pts = np.float32(SecImage_pts)

    # Finding the homography matrix(transformation matrix).
    (HomographyMatrix, Status) = cv.findHomography(SecImage_pts, BaseImage_pts, cv.RANSAC, 4.0)

    return HomographyMatrix, Status


def GetNewFrameSizeAndMatrix(HomographyMatrix, Sec_ImageShape, Base_ImageShape):
    # Reading the size of the image
    (Height, Width) = Sec_ImageShape

    # Taking the matrix of initial coordinates of the corners of the secondary image
    # Stored in the following format: [[x1, x2, x3, x4], [y1, y2, y3, y4], [1, 1, 1, 1]]
    # Where (xi, yi) is the coordinate of the i-th corner of the image.
    InitialMatrix = np.array([[0, Width - 1, Width - 1, 0],
                              [0, 0, Height - 1, Height - 1],
                              [1, 1, 1, 1]])

    # Finding the final coordinates of the corners of the image after transformation.
    # NOTE: Here, the coordinates of the corners of the frame may go out of the
    # frame(negative values). We will correct this afterwards by updating the
    # homography matrix accordingly.
    FinalMatrix = np.dot(HomographyMatrix, InitialMatrix)

    [x, y, c] = FinalMatrix
    x = np.divide(x, c)
    y = np.divide(y, c)

    # Finding the dimensions of the stitched image frame and the "Correction" factor
    min_x, max_x = int(round(min(x))), int(round(max(x)))
    min_y, max_y = int(round(min(y))), int(round(max(y)))

    New_Width = max_x
    New_Height = max_y
    Correction = [0, 0]
    if min_x < 0:
        New_Width -= min_x
        Correction[0] = abs(min_x)
    if min_y < 0:
        New_Height -= min_y
        Correction[1] = abs(min_y)

    # Again correcting New_Width and New_Height
    # Helpful when the secondary image overlaps the left-hand side of the base image.
    if New_Width < Base_ImageShape[1] + Correction[0]:
        New_Width = Base_ImageShape[1] + Correction[0]
    if New_Height < Base_ImageShape[0] + Correction[1]:
        New_Height = Base_ImageShape[0] + Correction[1]

    # Finding the coordinates of the corners of the image if they all were within the frame.
    x = np.add(x, Correction[0])
    y = np.add(y, Correction[1])
    OldInitialPoints = np.float32([[0, 0],
                                   [Width - 1, 0],
                                   [Width - 1, Height - 1],
                                   [0, Height - 1]])
    NewFinalPoints = np.float32(np.array([x, y]).transpose())

    # Updating the homography matrix. Done so that now the secondary image completely
    # lies inside the frame
    HomographyMatrix = cv.getPerspectiveTransform(OldInitialPoints, NewFinalPoints)

    return [New_Height, New_Width], Correction, HomographyMatrix


ratio_thresh = 0.9

image1 = cv.imread(filename='/home/msi-user/PycharmProjects/170Camera/1_camera.jpg')

image2 = cv.imread(filename='/home/msi-user/PycharmProjects/170Camera/2_camera.jpg')



# -----------------------------------------KAZE--------------------------------#

detector = cv.KAZE_create()  # alternatives: AKAZE, ORB, BRISK, xfeatures2d.SURF

keypoints1, descriptors1 = detector.detectAndCompute(image1, None)
keypoints2, descriptors2 = detector.detectAndCompute(image2, None)

FLANN_INDEX_KDTREE = 1

index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)

descriptors1 = np.float32(descriptors1)
descriptors2 = np.float32(descriptors2)

FLANN = cv.FlannBasedMatcher(indexParams=index_params,
                             searchParams=search_params)

matches = FLANN.knnMatch(queryDescriptors=descriptors1,
                         trainDescriptors=descriptors2,
                         k=2)

good_matches = []
t = []
for m, n in matches:
    if m.distance < ratio_thresh * n.distance:
        good_matches.append([m])
        t.append(m)

output = cv.drawMatches(img1=image1,
                        keypoints1=keypoints1,
                        img2=image2,
                        keypoints2=keypoints2,
                        matches1to2=t,
                        outImg=None,
                        flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

cv.namedWindow("drawMatches.jpg", cv.WINDOW_NORMAL)
cv.imshow("drawMatches.jpg", output)

# ----------------------------------FindHomography-------------------------------------------#

HomographyMatrix, Status = FindHomography(good_matches, keypoints1, keypoints2)

BaseImage = image1
SecImage = image2

NewFrameSize, Correction, HomographyMatrix = GetNewFrameSizeAndMatrix(HomographyMatrix, SecImage.shape[:2],
                                                                      BaseImage.shape[:2])

StitchedImage = cv.warpPerspective(SecImage, HomographyMatrix, (NewFrameSize[1], NewFrameSize[0]))
StitchedImage[Correction[1]:Correction[1] + BaseImage.shape[0],
Correction[0]:Correction[0] + BaseImage.shape[1]] = BaseImage

cv.namedWindow("stitched2.jpg", cv.WINDOW_NORMAL)
cv.imshow("stitched2.jpg", StitchedImage)


cv.imwrite("result.jpg", StitchedImage)
while True:
    if cv.waitKey(1) == 27:
        break
  • Perspective homography is only correct for a planar scene. Currently you are stitching on the checkerboard plane. Place the checkerboard on your target plane (same orientation as the target plane) and you should see a better result. – Micka Oct 20 '22 at 11:06
  • @Micka Do you mean turn the board 90 degrees or raise the board? – gfd2 Oct 20 '22 at 11:09
  • can you show a picture or a drawing of the scene? I don't know what your scene looks like, but from the perspective lines it looks like you either moved the chessboard between the two cameras' images, or the scene has perspective foreshortening while your chessboard hasn't, which would mean that they are not in the same orientation. – Micka Oct 20 '22 at 11:12
  • @Micka It's a 170-degree camera. First I did the camera calibration, then I got the top view. – gfd2 Oct 20 '22 at 11:14
  • @Micka I added the original images – gfd2 Oct 20 '22 at 11:18
  • thx, now I see that the lines are not perspective foreshortening but vertical artifacts in the original images. What exactly is your problem with the stitched image? The registration quality (the chessboard doesn't fit perfectly at the bottom), the content not fitting perfectly (the diagonal lines going in different directions), or just the visibility of the seam? – Micka Oct 20 '22 at 11:28
  • @Micka Yes, the visibility of the seam – gfd2 Oct 20 '22 at 11:31
  • use cross-blending by setting the pixel in the overlap region to pixel = a*pixelFromImage1 + b*pixelFromImage2 where a+b = 1 and try to set a smaller when closer to image1-border and set b smaller when closer to image2-border. See https://stackoverflow.com/questions/22315904/blending-does-not-remove-seams-in-opencv/22324790#22324790 for inspiration – Micka Oct 20 '22 at 11:44
  • and have a look at: https://stackoverflow.com/questions/67800302/multi-band-blending-makes-seams-brighter-and-more-visible/67826116#67826116 – Micka Oct 20 '22 at 11:46
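The cross-blending suggested in the comments could be sketched like this (a minimal NumPy sketch; the function name and the "black pixel = uncovered" convention are my assumptions, and both inputs are expected to be already warped onto the same canvas):

```python
import numpy as np


def cross_blend(warped1, warped2):
    """Cross-blend two images already warped onto the same canvas.
    Pixels covered by only one image are copied; in the overlap the
    weight of each image ramps linearly across the overlap columns,
    so the seam fades out instead of being a hard cut."""
    m1 = warped1.sum(axis=2) > 0  # coverage masks (non-black pixels)
    m2 = warped2.sum(axis=2) > 0
    overlap = m1 & m2

    # Start with a hard composite: image 1 wherever it is covered
    out = np.where(m1[..., None], warped1, warped2).astype(np.float64)

    cols = np.where(overlap.any(axis=0))[0]
    if cols.size:
        left, right = cols.min(), cols.max()
        # a goes 1 -> 0 for image 1 across the overlap, so b = 1 - a
        # goes 0 -> 1 for image 2 (a + b = 1, as in Micka's comment)
        alpha = np.zeros(out.shape[1])
        alpha[left:right + 1] = np.linspace(1.0, 0.0, right - left + 1)
        a = alpha[None, :, None]
        blended = a * warped1 + (1 - a) * warped2
        out[overlap] = blended[overlap]
    return out.astype(np.uint8)
```

In the pipeline above this would replace the direct paste of `BaseImage` into `StitchedImage`: warp both images onto the `NewFrameSize` canvas (the base image only needs the translation by `Correction`) and blend them instead of overwriting.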
