
I have an image of a chessboard taken at an angle. Now I want to warp the perspective so that the chessboard looks as if the image was taken directly from above.

I know that I could use 'findHomography' between matched points, but I wanted to avoid that and instead use e.g. the rotation data from the mobile device's sensors to build the homography matrix on my own. I calibrated my camera to get the intrinsic parameters. Now let's say the following image was taken at a ~60 degree angle around the x-axis. I thought all I had to do was multiply the camera matrix with the rotation matrix to obtain the homography matrix. I tried the following code, but it looks like I'm not understanding something correctly, because it doesn't work as expected (the resulting image is completely black or white).

[Image: chessboard photographed at an angle]

import cv2
import numpy as np
import math

camera_matrix = np.array([[5.7415988502105745e+02, 0., 2.3986181527877352e+02],
                          [0., 5.7473682183375217e+02, 3.1723734404756237e+02],
                          [0., 0., 1.]])

distortion_coefficients = np.array([1.8662919398453856e-01, -7.9649812697463640e-01,
                                    1.8178068172317731e-03, -2.4296638847737923e-03,
                                    7.0519002388825025e-01])

theta = math.radians(60)

rotx = np.array([[1, 0, 0],
                 [0, math.cos(theta), -math.sin(theta)],
                 [0, math.sin(theta), math.cos(theta)]])

homography = np.dot(camera_matrix, rotx)

im = cv2.imread('data/chess1.jpg')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

im_warped = cv2.warpPerspective(gray, homography, (480, 640), flags=cv2.WARP_INVERSE_MAP)
cv2.imshow('image', im_warped)
cv2.waitKey()

I also have distortion_coefficients after calibration. How can those be incorporated into the code to improve results?

pzo

3 Answers


This answer is awfully late by several years, but here it is ...

(Disclaimer: my use of terminology in this answer may be imprecise or incorrect. Please look this topic up in other, more credible sources.)


Remember:

  • Because you only have one image (view), you can only compute 2D homography (perspective correspondence between one 2D view and another 2D view), not the full 3D homography.
  • Because of that, the nice intuitive understanding of the 3D homography (rotation matrix, translation matrix, focal distance, etc.) is not available to you.
  • What we say is that with 2D homography you cannot factorize the 3x3 matrix into those nice intuitive components like 3D homography does.
  • You have one matrix - (which is the product of several matrices unknown to you) - and that is it.

However,

OpenCV provides a getPerspectiveTransform function which solves for the 3x3 perspective matrix (using the homogeneous coordinate system) of a 2D homography between two planar quadrilaterals.

Link to documentation

To use this function,

  • Find the four corners of the chessboard on the image. These will be your source coordinates.
  • Supply four rectangle corners of your choice. These will be your destination coordinates.
  • Pass the source coordinates and destination coordinates into getPerspectiveTransform to generate a 3x3 matrix that is able to dewarp your chessboard to an upright rectangle.

Notes to remember:

  • Mind the ordering of the four corners.

    • If the source coordinates are picked in clockwise order, the destination also needs to be picked in clockwise order.
    • Likewise, if counter-clockwise order is used, do it consistently.
    • Likewise, if z-order (top left, top right, bottom left, bottom right) is used, do it consistently.
    • Failure to order the corners consistently will generate a matrix that executes the point-to-point correspondence exactly (mathematically speaking), but will not generate a usable output image.
  • The aspect ratio of the destination rectangle can be chosen arbitrarily. In fact, it is not possible to deduce the "original aspect ratio" of the object in world coordinates, because "this is 2D homography, not 3D".

rwong

One problem is that to multiply by a camera matrix you need some concept of a z coordinate. You should start by getting basic image warping given Euler angles to work before you think about distortion coefficients. Have a look at this answer for a slightly more detailed explanation and try to duplicate my result. The idea of moving your image down the z axis and then projecting it with your camera matrix can be confusing; let me know if any part of it does not make sense.

Hammer
  • Thanks, it helped me understand it better, but I still don't get 2 things: 1) in #2, is 'center the image at the origin' necessary? By origin do we mean the middle of the image? 2) regarding '#4 move the image down the z axis': does it mean we need the distance from the camera to the plane? Why did you use 'image.rows' for the 'z' value in your example? Is it just hardcoded? I also read 'Step by Step Camera Pose Estimation' on StackExchange and I still don't understand what I should put for 'Tz'. To clarify: I don't mind if the rectangles get scaled; I just need the angles of the rectangles to be 90 degrees – pzo Oct 30 '12 at 23:06
  • @user657429 If you want perspective effects to be centered on the center of your image, then the image needs to be centered on 0,0,0 in world space. In opencv the origin starts at the top left. You can either move it manually like I did, or multiplying by the inverse of the camera matrix works too and might be more intuitive. [Here](http://stackoverflow.com/questions/13100573/how-can-i-transform-an-image-using-matrices-r-and-t-extrinsic-parameters-matric/13106130#13106130) is another explanation, does it make the theory more clear? – Hammer Nov 01 '12 at 15:18

You do not need to calibrate the camera nor estimate the camera orientation (the latter, however, in this case would be very easy: just find the vanishing points of those orthogonal bundles of lines, and take their cross product to find the normal to the plane, see Hartley & Zisserman's bible for details).

The only thing you need to do is estimate the homography that maps the checkers to squares, then apply it to the image.

Francesco Callari
  • This is what I'm trying to do: estimate/build the homography, but it doesn't work. And I don't want to use findHomography, for two reasons: 1) I already know the rotation matrix from the sensor and want to save calculation time; 2) it might be hard for me to obtain corresponding points (instead of rectangles I might have circles) – pzo Oct 30 '12 at 22:43