
I am using OpenCV to estimate a webcam's intrinsic matrix from a series of chessboard images, as detailed in this tutorial, and to reverse-project a pixel to a direction (in terms of azimuth/elevation angles).

The final goal is to let the user select a point on the image, estimate the direction of that point relative to the center of the webcam, and use it as the DOA (direction of arrival) for a beamforming algorithm.

So once I have estimated the intrinsic matrix, I reverse-project the user-selected pixel (see code below) and display it as azimuth/elevation angles.

# mtx: intrinsic matrix from the calibration below; cap, flag, mouse_x, mouse_y
# are set up elsewhere (capture device and mouse callback)
result = [0, 0, 0]  # reverse-projected point, in homogeneous coordinates
while True:
    _, img = cap.read()
    if flag:  # the user has clicked somewhere (set by the mouse callback)
        # back-project the pixel through the inverse of the intrinsic matrix
        result = np.dot(np.linalg.inv(mtx), [mouse_x, mouse_y, 1])
        result = np.arctan(result)  # element-wise: approximate az/el angles
        flag = False

    cv2.putText(img, '({},{})'.format(mouse_x, mouse_y), (20, 440), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.putText(img, '({:.2f},{:.2f})'.format(180/np.pi*result[0], 180/np.pi*result[1]), (20, 460),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2, cv2.LINE_AA)

    cv2.imshow('image', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
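
A note on the conversion: taking np.arctan element-wise is an approximation. For the back-projected ray (x, y, 1), arctan(x) is the exact azimuth, but arctan(y) matches the elevation only when the azimuth is zero. A conversion that accounts for the full ray geometry could look like this (a sketch; pixel_to_az_el is a hypothetical helper, assuming an undistorted image and OpenCV's camera convention of x right, y down, z forward):

import numpy as np

def pixel_to_az_el(u, v, K):
    # back-project the pixel through the inverse intrinsic matrix
    x, y, _ = np.dot(np.linalg.inv(K), [u, v, 1.0])
    azimuth = np.arctan2(x, 1.0)                  # exact, equals arctan(x)
    elevation = np.arctan2(-y, np.hypot(x, 1.0))  # y points down in image coordinates
    return azimuth, elevation  # in radians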

My problem is that I'm not sure whether my results are coherent. The main incoherence is that the point of the image corresponding to the (0, 0) angle is noticeably off the image center, as seen below (the camera image has been replaced by a black background for privacy reasons):

Reverse-projected center point

I don't really see a simple yet efficient way of measuring the accuracy (the only method I could think of was to mount a laser on a servo motor placed just under the camera and point it in the computed direction).

Here is the intrinsic matrix after calibration with 15 images:

Intrinsic matrix

I get an RMS error of around 0.44, which seems satisfactory.

My calibration code:

import time
import cv2
import numpy as np

nCalFrames = 12  # number of frames for calibration
nFrames = 0
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)  # termination criteria

objp = np.zeros((9*7, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:7].T.reshape(-1, 2)
objpoints = []  # 3d point in real world space
imgpoints = []  # 2d points in image plane.

cap = cv2.VideoCapture(0)
previousTime = 0
gray = 0

while True:
    # Capture frame-by-frame
    _, img = cap.read()

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9, 7), None)

    # If found, add object points, image points (after refining them)
    if ret:

        corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

        if time.time() - previousTime > 2:
            previousTime = time.time()
            imgpoints.append(corners2)
            objpoints.append(objp)
            img = cv2.bitwise_not(img)  # flash the frame to signal a captured sample
            nFrames = nFrames + 1

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (9, 7), corners, ret)

    cv2.putText(img, '{}/{}'.format(nFrames, nCalFrames), (20, 460), cv2.FONT_HERSHEY_SIMPLEX,
                2, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.putText(img, 'press \'q\' to exit...', (255, 15), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 0, 255), 1, cv2.LINE_AA)
    # Display the resulting frame
    cv2.imshow('Webcam Calibration', img)
    if nFrames == nCalFrames:
        break

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

RMS_error, mtx, disto_coef, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
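
Beyond the single global figure, each calibration view can be checked individually by reprojecting the board corners with the per-view rvecs and tvecs kept above. A short sketch, following the standard OpenCV reprojection-error check:

for i in range(len(objpoints)):
    # reproject the known board corners with the estimated parameters
    projected, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, disto_coef)
    per_view = cv2.norm(imgpoints[i], projected, cv2.NORM_L2) / np.sqrt(len(projected))
    print('view {}: RMS reprojection error = {:.3f} px'.format(i, per_view))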

EDIT: another test method would be to use a whiteboard with points at known angles and estimate the error by comparing with the experimental results, but I don't know how to set up such a system.
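
For instance (a sketch with assumed measurements): with the camera facing a wall at a measured distance D, marks placed at measured horizontal offsets have known azimuths of arctan(offset / D), which could be compared against the angles computed from the clicked pixels:

import numpy as np

D = 2.0                               # assumed camera-to-wall distance, in metres
offsets = np.array([-0.5, 0.0, 0.5])  # assumed horizontal offsets of the marks, in metres
expected_az = np.degrees(np.arctan2(offsets, D))
print(expected_az)  # compare with the azimuths computed from the clicked pixels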

  • See these [answers](https://stackoverflow.com/questions/12794876/how-to-verify-the-correctness-of-calibration-of-a-webcam) to check the accuracy of the camera calibration. – Catree Apr 05 '18 at 11:21
  • Yeah, I already saw that, and my RMS error looks satisfying, but it's for the reverse projection (az/el estimation) that I would like to know whether my code is correct... – Florent Apr 06 '18 at 03:34
  • I might be off base here (it's been a while), but don't you also need to multiply your result by the inverse of your [R|t] matrix before you perform the arctan function? I think you have your point in metrical camera coordinates, but not world coordinates. – Saedeas Apr 06 '18 at 20:53
  • You're right: if I wanted the point in world coordinates I would need the extrinsic matrix, but since I just want a direction vector, I thought the image point in metrical coordinates would be enough. The vector goes from the camera origin to this point, and its az/el angles are independent of the extrinsic parameters (correct me if I'm wrong). Move the camera around, and the ray going through the pixel won't change direction. – Florent Apr 07 '18 at 03:20

1 Answer


Regarding your first concern, it is normal to have the principal point off the image center. The estimated point, which is the point of zero elevation and azimuth, is the one that minimizes the radial distortion coefficients, and for a low-cost wide-angle lens (e.g., that of a typical webcam) it can easily be off by a noticeable amount.
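
You can read the offset straight off the calibrated matrix (a quick sketch, assuming mtx and a frame img from the question's code):

h, w = img.shape[:2]
cx, cy = mtx[0, 2], mtx[1, 2]  # principal point from the intrinsic matrix
print('offset from image center: ({:+.1f}, {:+.1f}) px'.format(cx - w / 2, cy - h / 2))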

Your calibration should be OK up to the call to calibrateCamera. However, in your code snippet it seems you're ignoring the distortion coefficients. What is missing is initUndistortRectifyMap, which also lets you re-center the principal point if that matters.

h, w = img.shape[:2]
# compute a new camera matrix with a centered principal point
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, disto_coef, (w, h), 1, (w, h))
print(new_mtx)
# compute the undistortion maps
mapx, mapy = cv2.initUndistortRectifyMap(mtx, disto_coef, None, new_mtx, (w, h), cv2.CV_32FC1)

It essentially makes the focal length equal in both dimensions and centers the principal point (see the OpenCV Python documentation for the parameters).

Then, at each

_, img = cap.read()

you must undistort the image before rendering:

# apply the undistortion remap
img = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
# crop to the valid region of interest
x, y, w, h = roi
img = img[y:y+h, x:x+w]

Here, I set the background to green to emphasize the barrel distortion. The output could look something like this (the camera image is replaced by a checkerboard for privacy reasons):

distortion

If you do all this, and if your calibration target is accurate and your calibration samples fill the entire image area, you should be quite confident of the computation. However, to validate the measured azimuth and elevation against the undistorted image's pixel readings, I'd suggest a tape measure from the lens's first principal point and a calibration plate placed at a normal angle right in front of the camera. There you can compute the expected angles and compare.
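
Note also that once the frames are undistorted this way, the clicked pixel should be back-projected through the inverse of new_mtx rather than mtx. Alternatively, for a single pixel there is no need to remap the whole frame: cv2.undistortPoints maps a raw (distorted) pixel straight to normalized camera coordinates. A minimal sketch, assuming mouse_x, mouse_y and the calibration results from the question:

px = np.array([[[mouse_x, mouse_y]]], dtype=np.float32)    # raw clicked pixel
x_n, y_n = cv2.undistortPoints(px, mtx, disto_coef)[0, 0]  # normalized, distortion removed
# (x_n, y_n, 1) is the corrected ray; apply the same az/el conversion as before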

Hope this helps.

mainactual
  • Thanks a lot for this answer! It's true that I did not apply the distortion correction, as I thought the distortion coefficients were part of the intrinsic matrix and that the reverse projection would thus be correct. If you allow me, I will edit your answer later to change the code back to Python, just for more coherence with the question! – Florent Apr 09 '18 at 11:00