I am using a camera calibration routine and I want to calibrate a camera with a large set of images.
Code: (from here)
import numpy as np
import cv2
import glob
import argparse
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
def calibrate():
    """ Apply camera calibration operation for images in the given directory path. """
    height = 8
    width = 10
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(9,7,0)
    objp = np.zeros((height*width, 3), np.float32)
    objp[:, :2] = np.mgrid[0:width, 0:height].T.reshape(-1, 2)
    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d points in real world space
    imgpoints = []  # 2d points in image plane
    # Get the images
    images = glob.glob('thermal_final set/*.png')
    # Iterate through the images and find chessboard corners. Add them to the arrays.
    # If OpenCV can't find the corners in an image, we discard that image.
    for fname in images:
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Find the chessboard corners
        ret, corners = cv2.findChessboardCorners(gray, (width, height), None)
        # If found, add object points and image points (after refining them)
        if ret:
            objpoints.append(objp)
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            imgpoints.append(corners2)
            # Draw the corners on the image so the detected pattern can be checked visually
            img = cv2.drawChessboardCorners(img, (width, height), corners2, ret)
    e1 = cv2.getTickCount()
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
    e2 = cv2.getTickCount()
    t = (e2 - e1) / cv2.getTickFrequency()
    print(t)
    return [ret, mtx, dist, rvecs, tvecs]
if __name__ == '__main__':
    ret, mtx, dist, rvecs, tvecs = calibrate()
    print("Calibration is finished. RMS: ", ret)
Now, the problem is the time that cv2.calibrateCamera() takes, which depends on the number of points (derived from the images) used.
Result with 40 images:
9.34462341234 seconds
Calibration is finished. RMS: 2.357820395255311
Result with 80 images:
66.378870749 seconds
Calibration is finished. RMS: 2.864052963156834
The time taken grows much faster than linearly as the number of images increases.
Now, I have a really large set of images (500).
I have tried calibrating the camera with the points from each single image separately and then averaging all the results, but those results differ from what I get with the method above.
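For reference, the single-image attempt looks roughly like this (a rough sketch; the element-wise averaging of the resulting camera matrices and distortion vectors is my own choice, not an OpenCV recipe):

import numpy as np
import cv2
import glob

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
height, width = 8, 10
objp = np.zeros((height * width, 3), np.float32)
objp[:, :2] = np.mgrid[0:width, 0:height].T.reshape(-1, 2)

per_image_mtx, per_image_dist = [], []
for fname in glob.glob('thermal_final set/*.png'):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (width, height), None)
    if not found:
        continue
    corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # Calibrate from this single view only
    rms, mtx, dist, _, _ = cv2.calibrateCamera([objp], [corners2], gray.shape[::-1], None, None)
    per_image_mtx.append(mtx)
    per_image_dist.append(dist)

# Element-wise mean of the per-image camera matrices / distortion vectors
avg_mtx = np.mean(per_image_mtx, axis=0)
avg_dist = np.mean(per_image_dist, axis=0)
print(avg_mtx, avg_dist)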
Also, I am sure that my setup is using optimized OpenCV; I checked with:
print(cv2.useOptimized())
How do I make this process faster? Can I leverage threads here?
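To be concrete, the part I imagine could run in parallel is the per-image corner detection, since each image is processed independently (I assume the single cv2.calibrateCamera call itself cannot simply be split up). A rough sketch of that idea using multiprocessing, where find_corners is a hypothetical helper of mine and not part of my current script:

import glob
from multiprocessing import Pool

import cv2
import numpy as np

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
pattern_size = (10, 8)  # (width, height), same board as above

def find_corners(fname):
    # Detect and refine chessboard corners in one image; each call is independent,
    # so the images can be handled by parallel worker processes.
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size, None)
    if not found:
        return None
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    return corners, gray.shape[::-1]

if __name__ == '__main__':
    images = glob.glob('thermal_final set/*.png')
    with Pool() as pool:
        results = [r for r in pool.map(find_corners, images) if r is not None]

    # One identical object-point array per successfully detected view
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objpoints = [objp] * len(results)
    imgpoints = [corners for corners, _ in results]
    image_size = results[0][1]

    # The actual calibration is still a single call on all collected points
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, image_size, None, None)
    print("RMS:", ret)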
Edit: Updated the wording from "calibrating images" to "calibrating the camera using images".