
I calibrated my camera using, separately, ROS, OpenCV and Matlab. People say that I need the extrinsic parameters to calculate the real distance in cm between pixels in the image. ROS does not provide the extrinsic parameters explicitly; it provides a 3x4 projection matrix, which is the product of the intrinsic and extrinsic parameters.

ROS camera.yaml file which includes the camera parameters
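
(For reference, if only the 3x4 projection matrix P from the ROS camera.yaml / camera_info is available, it should be possible to split it back into intrinsics and extrinsics with cv2.decomposeProjectionMatrix. A minimal sketch; the P values below are placeholders, not my real calibration:)

import numpy as np
import cv2

# Placeholder 3x4 projection matrix P = K [R | t]; substitute the values
# from the ROS camera.yaml / camera_info message.
P = np.array([[600.0,   0.0, 320.0, 0.0],
              [  0.0, 600.0, 240.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])

# decomposeProjectionMatrix returns the intrinsic matrix K, the rotation R
# and the camera centre as a homogeneous 4-vector.
K, R, c_hom, _, _, _, _ = cv2.decomposeProjectionMatrix(P)
c = (c_hom[:3] / c_hom[3]).ravel()  # camera centre in world coordinates

print(K)
print(R)
print(c)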

That is why I calibrated my camera again using OpenCV and Matlab to get the extrinsic parameters. Although I searched for how to calculate the real distance in cm between pixels (i.e. from (x1,y1) to (x2,y2)), I could not figure it out, and I did not understand which parameters to use for the distance calculation. I want to use OpenCV to calculate the distance between pixels and write the output to a txt file, so that I can use this txt file to move my robot.

For example, here is array of pixel output sample for the path,

array([[  4.484375  , 799.515625  ],
       [ 44.484375  , 487.        ],
       [255.296875  , 476.68261719],
       [267.99707031, 453.578125  ],
       [272.484375  , 306.        ],
       [403.484375  , 300.515625  ],
       [539.484375  , 296.515625  ],
       [589.421875  , 270.00292969],
       [801.109375  , 275.18554688],
       [819.        , 467.515625  ]])

I want to find the real distance in cm between these pixels, in order.
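
For illustration, this is roughly what I imagine the final step to look like: a minimal sketch assuming the camera looks straight down at a flat floor from a known height H (in my case from the drone's lidar), and using the intrinsics saved by the calibration script below. The height value and file names are placeholders:

import numpy as np
import cv2

# Placeholder: camera height above the floor, in metres (from the lidar).
H = 1.45

# Intrinsics and distortion coefficients saved by the calibration script below.
data = np.load("CalibData.npz")
mtx, dist = data["mtx"], data["dist"]

# Pixel path (a few of the points from the array above).
pixels = np.array([[  4.484375, 799.515625],
                   [ 44.484375, 487.      ],
                   [255.296875, 476.68261719]], dtype=np.float32)

# Undistort and normalise: each point becomes (x/z, y/z) in camera coordinates.
norm = cv2.undistortPoints(pixels.reshape(-1, 1, 2), mtx, dist).reshape(-1, 2)

# For a camera looking straight down at a plane H metres away, the ray through
# a normalised point hits the floor at (x*H, y*H).
ground = norm * H  # metres on the floor plane

# Euclidean distance between consecutive path points, converted to cm,
# written out so the robot can consume it.
seg_cm = np.linalg.norm(np.diff(ground, axis=0), axis=1) * 100.0
np.savetxt("distances_cm.txt", seg_cm, fmt="%.2f")
print(seg_cm)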

OpenCV code that calculates the parameters:

import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

cbrow = 6
cbcol = 9

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (8,5,0)
objp = np.zeros((cbrow*cbcol,3), np.float32)
objp[:,:2] = np.mgrid[0:cbcol,0:cbrow].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

images = glob.glob(r'C:\Users\Ender\Desktop\CalibrationPhotos\*.jpg')  # raw string so the backslashes are not treated as escapes

for fname in images:
    img = cv2.imread(fname)

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (cbcol,cbrow),None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        print "%s: success" % fname
        objpoints.append(objp)

        corners = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)  # refine corner locations
        imgpoints.append(corners)

        # Draw and display the corners
        cv2.drawChessboardCorners(img, (cbcol,cbrow), corners,ret)
        cv2.imshow('img',img)
        cv2.waitKey(150)

    else:
        print "%s: failed" % fname
        cv2.imshow('img',img)
        cv2.waitKey(1)

cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)

print "mtx"
print mtx
print "dist"
print dist
#print "rvecs"
#print rvecs
#print "tvecs"
#print tvecs
np.savez("CalibData.npz" ,mtx=mtx, dist=dist, rvecs=rvecs, tvecs=tvecs)

# UNDISTORTION
img = cv2.imread(r'C:\Users\Ender\Desktop\Maze\Maze Images\Partial Maze Images-Raw\Raw7.jpg')
h,  w = img.shape[:2]
newcameramtx, roi=cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),1,(w,h))

dst = cv2.undistort(img, mtx, dist, None, newcameramtx)

# crop the image
x,y,w,h = roi
dst = dst[y:y+h, x:x+w]
cv2.imwrite('calibresult.jpg',dst)

#Re-projection Errors
total_error = 0
for i in xrange(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i],imgpoints2, cv2.NORM_L2)/len(imgpoints2)
    total_error += error

print "total error: ", total_error/len(objpoints)

What is the way of doing this?

Ender Ayhan
  • What is real distance? What units is it in? How do you calculate depth (distance from focal plane) on a 2D image? – Mad Physicist Sep 09 '18 at 17:16
  • By real distance I mean centimeters (cm). I don't calculate depth since people say that I just need the intrinsic and extrinsic parameters. @MadPhysicist – Ender Ayhan Sep 09 '18 at 17:18
  • you can shoot rays from the camera center through the pixels on the image plane, then you know every possible real-world position for that pixel. You need another piece of information about that projected object, like the distance, or an intersection with a plane or something, to determine the exact point on the ray. – Micka Sep 09 '18 at 19:23
  • To be able to calculate the real-world distance between two arbitrary pixels, you need two cameras looking at the same points, or depth information for the pixels somehow, or somehow a transform between the camera position and some of the pixels in the scene. Without any of these you cannot calculate 3D locations of pixels with a single camera. – unlut Sep 09 '18 at 19:57
  • @unlut Apparently, depth information is easier for me. To specify pixel distances, do I need to calculate the depth information just once? I mean, for example, if I take an image of a small object with known size and calculate the distance between pixels, can I use this information for other images? – Ender Ayhan Sep 09 '18 at 20:24
  • if you know the real size of the object (and the pixel coordinates), you can determine the position of the rays (the place of the object). If you compute a plane from those points, you can compute intersections of pixel rays with that plane and measure distances on that plane. – Micka Sep 09 '18 at 20:56
  • @Micka Do you mean that I have to determine the position of the object for every image? (In my case, it is a real-time application.) – Ender Ayhan Sep 09 '18 at 21:02
  • if your camera is static (relative to the plane/space you measured once) you only have to do it once. But probably that's not the case? – Micka Sep 09 '18 at 21:06
  • My camera is, for now, static. I captured lots of calibration images from a fixed altitude. After calculations, I will mount it on a drone. I think you understood me better than anyone. – Ender Ayhan Sep 09 '18 at 21:10
  • How accurately will your drone be able to measure its distance above the surface you're imaging? (You will have to account for tilt as well, I suppose -- say, if it's windy, or the drone (I assume quadcopter) is moving, it won't be looking straight down)... – Dan Mašek Sep 09 '18 at 21:15
  • It is pretty accurate, roughly ±0.0025 m, since I am using a lidar. I am also using a Pixhawk Cube, which has a barometer. The drone will fly in an indoor environment, so there is no wind. Yes, it is a quadcopter and the camera is downward facing. @DanMašek – Ender Ayhan Sep 09 '18 at 21:19
  • I also want to say that I will collect continuous images during the flight, but I will process only one image to get the pixel coordinates, after an image-stitching operation on the collected images. So, can I assume my altitude (distance to the object) and calculate the distance between pixels to convert the pixel arrays to 3D coordinates? – Ender Ayhan Sep 09 '18 at 21:24
  • Then you should be able to get a pretty good estimate. Calibrate it at a known height on some known-sized object. Then there should be a linear relationship between the real distance and the spacing of two adjacent pixels. This assumes your drone can keep the camera looking straight down, onto a level surface. – Dan Mašek Sep 09 '18 at 21:24
  • @DanMašek Briefly, I should do the camera calibration (chessboard method) at the altitude that the drone will reach, right? Because I did the calibration at a lower height (1.45 meters). – Ender Ayhan Sep 09 '18 at 21:30
  • As far as I understand it, it doesn't matter at what distance you do the calibration... in fact you should move the chessboard around in the 3D space (tilting and moving forward/back) to get a good measure. What it does is determine the distortion caused by the optics, so that you can undistort the input and get a uniform (linear) image to make measurements on (or stack). The characteristics of the optics will stay the same no matter how far from the imaged surface you are (assuming it's fixed, not some mechanized auto-focus). -- It might be close if anything is adjusted, but best if static. – Dan Mašek Sep 09 '18 at 21:36
  • then I already did it, as I mentioned in my post. I am looking for an answer for the next step. Is just the distance (depth) enough to calculate the spacing of pixels? If so, is there any tutorial? I could not find one. Thank you – Ender Ayhan Sep 09 '18 at 21:39
  • Do a simple experiment -- take an object of known dimensions, and image it from two known heights. Make it large, so that you minimize the error caused by pixel size (quantization) -- make it span most of the image at the lower altitude. Having made that measurement, you can calculate pixel/mm (or whatever unit you care for) along each axis, at two known altitudes. Knowing the relationship is linear, you can calculate a formula to give you pixel size at a given altitude. | Once you figure that out, test the algorithm on objects of known size, and see how close it gets. – Dan Mašek Sep 09 '18 at 21:46 (see the sketch after these comments)
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/179717/discussion-between-dan-masek-and-ender-ayhan). – Dan Mašek Sep 09 '18 at 21:52
  • Possible duplicate of [OpenCV: How-to calculate distance between camera and object using image?](https://stackoverflow.com/questions/14038002/opencv-how-to-calculate-distance-between-camera-and-object-using-image) – stovfl Sep 12 '18 at 19:04
  • @stovfl thank you. it is really handy – Ender Ayhan Sep 13 '18 at 19:29
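
Following Dan Mašek's suggestion, here is a sketch of the two-altitude experiment as I understand it: measure an object of known size at two known heights, compute cm per pixel at each, and fit the linear relationship so the pixel size at any altitude can be predicted. All measured values below are placeholders:

import numpy as np

# Placeholder measurements: an object of known width imaged at two known heights.
object_width_cm = 30.0
measurements = [
    (1.00, 620.0),  # (altitude in metres, measured object width in pixels)
    (2.00, 310.0),
]

altitudes = np.array([m[0] for m in measurements])
cm_per_px = np.array([object_width_cm / m[1] for m in measurements])

# The relationship is linear in altitude: cm_per_px(H) ~ a*H + b.
a, b = np.polyfit(altitudes, cm_per_px, 1)

def scale_at(H):
    # Estimated cm per pixel when flying at altitude H (metres).
    return a * H + b

# Example: distance in cm between two of the path pixels at a 1.5 m altitude.
p1 = np.array([44.484375, 487.0])
p2 = np.array([255.296875, 476.68261719])
print(np.linalg.norm(p1 - p2) * scale_at(1.5))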

0 Answers