
The purpose of my project is to convert some 2D/Image points into 3D/World coordinates.

In order to determine the image coordinates, I have mapped the exact location of each point in meters/cms in the real world. Please see the image below.

[Image: undistorted image (via cv2.undistort) with all the image points highlighted in red]

We assume the surface is flat, therefore z = 0 in world coordinates. This allows us to recover x and y in 3D space while ignoring the Z axis.
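For reference, with z = 0 the projection collapses to a 3×3 homography H = K·[r1 r2 t]. Here is a minimal pure-NumPy round trip with made-up intrinsics and pose (illustration only, not my calibration):

```python
import numpy as np

# Made-up intrinsics and pose, for illustration only
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
theta = 0.3  # camera tilted about the x axis
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta), np.cos(theta)]])
t = np.array([0.5, 1.0, 5.0])

# For points on the z = 0 plane: H = K [r1 r2 t]
H = K @ np.column_stack((R[:, 0], R[:, 1], t))

# Forward: world (x, y, 0) -> pixel
xy = np.array([1.2, -0.7])
u, v, w = H @ np.array([xy[0], xy[1], 1.0])
uv = np.array([u / w, v / w])

# Back: pixel -> world plane; divide by the homogeneous coordinate
x, y, s = np.linalg.inv(H) @ np.array([uv[0], uv[1], 1.0])
xy_back = np.array([x / s, y / s])
print(xy_back)  # recovers [1.2, -0.7] up to floating-point error
```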

I use the following code, mostly consisting of OpenCV functions.

import numpy as np
import cv2

# calibration done with cv2.calibrateCamera to get the camera matrix and distortion params
dist_params =  np.array([-2.80467218e-01, 6.67589890e-02, 9.79684e-05, 7.560530e-04, 0])
cam_matrix = np.array([
    [880.27, 0, 804.05388],
    [0.0, 877.2202, 431.85688],
    [0.0, 0.0, 1.0],
    ])
        
# values are in meters
world_points_real = np.array([  # x, y, z
    [4.92,   0.0, 0.0],
    [4.92,  -1.2, 0.0],
    [4.92, -2.44, 0.0],
    [4.92,  -3.6, 0.0],
    [4.62,  5.66, 0.0],
], dtype=np.float32).reshape((-1, 3))


img_points = np.array([  # u, v
    [ 937, 590],
    [1076, 597],
    [1220, 602],
    [1359, 608],
    [ 336, 543],
], dtype=np.float32).reshape((-1, 2))



# find rvecs and tvecs using OpenCV solvePnP methods
ret, rvecs, tvecs, inliers = cv2.solvePnPRansac(world_points_real, img_points, cam_matrix, dist_params)  # flags=cv2.SOLVEPNP_ITERATIVE


# project 3d points to 2d
img_points_project, jac = cv2.projectPoints(np.array([[4.62, 5.66, 0.0]]), rvecs, tvecs, cam_matrix, dist_params)
print("img_points:", img_points_project)  # this should be [305, 537]

# cv2.reprojectImageTo3D expects a disparity map plus the 4x4 Q matrix from
# cv2.stereoRectify, not an r|t matrix, hence:
# (-215:Assertion failed) Q.size() == Size(4,4)
# cv2.reprojectImageTo3D(img_points, r_and_t_vec)


# We assume a flat surface, i.e. z = 0, to do the 2D-to-3D projection.

# Convert the rotation vector to a rotation matrix via Rodrigues
rvecs_rod, _ = cv2.Rodrigues(rvecs)

# create the (3, 4) [R|t] matrix
r_and_t_vec = np.zeros((3, 4))
r_and_t_vec[:, :3] = rvecs_rod
r_and_t_vec[:, 3] = tvecs.reshape(-1)


# find the scaling factor:
# [R|t] times a known world point [x, y, z, 1]; row 3 is that point's depth
scaling_factor = np.dot(r_and_t_vec, np.array([4.92, 0.0, 0.0, 1]).reshape((4, 1)))


# drop r3: since z = 0, the third column of [R|t] contributes nothing
r_and_t_vec_nor3 = np.delete(r_and_t_vec, 2, 1)

# Homography for the z = 0 plane and its inverse (constant, so computed once)
mat2 = np.dot(cam_matrix, r_and_t_vec_nor3)
inv_mat2 = np.linalg.inv(mat2)

for i in range(len(img_points)):

    # homogeneous 2D point
    uv1 = np.array([img_points[i][0], img_points[i][1], 1])

    # back-project and rescale
    result2 = np.dot(inv_mat2, uv1) * scaling_factor[2]

    print("world_points:", result2)  # this should match world_points_real

However, the projection is off by 2-3 meters. A ChatGPT-suggested solution could only do the 3D-to-2D projection using cv2.projectPoints, not the reverse.

I tried using cv2.solvePnP to get the rvecs and tvecs, then used them to generate the (3, 4) projection matrix. I use the inverse of the projection matrix to get the 3D projection given 2D image points. The rvecs and tvecs from solvePnP are below:

r_vecs = array([[ 1.24420712],
       [-1.23779433],
       [ 1.33559117]])

t_vecs = array([[1.54571503],
       [1.46783797],
       [2.78172814]])
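As a sanity check independent of OpenCV, the pose above can be applied by hand: rotate a known world point into the camera frame, then apply the pinhole model. A sketch with Rodrigues' formula written out in NumPy (pose values copied from above; lens distortion is ignored, so a few pixels of error are expected):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

cam_matrix = np.array([[880.27, 0.0, 804.05388],
                       [0.0, 877.2202, 431.85688],
                       [0.0, 0.0, 1.0]])
rvec = np.array([1.24420712, -1.23779433, 1.33559117])
tvec = np.array([1.54571503, 1.46783797, 2.78172814])

R = rodrigues(rvec)
world = np.array([4.92, 0.0, 0.0])    # first calibration point
cam = R @ world + tvec                # world frame -> camera frame
uv = (cam_matrix @ cam)[:2] / cam[2]  # pinhole projection, distortion ignored
print(uv)  # compare against the measured pixel [937, 590]
```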

I was expecting the projected 3D points to be close to world_points_real, but they are 2-3 meters off. I have tried with more points, with no improvement. Where is the error coming from?

Reference: 1. SolvePnP in C++

Farshid Rayhan
  • what rvec and tvec do you get from solvePnP? -- world/model points and image points look sensible. you appear to have applied X forward, Y left, Z up for the vehicle's coordinate system. – Christoph Rackwitz Feb 27 '23 at 21:58
  • I get rvecs in radians. Yes, that's the orientation we selected for the vehicle. SolvePnP definition: This function returns the rotation and the translation vectors that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame – Farshid Rayhan Feb 27 '23 at 23:21
  • That street doesn't look flat, and it also doesn't seem horizontal: looking at the building on the left, it seems that the street goes up a bit. – Cris Luengo Feb 27 '23 at 23:48
  • I am expecting a few 100s of cm off due to the ground being not flat but the projection seems to be off by 2.55m avg – Farshid Rayhan Feb 28 '23 at 01:00
  • I have also used an undistorted image and used that image to get new image points. The projection got worse. Off by 5m avg – Farshid Rayhan Feb 28 '23 at 01:01
  • when I ask for "what...", I was expecting you to state the result values you got. do not convert anything because the conversion could be wrong. I did not ask you to state what solvePnP does. -- "a few 100s of cm" is a few meters. that should be unacceptable in any case, and can't be attributed to "noise/error". – Christoph Rackwitz Feb 28 '23 at 01:44
  • Are there any mistakes in `world_points_real`? The 4 red points on the right side seem to be lined up at regular intervals on the ground. But `world_points_real` are not so. – fana Feb 28 '23 at 02:54
  • To the best of my knowledge, the 'world_points_real' is with the least amount of mistakes. They might be off by 1-3 cms at best – Farshid Rayhan Feb 28 '23 at 11:22
  • The distance from the 1st to 2nd and from the 2nd to 3rd is about 1.2, and from the 3rd to 4th is double that. However, I cannot see such a positional relationship on the image... – fana Mar 01 '23 at 01:11
