
I am trying to reproduce this code here to get real-world coordinates in Python, but the results don't coincide.

My code:

import cv2
import numpy as np

uvPoint = np.matrix('222.84; 275.05; 1')
rots = cv2.Rodrigues(_rvecs[18])[0]
rot = np.linalg.inv(rots)
cam = np.linalg.inv(camera_matrix)

leftSideMat = rot * cam * uvPoint
rightSideMat = rot * _tvecs[18]

s = np.true_divide((4.52 + rightSideMat), leftSideMat)  # 4.52 is one value I tried for Zconst

worldPoint = rot * (s * cam * uvPoint - _tvecs[18])

My camera matrix:

array([[613.87755242,   0.        , 359.6984484 ],
       [  0.        , 609.35282925, 242.55955439],
       [  0.        ,   0.        ,   1.        ]])

rotation matrix:

array([[ 0.73824258,  0.03167042,  0.67379142],
       [ 0.13296486,  0.97246553, -0.19139263],
       [-0.66130042,  0.23088477,  0.71370441]])

and translation vector:

array([[-243.00462163],
       [ -95.97464544],
       [ 935.8852482 ]])

I don't know what Zconst is, but whatever I try for the Z constant, I can't even get close to the real-world coordinates, which are (36, 144). What am I doing wrong here?

  • Use the Q matrix obtained from the stereo calibration process to do the 3D reprojection. See https://stackoverflow.com/questions/27374970/q-matrix-for-the-reprojectimageto3d-function-in-opencv – Dr Yuan Shenghai Nov 24 '20 at 15:31
  • I am using a single camera. Would that work for it? – T'Lan Imass Nov 24 '20 at 16:06
  • Where do you get depth? Without depth, any pixel in the image represents a ray/line instead of a 3D point. – Dr Yuan Shenghai Nov 24 '20 at 20:19
  • What I am trying to do is this: I put a chessboard on a surface and get a rotation and translation matrix. This way I can measure planar objects on the same surface. This can be achieved in MATLAB using pointsToWorld, but my project needs it to be done in Python. (A Python equivalent is sketched after these comments.) – T'Lan Imass Nov 25 '20 at 06:28
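
A minimal Python sketch of that pointsToWorld idea, assuming the measured points lie on the chessboard's Z = 0 plane and reusing the camera matrix, rotation matrix, and translation vector posted above (points_to_world is a hypothetical helper, not MATLAB's API). For points on that plane, [u, v, 1]^T ~ K [r1 r2 t] [X, Y, 1]^T, so the plane-to-image homography can be inverted directly:

import numpy as np

# Values copied from the question above
K = np.array([[613.87755242,   0.        , 359.6984484 ],
              [  0.        , 609.35282925, 242.55955439],
              [  0.        ,   0.        ,   1.        ]])
R = np.array([[ 0.73824258,  0.03167042,  0.67379142],
              [ 0.13296486,  0.97246553, -0.19139263],
              [-0.66130042,  0.23088477,  0.71370441]])
t = np.array([-243.00462163, -95.97464544, 935.8852482])

# Homography from the board's Z = 0 plane to the image: H = K [r1 r2 t]
H = K @ np.column_stack((R[:, 0], R[:, 1], t))

def points_to_world(u, v):
    # Invert the homography and dehomogenize to get (X, Y) on the plane
    xyw = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return xyw[:2] / xyw[2]

print(points_to_world(222.84, 275.05))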

1 Answer


Based on your comment, I think what you want is pose estimation with a known camera projection matrix.

You should check out this link for a Python implementation of what you want: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_pose/py_pose.html
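
A minimal sketch of that tutorial's approach, assuming a 9x6 chessboard; board.jpg and dist_coeffs are placeholders (substitute the distortion coefficients from your own calibration), and the camera matrix is the one from the question:

import cv2
import numpy as np

camera_matrix = np.array([[613.87755242, 0., 359.6984484],
                          [0., 609.35282925, 242.55955439],
                          [0., 0., 1.]])
dist_coeffs = np.zeros(5)  # placeholder: use your cv2.calibrateCamera output

# Object points of a 9x6 board lying in its own Z = 0 plane
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

img = cv2.imread('board.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (9, 6))
if found:
    # Pose of the board relative to the camera: rvec (Rodrigues) and tvec
    ok, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)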


Edit

From your code, it seems your construction of the left-side and right-side matrices is correct. But are you sure the input rotation and translation are correct? Can you try to plot a box on the original image, following this tutorial, using that rotation and translation, and check whether the box coincides with the 2D projected pattern in your data?
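
For reference, a sketch of such a check along the lines of the tutorial, assuming the _rvecs[18]/_tvecs[18] pose, camera_matrix, and an img and dist_coeffs from the question's calibration (all names taken from or assumed around the question's code):

import cv2
import numpy as np

# Corners of a 3x3x3 cube sitting on the board plane (negative Z points
# up out of the board in the tutorial's convention)
cube = np.float32([[0, 0, 0], [3, 0, 0], [3, 3, 0], [0, 3, 0],
                   [0, 0, -3], [3, 0, -3], [3, 3, -3], [0, 3, -3]])
imgpts, _ = cv2.projectPoints(cube, _rvecs[18], _tvecs[18],
                              camera_matrix, dist_coeffs)
imgpts = imgpts.reshape(-1, 2).astype(int)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4),
         (0, 4), (1, 5), (2, 6), (3, 7)]
for i, j in edges:
    p1 = tuple(int(v) for v in imgpts[i])
    p2 = tuple(int(v) for v in imgpts[j])
    cv2.line(img, p1, p2, (0, 255, 0), 2)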

If possible, please post the original image that you are using.

Edit

Please check the original post, where the scale s is calculated from a single element, the one at the 3rd position of the output vector:

s = (zConst + rightSideMat.at<double>(2, 0)) / leftSideMat.at<double>(2, 0);

while you are doing it on the whole output vectors:

s = np.true_divide((4.52 + rightSideMat), leftSideMat)

Try to do it with the same single-element operation. By right, the scale s should be a float, not a matrix.
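
In Python, that element-wise version would look like this (a sketch reusing the variables from the question's code; zConst = 4.52 is the value the asker tried):

uvPoint = np.matrix('222.84; 275.05; 1')
rot = np.linalg.inv(cv2.Rodrigues(_rvecs[18])[0])
cam = np.linalg.inv(camera_matrix)

leftSideMat = rot * cam * uvPoint
rightSideMat = rot * _tvecs[18]

zConst = 4.52
# Use only the 3rd element of each vector, so s is a plain float
s = (zConst + rightSideMat[2, 0]) / leftSideMat[2, 0]

worldPoint = rot * (s * cam * uvPoint - _tvecs[18])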

Dr Yuan Shenghai
  • I checked this; in fact I am researching a lot of things. This pose estimation turns real-world points into image points, but not the other way around. I still don't know why there is no code for that. – T'Lan Imass Nov 25 '20 at 07:52
  • Your last edit solved the problem. Thank you very very much. It WAS a simple type problem. – T'Lan Imass Nov 25 '20 at 11:18