I am trying to reproduce this code in Python to get real-world coordinates, but the results don't coincide.
My code:
uvPoint = np.matrix('222.84; 275.05; 1')
rots = cv2.Rodrigues(_rvecs[18])[0]
rot = np.linalg.inv(rots)
cam = np.linalg.inv(camera_matrix)
leftSideMat = rot * cam * uvPoint
rightSideMat = rot * _tvecs[18]
s = np.true_divide((4.52 + rightSideMat), leftSideMat)
rot * (s * cam * uvPoint - _tvecs[18])
My camera matrix:
array([[613.87755242, 0. , 359.6984484 ],
[ 0. , 609.35282925, 242.55955439],
[ 0. , 0. , 1. ]])
Rotation matrix:
array([[ 0.73824258, 0.03167042, 0.67379142],
[ 0.13296486, 0.97246553, -0.19139263],
[-0.66130042, 0.23088477, 0.71370441]])
And the translation vector:
array([[-243.00462163],
[ -95.97464544],
[ 935.8852482 ]])
I don't know what Zconst should be, but whatever value I try for it, I can't even get close to the real-world coordinates, which are (36, 144). What am I doing wrong here?
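For reference, this is my understanding of the scalar-s version of the back-projection, using the matrices printed above. Note that here s is a single number taken from the third components of leftSideMat and rightSideMat only, not an element-wise division of the whole vectors, and Zconst = 0 is only an assumption (the point lying on the calibration plane):

```python
import numpy as np

# Matrices copied from the output above.
K = np.array([[613.87755242, 0., 359.6984484],
              [0., 609.35282925, 242.55955439],
              [0., 0., 1.]])
R = np.array([[0.73824258, 0.03167042, 0.67379142],
              [0.13296486, 0.97246553, -0.19139263],
              [-0.66130042, 0.23088477, 0.71370441]])
t = np.array([[-243.00462163], [-95.97464544], [935.8852482]])
uv = np.array([[222.84], [275.05], [1.]])

z_const = 0.0  # assumption: the point lies on the Z = 0 calibration plane

# s comes from the third row only, so it is a scalar.
left = np.linalg.inv(R) @ np.linalg.inv(K) @ uv
right = np.linalg.inv(R) @ t
s = (z_const + right[2, 0]) / left[2, 0]

# Back-project the pixel into world coordinates.
world = np.linalg.inv(R) @ (s * np.linalg.inv(K) @ uv - t)
print(world)
```

By construction, projecting `world` back through K [R|t] lands on the original pixel, and its Z component equals `z_const`, which is a quick sanity check for the value of s.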