
Currently, I am trying to implement a camera pose tracking system. I have a set of model coordinate points (3D) from the previous frame and the corresponding image coordinates (2D) from the current frame.

I have also set an identity matrix (4x4) as the initial pose of the camera. For each new rvec and tvec calculated, I convert the rvec into a 3x3 rotation matrix using Rodrigues, build a 4x4 homogeneous transformation matrix from it, and multiply it with the previous pose.
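For reference, the accumulation step described above can be sketched as follows. This is a minimal illustration using plain numpy: the `rodrigues` helper below hand-rolls the same axis-angle-to-matrix conversion that OpenCV's `cv2.Rodrigues` performs, and the rvec/tvec values are made-up placeholders, not values from the question.

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (same convention as cv2.Rodrigues)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = (np.asarray(rvec, dtype=float) / theta).reshape(3)
    # Skew-symmetric cross-product matrix of the unit axis k
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues' rotation formula
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def to_homogeneous(rvec, tvec):
    """Build a 4x4 homogeneous transform from solvePnP-style rvec/tvec."""
    T = np.eye(4)
    T[:3, :3] = rodrigues(rvec)
    T[:3, 3] = np.asarray(tvec, dtype=float).reshape(3)
    return T

# Accumulate the pose, starting from identity (placeholder rvec/tvec values)
pose = np.eye(4)
T = to_homogeneous([0.0, 0.0, np.pi / 2], [1.0, 0.0, 0.0])
pose = pose @ T
```

Note that the translation of the accumulated pose lives in the last column, `pose[:3, 3]`, while the first three columns hold the rotation.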

The rotation seems to be working properly, but the translation vector extracted from the matrix changes only when the camera rotates, not when it translates, i.e. it reflects rotation rather than translation.

Could it be that both my model coordinates and image coordinate systems are the same?

Edit: I am attempting to track the camera position using 3D-2D correspondences (Visual Odometry) with an RGBD camera.

Update: Solved the issue. I was taking the wrong column of the transformation matrix.
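This mistake also explains the symptom described above. A sketch of the bug, assuming a numpy 4x4 pose matrix and a made-up translation value: the penultimate column `T[:3, 2]` is the third basis vector of the rotation block, so it moves only when the camera rotates, while the actual translation sits in the last column `T[:3, 3]`.

```python
import numpy as np

T = np.eye(4)
T[:3, 3] = [0.5, -0.2, 1.0]   # translation lives in the LAST column

t_wrong = T[:3, 2]   # penultimate column: z-axis of the rotation block
t_right = T[:3, 3]   # correct translation component
```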

  • no, can't be, since the first one is a 3D space and the second one is a 2D space. What do you mean by "model coordinates of the previous frame"? What do you mean by "initial pose of the camera"? In solvePnP, the model pose is estimated for a static camera (which can afterwards be converted to the inverted scenario). – Micka Oct 02 '19 at 17:52
  • @Micka I am attempting to track the camera position over a period of time, so the initial pose just means the starting position, i.e. identity. I am using correspondences between the previous frame and the present frame to find how much the camera has moved in x amount of time. – Srinath Rao Oct 07 '19 at 03:22
  • make sure that you are converting the solvePnp result to camera transformations instead of object transformations. Can you share your code? – Micka Oct 07 '19 at 04:35
  • 1
    @Micka I have solved the error. Was taking the penultimate column of the transformation matrix by accident .Things seem to be in order now :D Thanks! – Srinath Rao Oct 07 '19 at 09:38

0 Answers