I have a real/physical stick with an IR camera attached to it and some IR LEDs that form a pattern, which I'm using to make a virtual stick move the same way as the physical one.
For that, I'm using OpenCV in Python and sending the rotation and translation vectors calculated by solvePnP to Unity.
I'm struggling to understand how to use the results given by solvePnP in my 3D world.
So far, what I've done is: use solvePnP to get the rotation and translation vectors, then use the rotation vector to rotate my stick in the 3D world:
transform.rotation = Quaternion.Euler(new Vector3(x, z, y));
It seems to work OK when my stick is held at certain angles and I move slowly... but most of the time it jumps all over the place.
Looking for answers online, most people perform several more steps after solvePnP; from what I understand:
- Use Rodrigues to convert the rotation vector to a rotation matrix
- Copy the rotation matrix and translation vector into an extrinsic matrix
- Invert the extrinsic matrix
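From what I gather, those steps would look something like this in NumPy (a sketch; cv2.Rodrigues performs the same conversion, I'm just writing the formula out explicitly):

```python
import numpy as np

def rodrigues_to_matrix(rvec):
    """Convert an axis-angle rotation vector to a 3x3 rotation matrix
    (same result as cv2.Rodrigues(rvec)[0])."""
    rvec = np.asarray(rvec, dtype=np.float64).reshape(3)
    theta = np.linalg.norm(rvec)  # rotation angle in radians
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product (skew) matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def extrinsic_matrix(rvec, tvec):
    """Pack R and t into a 4x4 camera-from-model transform."""
    E = np.eye(4)
    E[:3, :3] = rodrigues_to_matrix(rvec)
    E[:3, 3] = np.asarray(tvec, dtype=np.float64).reshape(3)
    return E

def invert_extrinsic(E):
    """Invert a rigid transform: inv([R|t]) = [R.T | -R.T @ t]."""
    R = E[:3, :3]
    t = E[:3, 3]
    E_inv = np.eye(4)
    E_inv[:3, :3] = R.T
    E_inv[:3, 3] = -R.T @ t
    return E_inv
```

As I understand it, the inversion turns the model-in-camera-coordinates pose into the camera's pose in the model's frame, which is what the OpenGL tutorials want for their view matrix.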
I understand that these steps would be necessary if I were working with matrices, like in OpenGL - but what about Unity3D? Are these extra steps necessary? Or can I use the vectors given by solvePnP directly (which I doubt, since my results so far aren't good)?