There are Google results and Stack Overflow posts that appear to answer this question, but the simple fact is I can't understand them. No matter how much I read, I can't get my head around quaternions, Euler angles, Rodrigues transforms, and all that.
Anyone up for explaining to me, like I'm 12, how to get from the rotation and translation vectors returned by OpenCV's solvePnP() to XYZ position and XYZ rotation values that I can plug into a 3D graphics application?
This is for an augmented reality application in C++. I have the OpenCV part up and running: it tracks a marker board and sends rotation and translation vectors to the graphics program. But I have no idea how to use that information to pose my 3D object.
I really do want to understand the math behind this, and will appreciate any patient explanation of the theory here. But I'll also settle for a pointer to some code I can copy/paste and learn the theory another day. In fact I seem to learn stuff like this faster by seeing concrete code and working backward to the theory.
EDIT: Like there's this... which clearly should point me in the right direction, but it may as well be the plans for a time machine for all I know. It's occurred to me that I may be asking for remedial high-school math here, but I can't be the first person to ask.
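Piecing those posts together, this is as far as I've gotten in C++. The only call I'm fairly confident about is cv::Rodrigues(), which as I understand it turns the 3x1 rotation vector into a 3x3 rotation matrix; the function name poseFromPnP and the 4x4 packing are my own guesswork:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// My attempt at packing solvePnP()'s output into a single 4x4 transform
// (marker coordinates -> camera coordinates). Unverified guesswork.
cv::Mat poseFromPnP(cv::Mat rvec, cv::Mat tvec)
{
    // The XML below shows my vectors arrive as floats; work in doubles.
    rvec.convertTo(rvec, CV_64F);
    tvec.convertTo(tvec, CV_64F);

    // cv::Rodrigues converts the 3x1 rotation vector into a 3x3 rotation matrix.
    cv::Mat R;
    cv::Rodrigues(rvec, R);

    // Put R in the top-left 3x3 block and tvec in the last column.
    cv::Mat pose = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(pose(cv::Rect(0, 0, 3, 3)));
    tvec.reshape(1, 3).copyTo(pose(cv::Rect(3, 0, 1, 3)));
    return pose;
}

If I've read the posts right, this matrix maps points from the marker board's coordinate system into the camera's, so to place a camera relative to the marker I'd apparently need its inverse (transpose R, then negate and rotate t). That's exactly the sort of thing I can't verify on my own yet.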
EDIT: Here's an example of the rotation vector and translation vector returned from solvePnP(), converted into XML for the trip to the graphics application. Note that every one of these I've seen has three rows and one column.
<Tvec type_id="opencv-matrix">
  <rows>3</rows>
  <cols>1</cols>
  <dt>f</dt>
  <data>
    -2.50094433e+01 -6.59909010e+00 1.07882790e+02
  </data>
</Tvec>
<Rvec type_id="opencv-matrix">
  <rows>3</rows>
  <cols>1</cols>
  <dt>f</dt>
  <data>
    -1.92100227e+00 -2.11300254e-01 2.80715879e-02
  </data>
</Rvec>
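And here's my stab at turning the 3x3 rotation matrix from above into the XYZ rotation angles my graphics application wants, adapted from a Euler-angle decomposition I found online. It assumes the R = Rz * Ry * Rx convention, and I honestly don't know whether my graphics application uses the same one:

#include <cmath>
#include <opencv2/core.hpp>

// Decompose a 3x3 rotation matrix (CV_64F) into XYZ Euler angles, in radians,
// assuming R = Rz * Ry * Rx. Adapted from a decomposition I found online;
// the angle convention may not match what the graphics side expects.
cv::Vec3d eulerFromRotation(const cv::Mat& R)
{
    double sy = std::sqrt(R.at<double>(0, 0) * R.at<double>(0, 0)
                        + R.at<double>(1, 0) * R.at<double>(1, 0));
    double x, y, z;
    if (sy > 1e-6) {
        x = std::atan2(R.at<double>(2, 1), R.at<double>(2, 2)); // rotation about X
        y = std::atan2(-R.at<double>(2, 0), sy);                // rotation about Y
        z = std::atan2(R.at<double>(1, 0), R.at<double>(0, 0)); // rotation about Z
    } else {
        // Gimbal lock: Y rotation is near +/-90 degrees, X and Z are coupled.
        x = std::atan2(-R.at<double>(1, 2), R.at<double>(1, 1));
        y = std::atan2(-R.at<double>(2, 0), sy);
        z = 0;
    }
    return cv::Vec3d(x, y, z);
}

The translation side seems simpler: from what I can tell, the Tvec above is already the XYZ position of the marker board's origin in the camera's coordinate frame, in whatever units I measured the board in.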