
I'm using "OpenCV for Unity3d" asset (it's the same OpenCV package for Java but translated to C# for Unity3d) in order to create an Augmented Reality application for my MSc Thesis (Computer Science).

So far, I'm able to detect an object in the video frames using the ORB feature detector, and I can find the 3D-to-2D relation using OpenCV's solvePnP method (I did the camera calibration as well). From that method I get the translation and rotation vectors. The problem occurs at the augmentation stage, where I have to show a 3D model as a virtual object and update its position and rotation at each frame. OpenCV gives me a Rodrigues rotation vector that I convert to a rotation matrix, but Unity3d works with quaternion rotations, so I'm updating the object's position and rotation wrongly and I can't figure out how to implement the conversion formula (from Rodrigues to quaternion).

Getting the rvec and tvec:

    Mat rvec = new Mat();
    Mat tvec = new Mat();
    Mat rotationMatrix = new Mat ();

    // solvePnP gives the rotation as a Rodrigues vector; convert it to a 3x3 matrix
    Calib3d.solvePnP (object_world_corners, scene_flat_corners, CalibrationMatrix, DistortionCoefficientsMatrix, rvec, tvec);
    Calib3d.Rodrigues (rvec, rotationMatrix);

Updating the position of the virtual object:

    Vector3 objPosition = new Vector3 ();
    objPosition.x = (model.transform.position.x + (float)tvec.get (0, 0)[0]);
    objPosition.y = (model.transform.position.y + (float)tvec.get (1, 0)[0]);
    objPosition.z = (model.transform.position.z - (float)tvec.get (2, 0)[0]); // Z inverted, see below
    model.transform.position = objPosition;

I have a minus sign on the Z axis because when you convert from OpenCV's coordinate system to Unity3d's you must invert the Z axis (I checked the coordinate systems myself).

Unity3d's coordinate system (green is Y, red is X, and blue is Z):

(image)

OpenCV's coordinate system:

(image)

In addition, I did the same thing with the rotation matrix to update the virtual object's rotation.

P.S. I found a similar question, but the person who asked it did not post the solution clearly.

Thanks!


1 Answer


You have your 3x3 rotation matrix right after cv::solvePnP followed by cv::Rodrigues. Since it is a rotation matrix, it is orthogonal and its columns are unit vectors. Thus, the columns of that matrix are, in order from left to right:

  1. Right vector (on X axis);
  2. Up vector (on Y axis);
  3. Forward vector (on Z axis).

OpenCV uses a right-handed coordinate system. Sitting behind the camera looking along the optical axis, the X axis goes right, the Y axis goes down, and the Z axis goes forward.

You pass the forward vector F = (fx, fy, fz) and the up vector U = (ux, uy, uz) to Unity; these are the third and second columns respectively. There is no need to normalize them: they are normalized already.

In Unity, you build your quaternion like this:

    Vector3 f; // forward vector (third column), from OpenCV
    Vector3 u; // up vector (second column), from OpenCV

    // notice that the Y coordinates are inverted to pass from OpenCV's
    // right-handed coordinate system to Unity's left-handed one
    Quaternion rot = Quaternion.LookRotation(new Vector3(f.x, -f.y, f.z), new Vector3(u.x, -u.y, u.z));
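
For completeness, a minimal sketch of extracting those two columns, assuming the rotationMatrix and model objects from the question and OpenCV for Unity's Mat.get accessor (the same one used there):

    // columns of the 3x3 rotation matrix: 0 = right, 1 = up, 2 = forward
    Vector3 f = new Vector3 (
        (float)rotationMatrix.get (0, 2)[0],
        (float)rotationMatrix.get (1, 2)[0],
        (float)rotationMatrix.get (2, 2)[0]); // forward vector (third column)
    Vector3 u = new Vector3 (
        (float)rotationMatrix.get (0, 1)[0],
        (float)rotationMatrix.get (1, 1)[0],
        (float)rotationMatrix.get (2, 1)[0]); // up vector (second column)

    // invert Y to go from OpenCV's right-handed frame to Unity's left-handed one
    model.transform.rotation = Quaternion.LookRotation (
        new Vector3 (f.x, -f.y, f.z),
        new Vector3 (u.x, -u.y, u.z));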

And that is pretty much it. Hope this helps!

EDITED FOR POSITION RELATED COMMENTS

NOTE: the Z axis in OpenCV is the camera's optical axis, which passes through the image near its center but, in general, not exactly at the center. Among your calibration parameters there are Cx and Cy. Combined, these give the 2D offset in image space from the image center to the point where the Z axis pierces the image. That shift must be taken into account to map 3D content exactly over the 2D background.

To get proper positioning in Unity:

    // STEP 1 : fetch position from OpenCV + basic transformation
    Vector3 pos; // from OpenCV (tvec)
    pos = new Vector3(pos.x, -pos.y, pos.z); // right-handed coordinate system (OpenCV) to left-handed one (Unity)

    // STEP 2 : set the virtual camera's frustum (Unity) to match the physical camera's parameters
    Vector2 fparams; // from OpenCV (calibration parameters Fx and Fy = focal lengths in pixels)
    Vector2 resolution; // image resolution from OpenCV
    float vfov = 2.0f * Mathf.Atan(0.5f * resolution.y / fparams.y) * Mathf.Rad2Deg; // virtual camera (pinhole type) vertical field of view
    Camera cam; // TODO get reference one way or another
    cam.fieldOfView = vfov;
    cam.aspect = resolution.x / resolution.y; // you could set a viewport rect with the proper aspect as well... I would prefer the viewport approach

    // STEP 3 : shift the position to compensate for the physical camera's optical axis not going exactly through the image center
    Vector2 cparams; // from OpenCV (calibration parameters Cx and Cy = optical center shifts from the image center, in pixels)
                     // note that OpenCV's camera matrix stores the principal point itself, so these shifts are cx - width/2 and cy - height/2
    Vector3 imageCenter = new Vector3(0.5f, 0.5f, pos.z); // in viewport coordinates
    Vector3 opticalCenter = new Vector3(0.5f + cparams.x / resolution.x, 0.5f + cparams.y / resolution.y, pos.z); // in viewport coordinates
    pos += cam.ViewportToWorldPoint(imageCenter) - cam.ViewportToWorldPoint(opticalCenter); // position is now set as if the physical camera's optical axis went exactly through the image center

You put the images retrieved from the physical camera right in front of the virtual camera, centered on its forward axis (scaled to fit the frustum), and then you have proper 3D positions mapped over the 2D background!
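
As a sketch of that background setup (backgroundQuad is a hypothetical Transform holding a 1x1 quad textured with the camera image; cam is the virtual camera configured above): at distance d the frustum is 2 * d * tan(vfov / 2) high, so the quad can be placed and scaled like this:

    Transform backgroundQuad; // hypothetical 1x1 quad showing the camera image

    // place the quad just inside the far clip plane, centered on the forward axis
    float d = cam.farClipPlane * 0.95f;
    backgroundQuad.position = cam.transform.position + cam.transform.forward * d;
    backgroundQuad.rotation = cam.transform.rotation;

    // scale it to exactly fill the frustum at that distance
    float h = 2.0f * d * Mathf.Tan(0.5f * cam.fieldOfView * Mathf.Deg2Rad);
    backgroundQuad.localScale = new Vector3(h * cam.aspect, h, 1.0f);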

  • Now it all makes sense :) By the way, I think my translation vector is wrong. The 3D object moves wrongly in the scene! What am I doing wrong there? – techguy18985 Apr 12 '16 at 18:35
  • For position, invert the Y coordinate as you must for the orientation vectors. One really important thing to note though: the Z axis is the camera's optical axis, which does not go through the image center in general. Among your calibration parameters there are Cx and Cy. Combined, these give the 2D offset in image space from the image center to where the camera's optical axis (Z axis) pierces the image. Really important for mapping 3D content over a 2D background in AR applications! – RCYR Apr 12 '16 at 21:16
  • So what modifications must be made to get this working correctly? After camera calibration I got the camera's intrinsic parameters. How do I use them to position the virtual object correctly? – techguy18985 Apr 12 '16 at 21:31
  • I really appreciate your help and the time you spent explaining those things to me! One last question :) : what unit of measurement does Unity3D use? My 3D coordinates are initialized in cm (centimeters). – techguy18985 Apr 12 '16 at 23:14
  • 1 unitless unit in Unity is generally interpreted as 1 meter in games. You can scale the position as you need. – RCYR Apr 12 '16 at 23:21
  • By the way, the frustum is too small to fit the background. Also, where do I have to position my 2D background? I can't tell whether the cube's (the virtual object's) 3D position is right or not. Here are two links with example images from my Unity project: [Picture 1](http://s24.postimg.org/lmsbrd34l/example1.png) and [Picture 2](http://s24.postimg.org/jjhwjp3bp/example2.png) – techguy18985 Apr 14 '16 at 16:06
  • A FOV of 0.5417 is way too small. You probably have it in radians, but you need it in degrees. Additionally, resize the background quad to perfectly fit the frustum at some given distance, most likely near the far clip plane. – RCYR Apr 14 '16 at 18:46
  • Is it possible to move the camera together with the 2D background (quad) instead of the virtual object (cube)? How can I achieve this (I think it is possible)? – techguy18985 May 18 '16 at 10:06
  • Is there any reason why you move the camera instead of setting the Lens Shift properties of the camera object? – vwvw Dec 20 '19 at 10:27
  • There was no lens shift property on cameras back then. – RCYR Dec 20 '19 at 22:31