
I'm trying to map my OpenNI (1.5.4.0) Kinect for Windows depth map to an OpenCV RGB image.

I have the depth map at 640x480 with depth in mm and was trying to do the mapping like Burrus: http://burrus.name/index.php/Research/KinectCalibration

I skipped the distortion part, but otherwise I think I did everything:

//With the depth (IR) camera intrinsics fx_ir, fy_ir, cx_ir and cy_ir, each pixel
//(x,y) of the depth map can be projected to metric 3D space:

P3D.at<Vec3f>(y,x)[0] = (x - cx_ir) * depth/fx_ir;
P3D.at<Vec3f>(y,x)[1] = (y - cy_ir) * depth/fy_ir;
P3D.at<Vec3f>(y,x)[2] = depth;


//P3D' = R.P3D + T:
RTMat = (Mat_<float>(4,4) << 0.999388,   -0.00796202, -0.0480646,  -3.96963,
                             0.00612322,  0.9993536,   0.0337474, -22.8512,
                             0.0244427,  -0.03635059,  0.999173,  -15.6307,
                             0,           0,           0,           1);

perspectiveTransform(P3D, P3DS, RTMat);

//reproject each 3D point on the color image and get its color:  
depth = P3DS.at<Vec3f>(y,x)[2];
x_rgb = (P3DS.at<Vec3f>(y,x)[0] * fx_rgb / depth) + cx_rgb;
y_rgb = (P3DS.at<Vec3f>(y,x)[1] * fy_rgb / depth) + cy_rgb;
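
Putting the snippets above together, my per-pixel mapping looks roughly like this (just a sketch: the function name, the zero-depth guard and the CV_32FC2 return format are only for illustration, and the intrinsics are my estimated values):

#include "opencv2/core/core.hpp" //perspectiveTransform lives in the core module
using namespace cv;

// Sketch of how I combine the steps above into one function.
// depthMm: CV_16UC1 depth map in mm; RTMat: the 4x4 extrinsic matrix above.
// Returns a CV_32FC2 map holding the (x_rgb, y_rgb) coordinate of each depth pixel.
Mat mapDepthToRgb(const Mat& depthMm,
                  float fx_ir, float fy_ir, float cx_ir, float cy_ir,
                  float fx_rgb, float fy_rgb, float cx_rgb, float cy_rgb,
                  const Mat& RTMat)
{
    //back-project each depth pixel to metric 3D space (IR camera frame)
    Mat P3D(depthMm.size(), CV_32FC3);
    for(int y = 0; y < depthMm.rows; y++){
        for(int x = 0; x < depthMm.cols; x++){
            float depth = depthMm.at<unsigned short>(y,x);
            P3D.at<Vec3f>(y,x)[0] = (x - cx_ir) * depth / fx_ir;
            P3D.at<Vec3f>(y,x)[1] = (y - cy_ir) * depth / fy_ir;
            P3D.at<Vec3f>(y,x)[2] = depth;
        }
    }

    //P3D' = R.P3D + T
    Mat P3DS;
    perspectiveTransform(P3D, P3DS, RTMat);

    //reproject each 3D point into the RGB image
    Mat rgbCoords(depthMm.size(), CV_32FC2);
    for(int y = 0; y < depthMm.rows; y++){
        for(int x = 0; x < depthMm.cols; x++){
            float depth = P3DS.at<Vec3f>(y,x)[2];
            if(depth <= 0){ //no depth reading for this pixel
                rgbCoords.at<Vec2f>(y,x) = Vec2f(-1.f,-1.f);
                continue;
            }
            rgbCoords.at<Vec2f>(y,x)[0] = (P3DS.at<Vec3f>(y,x)[0] * fx_rgb / depth) + cx_rgb;
            rgbCoords.at<Vec2f>(y,x)[1] = (P3DS.at<Vec3f>(y,x)[1] * fy_rgb / depth) + cy_rgb;
        }
    }
    return rgbCoords;
}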

But with my estimated calibration values for the RGB camera and the IR camera of the Kinect, the result is off in every direction, and it cannot be fixed by changing only the extrinsic T parameters.

I have a few suspicions:

  • Does OpenNI already map the IR depth map to the RGB camera of the Kinect? (See the check I sketched below this list.)
  • Should I use depth in meters and/or transform the pixels into mm? (I tried multiplying by pixel_size * 0.001 but got the same results.)
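
For the first suspicion, I suppose I can ask OpenNI directly whether the depth generator's view point is already registered to the image generator (a sketch, assuming an initialized context with existing depth and image nodes; depthIsRegisteredToRgb is just an illustrative helper name):

#include <XnCppWrapper.h>

// Sketch: query whether OpenNI already registers the depth map to the RGB camera.
// Assumes 'context' is an initialized xn::Context containing depth and image nodes.
bool depthIsRegisteredToRgb(xn::Context& context)
{
    xn::DepthGenerator depth;
    xn::ImageGenerator image;
    context.FindExistingNode(XN_NODE_TYPE_DEPTH, depth);
    context.FindExistingNode(XN_NODE_TYPE_IMAGE, image);

    if(!depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT))
        return false; //registration cannot even be queried on this node

    return depth.GetAlternativeViewPointCap().IsViewPointAs(image) != 0;
}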

I really hope someone can help me. Thanks in advance.


1 Answer


AFAIK OpenNI does its own registration (factory setting) and you can toggle registration as well. If you've built OpenCV with OpenNI support, it's as simple as this:

capture.set(CV_CAP_PROP_OPENNI_REGISTRATION,1);

As explained here, and there's a minimal OpenNI/OpenCV example here. A minimal working sample would look like this:

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>

using namespace cv;
using namespace std;

int main(){
    VideoCapture capture;
    capture.open(CV_CAP_OPENNI);

    if( !capture.isOpened() ){
        cout << "Can not open a capture object." << endl;
        return -1;
    }
    //turn on depth-to-RGB registration if it isn't already enabled
    if( capture.get( CV_CAP_PROP_OPENNI_REGISTRATION ) == 0 ) capture.set( CV_CAP_PROP_OPENNI_REGISTRATION, 1 );
    cout << "ready" << endl;

    for(;;){
        Mat depthMap,depthShow;
        if( !capture.grab() ){
            cout << "Can not grab images." << endl;
            return -1;
        }else{
            if( capture.retrieve( depthMap, CV_CAP_OPENNI_DEPTH_MAP ) ){
                const float scaleFactor = 0.05f;
                depthMap.convertTo( depthShow, CV_8UC1, scaleFactor );
                imshow("depth",depthShow);
            }
        }
        if( waitKey( 30 ) == 27 )    break;//esc to exit
    }

}

If you don't have OpenCV built with OpenNI support, you should be able to use GetAlternativeViewPointCap().
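
Something along these lines should do it in plain OpenNI (a sketch, assuming you already have an xn::DepthGenerator depth and an xn::ImageGenerator image from your context; I haven't compiled this exact snippet):

//map the depth view point onto the RGB camera (the image node, not the IR node)
if(depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT))
{
    XnStatus status = depth.GetAlternativeViewPointCap().SetViewPoint(image);
    if(status != XN_STATUS_OK)
        std::cout << "SetViewPoint failed: " << xnGetStatusString(status) << std::endl;
}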

  • Hi, thanks for your answer. It looks like the depth map is automatically mapped to the RGB Kinect camera, which, if I understood you correctly, is what you were saying too. I'm not using OpenCV to get my depth map, and I have tried to alter it with GetAlternativeViewPointCap() (it is supported), but when I try to set it to the IR camera I get: "The value is invalid!" Code: IRGenerator ir; context.FindExistingNode(XN_NODE_TYPE_IR, ir); depth.GetAlternativeViewPointCap().SetViewPoint(ir); – ddd Jun 27 '13 at 10:18
  • Normally you'd use the depth map and the RGB map. How do you plan on using the IR map? – George Profenza Jun 27 '13 at 10:25
  • No, I use the depth map, but since the depth map is filmed with the IR camera, I thought I'd use the IR camera intrinsics and extrinsic parameters to map the depth map onto the RGB image of my second, non-Kinect camera. – ddd Jun 27 '13 at 14:08
  • If you want to use a second, non-Kinect RGB camera, yes, you would need to do manual calibration. Have a look at [RGBDemo](http://labs.manctl.com/rgbdemo/index.php/Documentation/TutorialProjectorKinectCalibration) or better yet [RGBDToolkit](http://www.rgbdtoolkit.com). From your original question I couldn't work out that you are using another camera alongside your Kinect. – George Profenza Jun 27 '13 at 14:47
  • Thanks for your answer. Although that wasn't the problem for me, it is now resolved (an error in the calibration method, the principal point, made the mapping so far off along the x-axis). – ddd Jun 28 '13 at 15:20