19

I'm working with the Kinect sensor and I'm trying to align the depth and color frames so that I can save them as images which "fit" into each other. I've spent a lot of time going through MSDN forums and the modest documentation of the Kinect SDK, and I'm getting absolutely nowhere.

Based on this answer: Kinect: Converting from RGB Coordinates to Depth Coordinates

I have the following function, where depthData and colorData are obtained from NUI_LOCKED_RECT.pBits and mappedData is the output containing the new color frame, mapped to depth coordinates:

bool mapColorFrameToDepthFrame(unsigned char *depthData, unsigned char* colorData, unsigned char* mappedData)
{
    INuiCoordinateMapper* coordMapper;

    // Get coordinate mapper
    m_pSensor->NuiGetCoordinateMapper(&coordMapper);

    NUI_DEPTH_IMAGE_POINT* depthPoints = new NUI_DEPTH_IMAGE_POINT[640 * 480];

    HRESULT result = coordMapper->MapColorFrameToDepthFrame(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480, NUI_IMAGE_RESOLUTION_640x480, 640 * 480, reinterpret_cast<NUI_DEPTH_IMAGE_PIXEL*>(depthData), 640 * 480, depthPoints);
    if (FAILED(result))
    {
        return false;
    }    

    int pos = 0;
    int* colorRun = reinterpret_cast<int*>(colorData);
    int* mappedRun = reinterpret_cast<int*>(mappedData);

    // For each pixel of new color frame
    for (int i = 0; i < 640 * 480; ++i)
    {
        // Find the corresponding pixel in original color frame from depthPoints
        pos = (depthPoints[i].y * 640) + depthPoints[i].x;

        // Set pixel value if it's within frame boundaries
        if (pos < 640 * 480)
        {
            mappedRun[i] = colorRun[pos];
        }
    }

    return true;
}

All I get when running this code is an unchanged color frame, with all the pixels where the depth frame had no information removed (shown as white).

jaho
  • Have you checked out the Green Screen example in the Kinect for Windows coding examples? http://kinectforwindows.codeplex.com/. It aligns color and depth. – Nicholas Pappas Apr 15 '13 at 13:44
  • Yes I have. It doesn't use the new `INuiCoordinateMapper`, but an older method `INuiSensor::NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution`. I've tried it and it doesn't work for me either (I get all white image). Somehow the array of depth values they get is USHORT (16 bit) and mine is 32 bit, with the possible reason being that I initialize my Kinect sensor with different parameters (depth only no player index). Even if I create an array of 16 bit depth values from the 32 bit one the function doesn't work for me. – jaho Apr 15 '13 at 14:30
  • A similar thing was solved here: http://stackoverflow.com/a/19905805/2521214. The Kinect SDK has functions for aligning the images, but they did not work for me at all (I have a very old version of the Kinect), so I did it myself ... in that link is my Kinect calibration data; for yours, you have to measure it yourself. – Spektre Feb 26 '14 at 08:19
  • @Spektre It is not the same thing, as there the views are taken by the same camera. Mapping RGB to depth can't be done precisely, as the images are taken from a different viewpoints and thus may not even see the same thing (imagine a sheet of paper held between the cameras - each camera will see the other side of the paper and will be unable to align with the other view, no matter what). This is solved for objects "far from the cameras" by camera calibration and reprojection, not an easy problem (but fun to solve). I'd recommend using a function from a SDK (mentioned in posts below). – the swine May 16 '14 at 19:11
  • 1
    Did you find a good answer for this? – Huá dé ní 華得尼 May 03 '15 at 03:34
  • Did you find any working solution? Maybe some guidance? – ThunderWiring Sep 29 '16 at 13:03

5 Answers

2

With the OpenNI framework there is an option called registration.

IMAGE_REGISTRATION_DEPTH_TO_IMAGE – The depth image is transformed to have the same apparent vantage point as the RGB image.

OpenNI 2.0 and NiTE 2.0 work very well for capturing Kinect information, and there are a lot of tutorials.

You can have a look at this:

Kinect with OpenNI

And OpenNI has an example, SimpleViewer, that merges depth and color; maybe you can just look at that and try it.

Vuwox
0

This might not be the quick answer you're hoping for, but this transformation is done successfully within the ofxKinectNui addon for openFrameworks (see here).

It looks like ofxKinectNui delegates to the GetColorPixelCoordinatesFromDepthPixel function defined here.

Tim MB
0

I think the problem is that you're calling MapColorFrameToDepthFrame, when you should actually call MapDepthFrameToColorFrame.

The smoking gun is this line of code:

mappedRun[i] = colorRun[pos];

Reading from pos and writing to i is backwards, since pos = depthPoints[i] gives the depth coordinates corresponding to the color coordinates at i. You actually want to iterate over all depth coordinates, writing each one after reading from the input color image at the corresponding color coordinates.

Chris Culter
0

I think there are several incorrect lines in your code.

First of all, which kind of depth map are you passing to your function?

Depth data is stored using two bytes per value, which means the correct pointer type for your depth data is unsigned short.

The second point is that, from what I have understood, you want to map the depth frame to the color frame, so the correct function to call from the Kinect SDK is MapDepthFrameToColorFrame instead of MapColorFrameToDepthFrame.

Finally, the function returns a map of points where, for each depth value at position [i], you have the x and y position where that point should be mapped.
For this you don't need the colorData pointer at all.

So your function should be modified as follow:

/** Method used to build a depth map aligned to the color frame
    @param [in]  depthData    : pointer to your depth data;
    @param [out] mappedData   : pointer to your aligned depth map;
    @return true if all is ok : false when something is wrong
*/

bool DeviceManager::mapColorFrameToDepthFrame(unsigned short *depthData, unsigned short* mappedData){
    INuiCoordinateMapper* coordMapper;
    NUI_COLOR_IMAGE_POINT* colorPoints = new NUI_COLOR_IMAGE_POINT[640 * 480]; // color points
    NUI_DEPTH_IMAGE_PIXEL* depthPoints = new NUI_DEPTH_IMAGE_PIXEL[640 * 480]; // depth pixels

    /** BE SURE THAT YOU ARE WORKING WITH THE RIGHT HEIGHT AND WIDTH */
    unsigned long refWidth = 0;
    unsigned long refHeight = 0;
    NuiImageResolutionToSize( NUI_IMAGE_RESOLUTION_640x480, refWidth, refHeight );
    int width  = static_cast<int>( refWidth  ); // get the image width the right way
    int height = static_cast<int>( refHeight ); // get the image height the right way

    // The mapper expects NUI_DEPTH_IMAGE_PIXEL, so wrap the raw depth values
    // (this assumes depthData holds plain depth values; if yours still packs
    // the player index into the low bits, shift it out first)
    for (int i = 0; i < width * height; i++)
    {
        depthPoints[i].depth = depthData[i];
        depthPoints[i].playerIndex = 0;
    }

    m_pSensor->NuiGetCoordinateMapper(&coordMapper); // get the coord mapper
    // Map your frame;
    HRESULT result = coordMapper->MapDepthFrameToColorFrame( NUI_IMAGE_RESOLUTION_640x480, width * height, depthPoints, NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480, width * height, colorPoints );
    if (FAILED(result))
    {
        delete[] colorPoints;
        delete[] depthPoints;
        return false;
    }

    // apply the map in terms of x and y (image coordinates);
    for (int i = 0; i < width * height; i++)
        if (colorPoints[i].x >= 0 && colorPoints[i].x < width && colorPoints[i].y >= 0 && colorPoints[i].y < height)
            *(mappedData + colorPoints[i].x + colorPoints[i].y * width) = *(depthData + i);

    // free your memory!!!
    delete[] colorPoints;
    delete[] depthPoints;
    return true;
}



Make sure that your mappedData has been initialized correctly, for example as follows.

mappedData = (USHORT*)calloc(width * height, sizeof(USHORT));


Remember that the Kinect SDK does not provide an accurate alignment function between color and depth data.

If you want an accurate alignment between the two images you should use a calibration model. In that case I suggest you use the Kinect Calibration Toolbox, based on the Heikkilä calibration model.

You can find it at the following link:
http://www.ee.oulu.fi/~dherrera/kinect/.

dinoiama
-1

First of all, you must calibrate your device. That means you should calibrate both the RGB and the IR sensor, and then find the transformation between them. Once you know this information, you can apply the function:

RGBPoint = RotationMatrix * DepthPoint + TranslationVector

Check OpenCV or ROS projects for further details on it.

Extrinsic Calibration

Intrinsic Calibration

madduci
  • Where can I find the information regarding the transformation? – ThunderWiring Sep 29 '16 at 09:33
  • @ThunderWiring you can find more details here: http://wiki.ros.org/openni_launch/Tutorials/IntrinsicCalibration and here: http://wiki.ros.org/openni_launch/Tutorials/ExtrinsicCalibration – madduci Sep 29 '16 at 12:53