
I'm creating a vision algorithm that is implemented in a Simulink S-function (which is C++ code). I have accomplished everything I wanted except the alignment of the color and depth images.

My question is: how can I make the two images correspond to each other? In other words, how can I make a 3D image with OpenCV?

I know my question might be a little vague, so I will include my code, which should explain it:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    // read in the color and depth image
    Mat color = imread("whitepaint_col.PNG", CV_LOAD_IMAGE_UNCHANGED);
    Mat depth = imread("whitepaint_dep.PNG", CV_LOAD_IMAGE_UNCHANGED);

    // show both the color and the depth image
    namedWindow("color", CV_WINDOW_AUTOSIZE);
    imshow("color", color);
    namedWindow("depth", CV_WINDOW_AUTOSIZE);
    imshow("depth", depth);

    // threshold the color image for the color white
    Mat onlywhite;
    inRange(color, Scalar(200, 200, 200), Scalar(255, 255, 255), onlywhite);

    // display the mask
    namedWindow("onlywhite", CV_WINDOW_AUTOSIZE);
    imshow("onlywhite", onlywhite);

    // apply the mask to the depth image
    Mat nocalibration;
    depth.copyTo(nocalibration, onlywhite);

    // show the result
    namedWindow("nocalibration", CV_WINDOW_AUTOSIZE);
    imshow("nocalibration", nocalibration);

    waitKey(0);
    destroyAllWindows();
    return 0;
}

Output of the program:

(screenshot: the masked depth image produced by the code above)

As can be seen in the output of my program, when I apply the onlywhite mask to the depth image, the quadcopter body does not come out as a single color. The reason for this is that there is a mismatch between the two images.

I know that I need the calibration parameters of my camera, and I got these from the last person who worked with this setup. They did the calibration in Matlab, and this resulted in the following.

Matlab calibration results:

https://i.stack.imgur.com/JwFi5.png

I have spent a lot of time reading the OpenCV page about Camera Calibration and 3D Reconstruction (I cannot include the link because of my Stack Exchange reputation level).

But I cannot for the life of me figure out how to accomplish my goal of adding the correct depth value to each colored pixel.

I tried using reprojectImageTo3D(), but I cannot figure out the Q matrix. I also tried a lot of other functions from that page, but I cannot seem to get my inputs correct.

W.laarakkers
  • If you are writing your own Simulink block, you can try doing it in MATLAB using the MATLAB Function Block. – Dima Nov 27 '15 at 01:51

2 Answers


As far as I know, Matlab has very good support for Kinect (especially for v1). You may use a function named alignColorToDepth, as follows:

[alignedFlippedImage,flippedDepthImage] = alignColorToDepth(depthImage,colorImage,depthDevice)

The returned values are alignedFlippedImage (the registered RGB image) and flippedDepthImage (the registered depth image). These two images are aligned and ready for you to process.

You can find more at this MathWorks documentation page.

Hope it's what you need :)

rhcpfan

As far as I can tell, you are missing the transformation between camera coordinate frames. The Kinect (v1 and v2) uses two separate camera systems to capture the depth and RGB data, and so there is a translation and rotation between them. You may be able to assume no rotation, but you will have to account for the translation to fix the misalignment you are seeing.

Try starting with this thread.

Brian Lynch
  • Yes, I am aware of the missing translation; that is why I asked the question. The thread you linked suggests using the Kinect API, which I would rather not do, since as far as I am aware it cannot work with Mat files. Also, since my inputs are Mat files, I would not be able to use the example code from that thread. – W.laarakkers Oct 16 '15 at 11:16
  • Are you using the Kinect v2? If so, you can probably use the [stereoCalibrate](http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findchessboardcorners) function in OpenCV since a checkerboard would show up in RGB and IR. If you are using the Kinect v1 then you might need to set up a target that you can pinpoint in both frames to compute the transformation. Alternatively, you could just manually adjust the offset until it looks satisfactory. – Brian Lynch Oct 16 '15 at 11:55
  • I'm using the Kinect v1. Also, manually adjusting the offset is not an option, because the offset depends on the height. Since I have the depth image, I would expect there to be a way to calculate a transformation, dependent on the depth value, that maps each pixel to the correct pixel in the RGB image. – W.laarakkers Oct 16 '15 at 12:09
  • A checkerboard would be visible in the kinect v1 IR frame as well, but it would be distorted by the projected pattern. This can be fixed by covering the IR projector when capturing the images. It requires very good ambient lighting though. – Hannes Ovrén Oct 16 '15 at 14:46