
I have collected data using a Kinect v2 sensor, so I have a depth map together with its corresponding RGB image. I also calibrated the sensor and obtained the rotation and translation matrices between the depth camera and the RGB camera.

So I was able to reproject the depth values on the RGB image and they match. However, since the RGB image and the depth image are of different resolutions, there are a lot of holes in the resulting image.

So I am now trying to go the other way, i.e. mapping the color onto the depth instead of the depth onto the color.

The first problem I am having is that the RGB image has 3 channels. I have been converting the RGB image to grayscale to do the mapping, and I am not getting correct results.
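For reference, the grayscale conversion should not be necessary: each depth pixel can be back-projected to 3-D, transformed into the RGB camera frame, projected into the color image, and then all three channels sampled at that location. A minimal sketch, assuming `Zimg` is the depth map in metric units, `K_d` and `K_rgb` are the intrinsic matrices from calibration, and `R`, `t` are the depth-to-RGB extrinsics (all of these names are assumptions, not from the original post):

```matlab
% Map RGB color onto the depth grid, one channel at a time.
[h, w] = size(Zimg);
[u, v] = meshgrid(1:w, 1:h);

% Back-project every depth pixel to a 3-D point in the depth camera frame
X = (u(:)' - K_d(1,3)) .* Zimg(:)' / K_d(1,1);
Y = (v(:)' - K_d(2,3)) .* Zimg(:)' / K_d(2,2);
P = [X; Y; Zimg(:)'];

% Transform into the RGB camera frame and project with the RGB intrinsics
P_rgb = R * P + t;
u_rgb = round(K_rgb(1,1) * P_rgb(1,:) ./ P_rgb(3,:) + K_rgb(1,3));
v_rgb = round(K_rgb(2,2) * P_rgb(2,:) ./ P_rgb(3,:) + K_rgb(2,3));

% Keep only projections that land inside the RGB image
valid = u_rgb >= 1 & u_rgb <= size(RGB,2) & ...
        v_rgb >= 1 & v_rgb <= size(RGB,1) & Zimg(:)' > 0;

% Sample all three color channels at the projected coordinates
mapped = zeros(h, w, 3, 'like', RGB);
for c = 1:3
    chan = RGB(:,:,c);
    tmp = zeros(h*w, 1, 'like', RGB);
    tmp(valid) = chan(sub2ind(size(chan), v_rgb(valid), u_rgb(valid)));
    mapped(:,:,c) = reshape(tmp, h, w);
end
```

Because each depth pixel pulls its own color from the (higher-resolution) RGB image, this direction produces no holes, unlike splatting depth values onto the RGB grid.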

Can this be done?

Has anyone tried this before?

Dima
Ali P

2 Answers


Why can't you fit the Z-depth to the RGB?

Fitting the low-res image to the high-res one should be easy, as long as both cover the same field of view (i.e. the corners of both images correspond to the same scene points).

It should be as easy as:

Z_interp = imresize(Zimg, [size(RGB,1) size(RGB,2)]);

Now Z_interp has the same number of pixels as RGB.


If you still want to do it the other way around, well, use the same approach:

RGB_interp = imresize(RGB, [size(Zimg,1) size(Zimg,2)]);
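Putting the two together, a quick visual check of the alignment might look like this (a sketch, assuming `Zimg` and `RGB` are already loaded and `imshowpair` from the Image Processing Toolbox is available):

```matlab
% Upsample the depth map to the RGB resolution, then blend the two
% images to see whether edges in depth line up with edges in color.
Z_interp = imresize(Zimg, [size(RGB,1) size(RGB,2)]);
imshowpair(mat2gray(Z_interp), RGB, 'blend');
```

Note that `imresize` only interpolates; it assumes the two images are already registered, so any misalignment from the extrinsics will still be visible in the blend.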
Ander Biguri

The Image Acquisition Toolbox now officially supports Kinect v2 for Windows. You can get a point cloud out of the Kinect using the pcfromkinect function in the Computer Vision System Toolbox.
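A minimal sketch of that workflow, assuming both toolboxes are installed and the Kinect v2 adaptor is registered (device indices 1 and 2 for color and depth are the usual convention, but may differ on your setup):

```matlab
% Grab one color frame and one depth frame from the Kinect v2,
% then build a colored point cloud that fuses the two streams.
colorDevice = imaq.VideoDevice('kinect', 1);
depthDevice = imaq.VideoDevice('kinect', 2);

colorImage = step(colorDevice);
depthImage = step(depthDevice);

% pcfromkinect handles the depth-to-color registration internally
ptCloud = pcfromkinect(depthDevice, depthImage, colorImage);
pcshow(ptCloud);

release(colorDevice);
release(depthDevice);
```

This sidesteps the manual calibration entirely, since pcfromkinect uses the device's own depth-to-color mapping.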

Dima