I would like to map the 3D skeleton from camera space back to the 2D color image in Kinect without using the coordinate mapper. Do I need to transform from the depth camera to the color camera?
In my opinion, the skeleton joints are obtained from the depth image. Thus, in order to get the joint positions in the color image, I should go through these 3 steps:
1) The 3D joints are in the depth camera's 3D space, so I need to transform them into the color camera's 3D space (rotation/translation). I don't know how to get this transformation matrix!
2) Find the color camera's intrinsic parameters (using the MATLAB calibration toolbox) to project the 3D points back to 2D.
3) Apply the distortion coefficients.
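The three steps above can be sketched as follows. This is a minimal illustration, not real Kinect calibration: the rotation `R`, translation `t`, intrinsics, and distortion coefficients are all placeholder values you would replace with the output of a calibration tool such as the MATLAB toolbox.

```python
import numpy as np

# Hypothetical extrinsics: rotation R and translation t taking points from
# the depth camera's 3D space to the color camera's 3D space (placeholder
# values, not real Kinect calibration).
R = np.eye(3)
t = np.array([0.052, 0.0, 0.0])  # assumed ~52 mm baseline, for illustration

# Hypothetical color-camera intrinsics (e.g. from the MATLAB toolbox).
fx, fy, cx, cy = 1050.0, 1050.0, 960.0, 540.0
# Radial distortion coefficients k1, k2 (placeholders; zero = no distortion).
k1, k2 = 0.0, 0.0

def depth3d_to_color2d(p_depth):
    # Step 1: depth-camera 3D -> color-camera 3D (rotation + translation).
    p_color = R @ p_depth + t
    # Step 2: perspective division to normalized image coordinates.
    x = p_color[0] / p_color[2]
    y = p_color[1] / p_color[2]
    # Step 3: apply radial distortion, then the intrinsics, to get pixels.
    r2 = x * x + y * y
    d = 1 + k1 * r2 + k2 * r2 * r2
    u = fx * x * d + cx
    v = fy * y * d + cy
    return u, v
```

Note that distortion is a nonlinear function of the normalized coordinates rather than a single matrix multiplication, which is why it is applied between the perspective division and the intrinsics here.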
I found a similar question here: How to convert points in depth space to color space in Kinect without using Kinect SDK functions? However, the question of how to find the transformation matrix that maps the depth camera to the color camera is not answered there.
Edit: After implementation, I think the 2D color and 2D depth images share the same 3D camera space (in a sense; the depth and color sensors are actually two different cameras, so strictly they should have different 3D camera spaces). Thus, I successfully mapped 3D points to 2D color pixels without the coordinate mapping function (I used the projection matrix found with the MATLAB toolbox): 3D camera space -> project back to 2D color pixels.
At the beginning, I thought the 3D points were in the depth camera's 3D space, and that the 3D depth and 3D color spaces were different (input: 3D points in depth space, output: 2D color pixel), so I would need 3D depth camera -> 3D color camera -> project back to 2D color pixels. However, the 3D depth camera -> 3D color camera step did not need to be implemented.
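The simplified path that ended up working can be sketched like this: since the skeleton points are treated as already being in the color camera's 3D space, a single 3x4 projection matrix of the form P = K [I | 0] maps them straight to color pixels. The intrinsic values below are placeholders standing in for whatever the MATLAB toolbox reports.

```python
import numpy as np

# Hypothetical color-camera intrinsics (placeholders for calibrated values).
fx, fy, cx, cy = 1050.0, 1050.0, 960.0, 540.0

# 3x4 projection matrix P = K [I | 0]: no rotation/translation step, because
# the 3D points are assumed to already sit in the color camera's space.
P = np.array([[fx, 0.0, cx, 0.0],
              [0.0, fy, cy, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

def camera3d_to_color2d(p):
    ph = np.append(p, 1.0)   # homogeneous 3D point [X, Y, Z, 1]
    u, v, w = P @ ph         # project with the 3x4 matrix
    return u / w, v / w      # perspective division to pixel coordinates
```

If the depth-to-color baseline mattered for your accuracy requirements, the [I | 0] part would be replaced by the actual [R | t] between the two sensors; skipping it worked here presumably because the offset is small relative to the subject distance.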