I've seen examples of using EMGUCV on regular RGB images captured from the Kinect like this, but then you might as well use a webcam. I'm interested in getting a point cloud that I can later use for triangulation.
I've tried 'manually' converting a DepthFrame to a point cloud file: in the depth frame you have X, Y and a depth value, which I converted to XYZ points for a .ply file. The results are garbled and useless.
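For context, this is roughly the kind of conversion I mean, stripped down. The fx/fy/cx/cy intrinsics are rough guesses rather than calibrated values, which may well be part of why my output is garbled:

// Simplified version of my manual depth -> .ply conversion.
// depthMm is the raw depth frame (millimetres), width x height pixels.
// fx/fy/cx/cy are guessed intrinsics, NOT calibrated values.
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Text;

public static class DepthToPly
{
    public static void Write(ushort[] depthMm, int width, int height, string path)
    {
        const float fx = 570f, fy = 570f;          // assumed focal lengths in pixels
        float cx = width / 2f, cy = height / 2f;   // assumed principal point

        var points = new List<string>();
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                ushort d = depthMm[y * width + x];
                if (d == 0) continue;              // 0 means no depth reading

                float z = d / 1000f;               // millimetres -> metres
                float px = (x - cx) * z / fx;      // pinhole back-projection
                float py = (y - cy) * z / fy;
                points.Add(string.Format(CultureInfo.InvariantCulture,
                                         "{0} {1} {2}", px, py, z));
            }
        }

        // Minimal ASCII .ply with just vertex positions
        var sb = new StringBuilder();
        sb.AppendLine("ply");
        sb.AppendLine("format ascii 1.0");
        sb.AppendLine("element vertex " + points.Count);
        sb.AppendLine("property float x");
        sb.AppendLine("property float y");
        sb.AppendLine("property float z");
        sb.AppendLine("end_header");
        foreach (string p in points)
            sb.AppendLine(p);
        File.WriteAllText(path, sb.ToString());
    }
}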
Now, I noticed that EMGUCV has this method, RetrievePointCloudMap, which maps a point cloud into a Mat object. I just don't know what the syntax for it is supposed to look like, as there are no examples of anyone asking about it and none provided by the people behind EMGUCV.
Here's what I tried; the Kinect doesn't even seem to turn on, and success is always false.
public void test()
{
    // Open the Kinect at VGA resolution, 30 Hz
    KinectCapture kc = new KinectCapture(KinectCapture.DeviceType.Kinect,
                                         KinectCapture.ImageGeneratorOutputMode.Vga30Hz);

    // Try to retrieve the point cloud into a Mat
    Mat m = new Mat();
    bool success = kc.RetrievePointCloudMap(m);   // always false
}
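My best guess at the intended call pattern, based on how OpenCV's OpenNI-backed capture normally works (grab a frame first, then retrieve the individual maps), is something like the snippet below. I'm assuming KinectCapture inherits Grab() from the base Capture class and that the point cloud comes back as a 3-channel float Mat, but I haven't been able to confirm either:

// Guess at the call pattern: grab a frame, then retrieve the point cloud map.
// Assumes Grab() is inherited from Capture and that the Mat is filled with
// one XYZ triplet (32-bit floats) per depth pixel.
public void testWithGrab()
{
    using (KinectCapture kc = new KinectCapture(KinectCapture.DeviceType.Kinect,
                                                KinectCapture.ImageGeneratorOutputMode.Vga30Hz))
    {
        Mat pointCloud = new Mat();

        kc.Grab();                                      // grab one frame from the device
        bool success = kc.RetrievePointCloudMap(pointCloud);

        // If this worked, pointCloud should hold the XYZ map for that frame
    }
}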
I also had a problem where it kept throwing an exception during construction of the KinectCapture object; this was my solution.