I am working on detecting a rectangle using the depth camera. At the moment, I am struggling to obtain accurate 3D coordinates for a selected pixel with the NFOV depth camera. To test this, I selected two points on a test board, transformed them with the depth-2D-to-color-3D function of the Kinect SDK C# wrapper, calculated the distance between the resulting 3D coordinates, and compared it against the measured real-world distance between the points (918 mm). At a range of 2.7 m, I get about 2 cm of error at the image center, while in the corners the error reaches up to 6 cm.
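For reference, here is roughly what my test looks like. This is a minimal sketch using the Microsoft.Azure.Kinect.Sensor C# wrapper; the pixel coordinates and depth values are placeholder examples (in practice I read them from the captured depth frame at the selected pixels), and it assumes a connected device:

```csharp
using System;
using System.Numerics;
using Microsoft.Azure.Kinect.Sensor;

class TransformCheck
{
    static void Main()
    {
        using Device device = Device.Open();
        device.StartCameras(new DeviceConfiguration
        {
            DepthMode = DepthMode.NFOV_Unbinned,
            ColorResolution = ColorResolution.R1080p,
            CameraFPS = FPS.FPS30,
        });
        Calibration calibration = device.GetCalibration();

        // Placeholder pixel coordinates and depth values (mm) for the two
        // selected board points; really these come from the depth image.
        Vector2 pixelA = new Vector2(320, 288);
        Vector2 pixelB = new Vector2(400, 288);
        float depthA = 2700f, depthB = 2700f;

        // 2D depth-camera pixel -> 3D point in the color camera's frame.
        Vector3? a = calibration.TransformTo3D(pixelA, depthA,
            CalibrationDeviceType.Depth, CalibrationDeviceType.Color);
        Vector3? b = calibration.TransformTo3D(pixelB, depthB,
            CalibrationDeviceType.Depth, CalibrationDeviceType.Color);

        if (a.HasValue && b.HasValue)
        {
            // Compare this against the tape-measured 918 mm.
            float distanceMm = Vector3.Distance(a.Value, b.Value);
            Console.WriteLine($"Distance: {distanceMm} mm");
        }

        device.StopCameras();
    }
}
```

This needs the hardware attached, so it is only meant to show the call sequence I am using, not a standalone repro.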
Shouldn't the transformation functions already correct for lens distortion? Am I missing a crucial step for obtaining accurate data? Or could this be something else entirely?
Thank you for your help!