I am very new to ROS and am building a system from the ground up to understand the concepts better. I am trying to convert a depth map (received from a Visionary-T time-of-flight camera as a sensor_msgs/Image message) into a point cloud. I loop over the width and height of the image (176x144 px in my case), so each pixel is (u, v), and the value at (u, v) is the depth Z in metres. I then use the intrinsic camera parameters (c_x, c_y, f_x, f_y) to back-project the pixel coordinates (u, v) into camera-frame (X, Y) coordinates, using the pinhole camera model:
X = (u - c_x) * Z / f_x
Y = (v - c_y) * Z / f_y
I then store each point as a pcl::PointXYZ in a point cloud. The camera is mounted overhead, looking down at a table with some objects on it. Although the table is flat, in the resulting point cloud the table surface is not flat but has a convex shape, bulging towards the camera.
Can someone please suggest what could cause this convex shape and how I can rectify it?