
I am very new to ROS and am building a system from the ground up to understand the concepts better. I am trying to convert a depth map (received from a Visionary-T time-of-flight camera as a sensor_msgs/Image message) into a point cloud. I loop over the width and height of the image (176x144 px in my case), call the pixel coordinates (u, v), and take the value at (u, v) as the depth Z in meters. I then use the intrinsic camera parameters (c_x, c_y, f_x, f_y) to convert the image coordinates (u, v) into camera-frame (X, Y) coordinates using the pinhole camera model:

X = (u - c_x) * Z / f_x ; Y = (v - c_y) * Z / f_y

I then store these points in a pcl::PointCloud of pcl::PointXYZ. The camera is mounted overhead, looking down at a table with some objects on it. Although the table is flat, when I convert the depth maps to point clouds, the table comes out with a convex shape instead of being flat.
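For reference, my conversion loop looks roughly like the sketch below (the intrinsic values are placeholders, not my real calibration, and the depth image is assumed to be encoded as 32FC1 with Z in meters):

```cpp
#include <cmath>
#include <cv_bridge/cv_bridge.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <sensor_msgs/Image.h>

// Placeholder intrinsics -- replace with the values from calibration/CameraInfo.
static const float f_x = 150.0f, f_y = 150.0f, c_x = 88.0f, c_y = 72.0f;

pcl::PointCloud<pcl::PointXYZ>::Ptr depthToCloud(const sensor_msgs::ImageConstPtr& msg)
{
  // Interpret the depth image as 32FC1, Z in meters.
  cv_bridge::CvImageConstPtr cv_ptr = cv_bridge::toCvShare(msg, "32FC1");

  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  cloud->reserve(msg->width * msg->height);

  for (int v = 0; v < static_cast<int>(msg->height); ++v) {
    for (int u = 0; u < static_cast<int>(msg->width); ++u) {
      const float Z = cv_ptr->image.at<float>(v, u);
      if (!std::isfinite(Z) || Z <= 0.0f)
        continue;  // skip invalid depth readings

      pcl::PointXYZ p;
      p.x = (u - c_x) * Z / f_x;  // pinhole back-projection
      p.y = (v - c_y) * Z / f_y;
      p.z = Z;
      cloud->push_back(p);
    }
  }
  return cloud;
}
```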

Can someone please suggest what could be causing this convex shape and how I can rectify it?

user82844
  • Does this answer your question? [Generate point cloud from depth image](https://stackoverflow.com/questions/59590200/generate-point-cloud-from-depth-image) – BTables Jan 30 '22 at 20:11
  • I tried that approach too, with the open3d.geometry.PointCloud.create_from_depth_image() function, and the result is the same. Under the hood, though, Open3D performs the same conversion using the pinhole camera model. – user82844 Jan 31 '22 at 09:55

1 Answer


There might be something wrong with how you use the intrinsics.

There is a post about the reverse projection of image coordinates, "Computing x,y coordinate (3D) from image point", which may help you.
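For example, if the lens distortion is not negligible, back-projecting raw pixel coordinates with the plain pinhole formulas can bow a flat plane. A rough sketch of back-projection that corrects for distortion first, using OpenCV's cv::undistortPoints (the matrix and coefficient values below are placeholders; use your own calibration):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Placeholder camera matrix and distortion coefficients -- use your calibration.
const cv::Mat K = (cv::Mat_<double>(3, 3) << 150.0,   0.0, 88.0,
                                               0.0, 150.0, 72.0,
                                               0.0,   0.0,  1.0);
const cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.25, 0.07, 0.0, 0.0, 0.0);

// Back-project one pixel (u, v) with depth Z, correcting for lens distortion.
cv::Point3d backProject(double u, double v, double Z)
{
  std::vector<cv::Point2d> in{cv::Point2d(u, v)}, out;
  cv::undistortPoints(in, out, K, dist);  // out is normalized: (X/Z, Y/Z)
  return cv::Point3d(out[0].x * Z, out[0].y * Z, Z);
}
```

If the table comes out flat after this, the curvature was coming from uncorrected distortion.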

J.P.S.