
I'm working on a research project where I need to estimate, in real time, the 6DOF pose of objects in a pick-and-place task. The objects can be stacked on top of one another and are identical, so I have to get the position and orientation of the object on top. The objects are all the same (PPVC blocks, from the construction field), but the good thing is that they have a fairly regular shape.

So how can I compare the known 3D CAD model against a single point cloud frame from an RGBD camera to get the 6DOF pose of the object on top, in real time? Any approach can be used, but it must be very accurate, as high precision is required.

I know I can use deep learning to localize the object and then either compare with the 3D CAD model or process the point cloud (segmentation, clustering if needed, PCA, ...), but what if deep learning fails and is not able to detect the object in the RGBD data? What options do I have then? Is the surface matching algorithm https://docs.opencv.org/3.0-beta/modules/surface_matching/doc/surface_matching.html in OpenCV an option?
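From the linked documentation, my understanding is that the surface matching (PPF) module would be used roughly like this. This is only an untested sketch; the sampling/ICP parameters and the scene file name are placeholders, and in my case the scene would come from the RGBD camera rather than a file:

#include <opencv2/surface_matching.hpp>
#include <opencv2/surface_matching/ppf_helpers.hpp>
#include <vector>

int main()
{
    // Model and scene as Nx6 float matrices (x, y, z, nx, ny, nz) -- normals are required.
    cv::Mat model = cv::ppf_match_3d::loadPLYSimple("rs_box.ply", 1);
    cv::Mat scene = cv::ppf_match_3d::loadPLYSimple("scene.ply", 1);   // placeholder scene

    // Train the PPF detector on the CAD model (sampling steps are guesses to be tuned).
    cv::ppf_match_3d::PPF3DDetector detector(0.025, 0.05);
    detector.trainModel(model);

    // Match against the scene, then refine the candidate poses with the module's ICP.
    std::vector<cv::ppf_match_3d::Pose3DPtr> results;
    detector.match(scene, results, 1.0 / 40.0, 0.05);

    cv::ppf_match_3d::ICP icp(100, 0.005f, 2.5f, 8);
    icp.registerModelToScene(model, scene, results);

    if (!results.empty())
        results[0]->printPose();   // best candidate 6DOF pose
    return 0;
}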

I started with PCL, comparing the 3D CAD model of the object against the scene point cloud containing the actual object, using ICP matching as in How to use iterative closest point.

My sample code is here

// Includes and globals needed to build this snippet (trimmed in the question):
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <visualization_msgs/MarkerArray.h>
#include <pcl/point_types.h>
#include <pcl/io/ply_io.h>
#include <pcl/registration/icp.h>
#include <pcl/common/transforms.h>
#include <pcl_conversions/pcl_conversions.h>

ros::Publisher pub;
ros::Publisher markers_pub_;

void cloud_cb(const sensor_msgs::PointCloud2::ConstPtr &msg){


   // Convert to pcl point cloud
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudIn (new pcl::PointCloud<pcl::PointXYZRGB>);
   pcl::fromROSMsg(*msg,*cloudIn);

   //Declaration of cloud pointers
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudOut (new pcl::PointCloud<pcl::PointXYZRGB>);
   pcl::PointCloud<pcl::PointXYZRGB> finalCloud;
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudOut_new (new pcl::PointCloud<pcl::PointXYZRGB>) ;


   //Load CAD model (note: loading the PLY on every callback is expensive;
   //for real-time use it should be loaded once, outside the callback)
   if(pcl::io::loadPLYFile ("/home/admini/darknet_ros/src/darknet_ros/gb_visual_detection_3d/darknet_ros_3d/darknet_ros_3d/src/rs_box.ply", *cloudOut)==-1)
   {
     PCL_ERROR ("Couldn't read the CAD model file! \n");
    // return (-1);
   }


  //std::cout << "\nLoaded file "  << " (" << cloudOut->size () << " points) in " << time.toc () << " ms\n" << std::endl;
  
  pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
  icp.setInputSource(cloudOut);   // CAD model (setInputCloud is deprecated; use setInputSource)
  icp.setInputTarget(cloudIn);    // scene cloud from the camera
  icp.setMaximumIterations (30);// 30
  icp.setTransformationEpsilon (1e-9);
  icp.setMaxCorrespondenceDistance (2);//2
  icp.setEuclideanFitnessEpsilon (0.6);// 0.0001
  icp.setRANSACOutlierRejectionThreshold (80);//20
  //icp.setRANSACIterations(800);//800

  icp.align(finalCloud);

  if (icp.hasConverged())
  {
      std::cout << "ICP converged." << std::endl
                << "The score is " << icp.getFitnessScore() << std::endl;
      std::cout << "Transformation matrix:" << std::endl;
      std::cout << icp.getFinalTransformation() << std::endl;
  }
  else std::cout << "ICP did not converge." << std::endl;

  Eigen::Matrix4f transformationMatrix = icp.getFinalTransformation ();
  std::cout<<"trans %n"<<transformationMatrix<<std::endl;

  // finalCloud is already the model aligned to the scene by icp.align(),
  // so transform the original model cloud instead of transforming finalCloud again
  pcl::transformPointCloud (*cloudOut, *cloudOut_new, transformationMatrix);

   //sensor_msgs::PointCloud2 cloud_icp;
   //pcl::toROSMsg (*cloudOut , cloud_icp);
   //cloudOut ->header.frame_id = "/zed_left_camera_frame";
   //pub.publish (cloud_icp);


    //Convert the rotation part of the transformation matrix to a quaternion
    Eigen::Matrix3f rot_matrix = transformationMatrix.topLeftCorner<3,3>();
    Eigen::Quaternionf q(rot_matrix);
    q.normalize();   // normalize() modifies q in place; normalized() only returns a copy

    cout << " postion x  = " << transformationMatrix(0,3) <<  endl;
    cout << " postion y  = " << transformationMatrix(1,3) <<  endl;
    cout << " postion z  = " << transformationMatrix(2,3) <<  endl;
    
   //Convert Quaternion to Euler angle
    double roll, pitch, yaw;
    double PI=3.1415926535;
    auto euler = q.toRotationMatrix().eulerAngles(0,1,2);
    std::cout << "Euler from quaternion in roll, pitch, yaw (degree)"<< std::endl << euler*180/PI << std::endl;

  


   //Visualisation Marker
    std::string ns;
    float r = 1.0f;   // initialise the colour components (they were previously uninitialised)
    float g = 1.0f;
    float b = 1.0f;
    visualization_msgs::MarkerArray msg_marker;
    visualization_msgs::Marker bbx_marker;
    bbx_marker.header.frame_id = "zed_left_camera_frame";
    bbx_marker.header.stamp = ros::Time::now();
    bbx_marker.ns = ns;
    bbx_marker.type = visualization_msgs::Marker::CUBE;
    bbx_marker.action = visualization_msgs::Marker::ADD;
    bbx_marker.pose.position.x =  transformationMatrix(0,3);
    bbx_marker.pose.position.y =  transformationMatrix(1,3);
    bbx_marker.pose.position.z =  transformationMatrix(2,3);
    // assign the quaternion components directly; scaling them produces an invalid orientation
    bbx_marker.pose.orientation.x = q.x();
    bbx_marker.pose.orientation.y = q.y();
    bbx_marker.pose.orientation.z = q.z();
    bbx_marker.pose.orientation.w = q.w();
    bbx_marker.scale.x = 0.09;//0.13
    bbx_marker.scale.y = 0.22;//0.34
    bbx_marker.scale.z = 0.21;//0.25
    bbx_marker.color.b = 0.3*b;
    bbx_marker.color.g = 0.1*g;
    bbx_marker.color.r = 0.1*r;
    bbx_marker.color.a = 0.8;
    bbx_marker.lifetime = ros::Duration();
    msg_marker.markers.push_back(bbx_marker);
    markers_pub_.publish(msg_marker);


}

int
main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "my_pcl_tutorial");
  ros::NodeHandle nh;
  // Create a ROS subscriber for the input point cloud
  ros::Subscriber sub = nh.subscribe ("/zed/zed_node/point_cloud/cloud_registered", 200, cloud_cb);
  // Create a ROS publisher for the output point cloud
  pub = nh.advertise<sensor_msgs::PointCloud2> ("cloud_icp", 100);
  markers_pub_ = nh.advertise<visualization_msgs::MarkerArray> ("msg_marker", 200);
  ros::spin();

}

But there are two points in the code and the algorithm that need further improvement. First, the accuracy could be higher. Second, the update rate needs to be increased; the target is around 25-30 fps. Any help on how to achieve these two improvements?
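For the update rate, one idea I am looking at is cropping and voxel-grid downsampling the scene before ICP, so that far fewer points go into the matching. A rough, untested sketch of what I mean (the helper name, crop limits and leaf size are just placeholders):

#include <pcl/point_types.h>
#include <pcl/filters/crop_box.h>
#include <pcl/filters/voxel_grid.h>

// Hypothetical helper I would call at the top of cloud_cb(), right after pcl::fromROSMsg(),
// so that ICP runs on a cropped, downsampled cloud instead of the full scene.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr preprocessScene(
    const pcl::PointCloud<pcl::PointXYZRGB>::Ptr &cloudIn)
{
  // Crop to the region where the stack of blocks is expected (limits are guesses)
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr cropped(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::CropBox<pcl::PointXYZRGB> crop;
  crop.setInputCloud(cloudIn);
  crop.setMin(Eigen::Vector4f(-0.5f, -0.5f,  0.2f, 1.0f));
  crop.setMax(Eigen::Vector4f( 0.5f,  0.5f,  1.5f, 1.0f));
  crop.filter(*cropped);

  // Voxel-grid downsampling so ICP has far fewer correspondences to evaluate
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::VoxelGrid<pcl::PointXYZRGB> voxel;
  voxel.setInputCloud(cropped);
  voxel.setLeafSize(0.01f, 0.01f, 0.01f);   // 1 cm leaf, to be tuned
  voxel.filter(*downsampled);

  return downsampled;
}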

Thanks

Macedon971

1 Answer


Essentially, there are two main steps:

  1. Segmentation: extract a list of objects from the scene
  2. Recognition: match the extracted objects against known objects from the database

There are several methods for both of these steps already in the PCL library.

Have a look at this tutorial as a starting point; it covers both steps:

https://pcl.readthedocs.io/projects/tutorials/en/latest/correspondence_grouping.html#correspondence-grouping
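A condensed sketch of that pipeline, so you can see the steps involved (the function wrapper is only for illustration, and every radius/threshold below is a placeholder you must tune to your data and units):

#include <pcl/point_types.h>
#include <pcl/correspondence.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/shot_omp.h>
#include <pcl/filters/uniform_sampling.h>
#include <pcl/recognition/cg/geometric_consistency.h>

using PointT = pcl::PointXYZRGB;

void recognizeModelInScene(const pcl::PointCloud<PointT>::Ptr &model,   // CAD model cloud
                           const pcl::PointCloud<PointT>::Ptr &scene)   // scene from the camera
{
  pcl::PointCloud<pcl::Normal>::Ptr model_normals(new pcl::PointCloud<pcl::Normal>);
  pcl::PointCloud<PointT>::Ptr model_keypoints(new pcl::PointCloud<PointT>);
  pcl::PointCloud<PointT>::Ptr scene_keypoints(new pcl::PointCloud<PointT>);
  pcl::PointCloud<pcl::SHOT352>::Ptr model_descriptors(new pcl::PointCloud<pcl::SHOT352>);
  pcl::CorrespondencesPtr model_scene_corrs(new pcl::Correspondences());

  // 1) Normals (repeat for the scene)
  pcl::NormalEstimationOMP<PointT, pcl::Normal> norm_est;
  norm_est.setKSearch(10);
  norm_est.setInputCloud(model);
  norm_est.compute(*model_normals);

  // 2) Keypoints by uniform sampling -- the radius must match the units/scale of your cloud,
  //    otherwise you can end up with 0 keypoints (repeat for the scene)
  pcl::UniformSampling<PointT> uniform;
  uniform.setInputCloud(model);
  uniform.setRadiusSearch(0.01);
  uniform.filter(*model_keypoints);

  // 3) SHOT descriptors at the keypoints (repeat for the scene)
  pcl::SHOTEstimationOMP<PointT, pcl::Normal, pcl::SHOT352> shot;
  shot.setRadiusSearch(0.02);
  shot.setInputCloud(model_keypoints);
  shot.setInputNormals(model_normals);
  shot.setSearchSurface(model);
  shot.compute(*model_descriptors);

  // 4) Match scene descriptors against model descriptors with a KdTreeFLANN<pcl::SHOT352>
  //    (nearestKSearch with k = 1) and push the matches into model_scene_corrs.

  // 5) Cluster the correspondences; each cluster yields a coarse 6DOF pose of one instance,
  //    which you then refine with ICP.
  pcl::GeometricConsistencyGrouping<PointT, PointT> gc;
  gc.setGCSize(0.01);
  gc.setGCThreshold(5.0);
  gc.setInputCloud(model_keypoints);
  gc.setSceneCloud(scene_keypoints);
  gc.setModelSceneCorrespondences(model_scene_corrs);

  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> poses;
  std::vector<pcl::Correspondences> clustered_corrs;
  gc.recognize(poses, clustered_corrs);
}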

serkan.tuerker
  • Hi, I know PCL. I have done ICP matching, comparing the 3D CAD model of the objects (in PLY format) with the point cloud scene containing the objects, using this tutorial [https://pcl.readthedocs.io/projects/tutorials/en/latest/iterative_closest_point.html#iterative-closest-point]. It performed OK, but it's very slow and the accuracy could be better. Any help on how to increase the update rate and accuracy? What is the main difference between the correspondence grouping and ICP methods? – Macedon971 Jul 09 '20 at 06:46
  • I updated the question with the ICP code sample. Can you help now? – Macedon971 Jul 09 '20 at 06:58
  • ICP is used for fine alignment after coarse alignment with, for example, what I have shown in my post. – serkan.tuerker Jul 09 '20 at 09:00
  • Meaning, I first need to do coarse alignment with segmentation, downsampling, normal computation and recognition, and then ICP at the end? Won't all this processing decrease the update rate and make the algorithm even slower, or will it actually be faster if I do good downsampling and segmentation, since I then have to process fewer points instead of the entire scene? – Macedon971 Jul 09 '20 at 09:40
  • OK. I tried the code and could never get it working. Even with the tutorial's model PCD and models from other PCL library examples I always get this error: `Model total points: 12575; Selected Keypoints: 0 Scene total points: 307200; Selected Keypoints: 2215 [pcl::SHOTEstimation::compute] input_ is empty! [pcl::SHOTEstimation::initCompute] Init failed. [pcl::SHOTEstimation::initCompute] Init failed. [pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!` – Macedon971 Jul 14 '20 at 06:01
  • My model's selected keypoints are always 0. What could be wrong? – Macedon971 Jul 14 '20 at 06:03