
I am using the PCL library to compress lidar data. The compressed data is then sent over the ROS network in a custom message. For this I have a compression node and a decompression node. If I run only the compression node, everything works as expected. The decompression node, however, throws a std::bad_alloc when decodePointCloud() is called.

To debug it, I copied the decompression code into the compression program. When I run the compression program now, it works: the lidar data is compressed and then decompressed again in the same file. I validated this with the visualization tool rviz.

Why does the decompression work when everything runs in the same node, but fail when the code runs in a separate node? I first suspected a lack of memory, but decompressing in the same node works, and that should take roughly the same amount of memory.

I run both programs on a VM with Ubuntu 18.04, 5 GB of RAM, and 4 processors.
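For completeness, the custom Blob message looks roughly like this (the field types are an assumption on my part, inferred from how the fields are used in the code below):

# Blob.msg (field types inferred from the code; treat them as an assumption)
std_msgs/Header header
uint32 size
uint8[] data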

Code for compression:

#include <stdint.h>
#include <ros/ros.h>
// PCL specific includes
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>


#include <inttypes.h>
#include <pcl/compression/octree_pointcloud_compression.h>
#include <stdio.h>
#include <sstream>
#include <stdlib.h>
#include <chrono>
#include <string>
#include <std_msgs/String.h>
#include <bitset>
#include <std_msgs/UInt8MultiArray.h>
#include <stdint.h>
#include <iostream>
#include <vector>
#include <iterator>

//blob message
#include "my_pcl_tutorial/Blob.h"

ros::Publisher pub;

pcl::io::OctreePointCloudCompression<pcl::PointXYZRGBA>* PointCloudEncoder;



void 
cloud_cb (const sensor_msgs::PointCloud2ConstPtr& input)
{
  // Create a container for the data.
  std::string output;
  std_msgs::String outputString;

  //convert to pointxyzrgba type
  pcl::PCLPointCloud2 pcl_pc2;
  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr temp_cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);//convert to pointer to satisfy function requirement
  pcl_conversions::toPCL(*input, pcl_pc2);
  pcl::fromPCLPointCloud2(pcl_pc2, *temp_cloud); 

  // stringstream to store compressed point cloud
  std::stringstream compressedData; 

  // compress point cloud
  PointCloudEncoder->encodePointCloud (temp_cloud, compressedData);


  // determine the size of the compressed data in the stringstream
  compressedData.seekg(0, std::ios::end);
  int size = compressedData.tellg();
  compressedData.seekg(0, std::ios::beg);
  std::cout<<size<<std::endl;


  // copy the compressed data into a byte vector for the ROS message
  char * buffer = new char[size];
  compressedData.read(buffer, size);

  uint8_t * tempor = reinterpret_cast<uint8_t *>(buffer);
  std::vector<unsigned char> v(tempor, tempor + size);
  delete[] buffer; // the buffer is no longer needed once copied into the vector
  std::cout<<"Size: "<<v.size()<<std::endl;


  my_pcl_tutorial::Blob blobmsg;
  blobmsg.data=v;
  blobmsg.size=size;

  sensor_msgs::PointCloud2 pointcloudheader;
  pointcloudheader = *input;
  blobmsg.header = pointcloudheader.header;


//////////////////////////////////////////////
/*
  //converting back, used for testing
  std::stringstream test25;

  std::vector<unsigned char> v2(blobmsg.data);
std::cout<<"Size2: "<<v2.size()<<std::endl;
std::cout<<"Size of message: "<<blobmsg.size<<std::endl;

  std::copy(v2.begin(), v2.end(), std::ostream_iterator<unsigned char>(test25));
std::cout<<"debug"<<std::endl;

  test25.seekg(0,ios::end);
  int size2 = test25.tellg();
  test25.seekg(0,ios::beg);
  std::cout<<size2<<std::endl;

  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloudOut (new pcl::PointCloud<pcl::PointXYZRGBA> ());
  //decompress
  PointCloudDecoder->decodePointCloud (test25, cloudOut);




  sensor_msgs::PointCloud2 outputmessage;
  std::cout<<"Convert to right format now .."<<std::endl;
  //save header info since toROSmsg throws away this information
  //convert to pointcloud2
  pcl::toROSMsg(*cloudOut, outputmessage);
  outputmessage.header = blobmsg.header;
  std::cout<<outputmessage.header<<std::endl;

/**/

/////////////////////////////


  // Publish the data.
  pub.publish (blobmsg);
}

int
main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "my_pcl_tutorial");
  ros::NodeHandle nh;


  // set up the compression algorithm


  bool showStatistics = false;

  pcl::io::compression_Profiles_e compressionProfile = pcl::io::LOW_RES_ONLINE_COMPRESSION_WITHOUT_COLOR;

  // instantiate point cloud compression for encoding and decoding
  PointCloudEncoder = new pcl::io::OctreePointCloudCompression<pcl::PointXYZRGBA> (compressionProfile, showStatistics);



  // Create a ROS subscriber for the input point cloud
  ros::Subscriber sub = nh.subscribe ("velodyne_points", 1, cloud_cb);

  // Create a ROS publisher for the output point cloud
  pub = nh.advertise<my_pcl_tutorial::Blob> ("velodyne_points/compressed", 1);

  // Spin
  ros::spin ();

  delete(PointCloudEncoder);
}


Code for decompression:

#include <stdint.h>
#include <ros/ros.h>
// PCL specific includes
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

#include <inttypes.h>
#include <pcl/compression/octree_pointcloud_compression.h>
#include <stdio.h>
#include <sstream>
#include <stdlib.h>
#include <chrono>
#include <string>
#include <std_msgs/String.h>
#include <iostream>
#include <cstring>


#include <bitset>
#include <std_msgs/UInt8MultiArray.h>
#include <stdint.h>
#include <iostream>
#include <vector>
#include <iterator>

//blob message
//#include "decompression/Blob.h"
#include <my_pcl_tutorial/Blob.h>

ros::Publisher pub;


pcl::io::OctreePointCloudCompression<pcl::PointXYZRGBA>* PointCloudDecoder;




void 
cloud_cb (const my_pcl_tutorial::Blob& input)
{


  //converting back, used for testing
  std::stringstream test25;

  std::vector<unsigned char> v2(input.data);
  std::cout<<"Size2: "<<v2.size()<<std::endl;
  std::cout<<"Size of message: "<<input.size<<std::endl;

  std::copy(v2.begin(), v2.end(), std::ostream_iterator<unsigned char>(test25));
  std::cout<<"debug"<<std::endl;

  // determine the size of the received blob
  test25.seekg(0, std::ios::end);
  int size2 = test25.tellg();
  test25.seekg(0, std::ios::beg);
  std::cout<<size2<<std::endl;

  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloudOut (new pcl::PointCloud<pcl::PointXYZRGBA> ());
  //decompress
  PointCloudDecoder->decodePointCloud (test25, cloudOut);




  sensor_msgs::PointCloud2 outputmessage;
  std::cout<<"Convert to right format now .."<<std::endl;
  //save header info since toROSmsg throws away this information
  //convert to pointcloud2
  pcl::toROSMsg(*cloudOut, outputmessage);
  outputmessage.header = input.header;
  std::cout<<outputmessage.header<<std::endl;

  pub.publish(outputmessage);
}

int
main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "decompressLidar");
  ros::NodeHandle nh;

  // set up the decompression algorithm

  PointCloudDecoder = new pcl::io::OctreePointCloudCompression<pcl::PointXYZRGBA> ();



  // Create a ROS subscriber for the input point cloud
  ros::Subscriber sub = nh.subscribe ("velodyne_points/compressed", 1, cloud_cb);

  // Create a ROS publisher for the output point cloud
  pub = nh.advertise<sensor_msgs::PointCloud2> ("decompress", 1);

  // Spin
  ros::spin ();

  //delete(PointCloudDecoder);
}
user71649

1 Answer


It looks like you're using and calling the PCL functions correctly, but I have a suspicion. The only function (unless I missed something) that decodePointCloud calls that could throw std::bad_alloc is vector::reserve, so I suspect you're running out of memory. For non-ROS allocation/usage, see other SO posts (ex). As for same node vs. separate nodes: this is precisely the difference between a node and a nodelet:

Nodelets are designed to provide a way to run multiple algorithms on a single machine, in a single process, without incurring copy costs when passing messages intraprocess. roscpp has optimizations to do zero copy pointer passing between publish and subscribe calls within the same node. To do this nodelets allow dynamic loading of classes into the same node, however they provide simple separate namespaces such that the nodelet acts like a separate node, despite being in the same process. This has been extended further in that it is dynamically loadable at runtime using pluginlib.

See also this answer.
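For illustration, here is a minimal sketch of what the decompression side could look like as a nodelet. The class name and plugin export are hypothetical, and the plugin description XML / package.xml entries needed to register it are omitted. With both sides loaded into the same nodelet manager and a ConstPtr callback, roscpp can hand over a shared pointer instead of copying the message:

#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <my_pcl_tutorial/Blob.h>

namespace my_pcl_tutorial
{
class DecompressNodelet : public nodelet::Nodelet
{
public:
  void onInit() override
  {
    ros::NodeHandle& nh = getNodeHandle();
    pub_ = nh.advertise<sensor_msgs::PointCloud2>("decompress", 1);
    // A ConstPtr callback allows zero-copy intraprocess passing when the
    // publisher lives in the same nodelet manager.
    sub_ = nh.subscribe("velodyne_points/compressed", 1,
                        &DecompressNodelet::cloudCb, this);
  }

private:
  void cloudCb(const my_pcl_tutorial::Blob::ConstPtr& input)
  {
    // ... same decompression logic as in the standalone node ...
  }

  ros::Publisher pub_;
  ros::Subscriber sub_;
};
}  // namespace my_pcl_tutorial

PLUGINLIB_EXPORT_CLASS(my_pcl_tutorial::DecompressNodelet, nodelet::Nodelet)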

JWCS
  • Thank you for your answer. I increased the memory to 6 GB, but it did not help. Do you have any idea how I can force the nodelet not to "be smart" and let it incur the copy costs? That way I could determine definitively whether a lack of memory is the problem. – user71649 Jun 15 '20 at 21:42
  • Right now, you _are not_ using nodelets. ROS nodelets are basically a fancy way to have "everything in one node", which is what you saw when you ran it all in one node: the copying was deliberately optimized away. Running them as separate nodes is without that optimization. The memory that ROS copies/allocates is set up a priori; it sets expectations on the size of the data you're pumping through the connection (see TransportHints, it is configurable). There are ways to see what your memory usage is in your decoder right before you call `decodePointCloud`, see [ex](https://stackoverflow.com/a/2513561/6069586); a minimal sketch of such a check follows after these comments. – JWCS Jun 15 '20 at 22:10
  • I managed to solve the problem. It was a matter of the order in which I started everything. I used to start the rosbag, then the compression node, then the decompression node; that is when I got the error. When I started the compression node first, then the decompression node, and then the rosbag, it all worked. Thank you for your time. – user71649 Jun 16 '20 at 17:29
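As a follow-up to the memory-check suggestion above, here is a minimal sketch of such a check. It is Linux-specific (getrusage from <sys/resource.h>; ru_maxrss is reported in kilobytes on Linux), and the helper name is made up; it would be called right before decodePointCloud:

#include <sys/resource.h>  // getrusage, RUSAGE_SELF
#include <iostream>

// Print the peak resident set size of this process so far (kilobytes on Linux).
void printPeakMemoryUsage(const char* label)
{
  struct rusage usage;
  if (getrusage(RUSAGE_SELF, &usage) == 0)
  {
    std::cout << label << ": peak RSS = " << usage.ru_maxrss << " kB" << std::endl;
  }
}

// Example usage inside the decompression callback:
//   printPeakMemoryUsage("before decodePointCloud");
//   PointCloudDecoder->decodePointCloud(test25, cloudOut);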