
I am developing an occupancy network for road scenes. I wonder how to generate an occupancy map for an image captured from CARLA.

The idea is to sample random 3D points (x, y, z) and label each point according to whether it lies inside an object (car, pedestrian, building, etc.). For each image we could generate, say, 100K 3D points labeled 0 or 1 (0: outside any object, 1: inside an object). Even better would be a "semantic occupancy map": for each (x, y, z) point a value v where 0 = empty space, 1 = inside a car, 2 = inside a pedestrian, 3 = inside the ground, etc.
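The labeling step described above can be sketched without CARLA at all: given oriented bounding boxes (which in CARLA could come from `world.get_actors()`, each actor's `bounding_box`, and its `get_transform()`; `carla.BoundingBox` also has a `contains()` method for single points), testing a point reduces to rotating it into the box frame and comparing against the half-extents. The function and the box tuple format below are my own illustration, not a CARLA API:

```python
import numpy as np

def label_points(points, boxes):
    """Assign a semantic label to each 3D point.

    points: (N, 3) array of world-space XYZ samples.
    boxes: list of (label, center, half_extent, yaw) tuples describing
           oriented bounding boxes (rotation about the Z axis only).
    Returns an (N,) array: 0 = empty space, otherwise the box's label.
    """
    labels = np.zeros(len(points), dtype=np.int64)
    for label, center, half_extent, yaw in boxes:
        # Rotate points into the box frame (inverse yaw rotation).
        c, s = np.cos(-yaw), np.sin(-yaw)
        rot = np.array([[c, -s, 0.0],
                        [s,  c, 0.0],
                        [0.0, 0.0, 1.0]])
        local = (points - np.asarray(center)) @ rot.T
        # A point is inside if every local coordinate fits the half-extents.
        inside = np.all(np.abs(local) <= np.asarray(half_extent), axis=1)
        labels[inside] = label
    return labels

# Example: 100K random samples against one hypothetical "car" box (label 1).
rng = np.random.default_rng(0)
pts = rng.uniform(-5.0, 5.0, size=(100_000, 3))
boxes = [(1, (0.0, 0.0, 0.8), (2.3, 1.0, 0.8), 0.0)]
occ = label_points(pts, boxes)
```

This only covers dynamic actors with bounding boxes; static geometry like the ground or buildings would need a different source of labels (e.g. CARLA's semantic LiDAR or the map's static meshes).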

Please point me to a reference or example code that shows how to do this.

Thanks a lot for your help.

Paul Wang
  • Did you find any material or workaround? – Rafael Toledo Nov 05 '22 at 12:30
  • No. So far the closest sensor in CARLA to mimic an occupancy map is the depth sensor. However, the depth map is not solid; it only captures the surface of an object from one perspective. I suspect anyone fluent in CARLA sensor creation, for example whoever created the depth sensor, should be able to create this new occupancy sensor. Any thoughts? – Paul Wang Nov 05 '22 at 20:12
  • Sorry for the delay. You gave me an idea about the depth sensor: it provides the X-values of the XYZ coords ([OpenCV standard](https://github.com/carla-simulator/carla/blob/master/PythonAPI/examples/lidar_to_camera.py#L189)). Now we would need only the Z-values. I tried the IPM algorithm to get these coordinates, but I faced a strange behavior, [see here](https://github.com/thomasfermi/Algorithms-for-Automated-Driving/issues/29). With the X and Z values, we would only need to sample this information into an image grid. Currently, I don't know how to get the correct Z-values. – Rafael Toledo Nov 10 '22 at 13:34
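As the comments note, a depth image only gives one coordinate per pixel directly, but with a pinhole camera model all three camera-frame coordinates can be recovered by back-projection. The sketch below assumes a depth map already decoded to meters (CARLA's depth camera encodes it across the RGB channels), square pixels, and the principal point at the image center, with OpenCV-style axes (x right, y down, z forward = depth); the function name is my own:

```python
import numpy as np

def depth_to_points(depth_m, fov_deg):
    """Back-project a per-pixel depth map (meters) to 3D camera-frame points.

    depth_m: (H, W) array of metric depth along the camera's forward axis.
    fov_deg: horizontal field of view, used to derive the focal length
             the same way CARLA's camera examples do.
    Returns an (H*W, 3) array of (x, y, z) points in the camera frame.
    """
    h, w = depth_m.shape
    # Focal length in pixels from the horizontal FOV.
    f = w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back-projection relative to the image center.
    x = (u - w / 2.0) * depth_m / f
    y = (v - h / 2.0) * depth_m / f
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

# Example with a synthetic constant-depth image (5 m everywhere).
pts = depth_to_points(np.full((480, 640), 5.0), fov_deg=90.0)
```

These camera-frame points would still need the camera's extrinsic transform to land in world coordinates, and one depth image only yields surface points, not the solid interiors an occupancy map needs; fusing several viewpoints or ray-carving a voxel grid would be the next step.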

0 Answers