I have a drone in a Gazebo environment with a simulated RealSense D435 camera on it. My plan is to use YOLO to find the pixel coordinates of the center of an object of interest, and then look up the depth of that pixel in the depth image. I had heard that the depth camera outputs an image where the depth values are encoded in the RGB values. Looking into this further online, I found the pyrealsense2 library, which has functions for everything I need.
The implementations I've seen online need you to create a pyrealsense2.pipeline() and get your frames from that.
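For reference, the examples I've come across look roughly like this (the stream settings here are just placeholders I'm copying from memory):

```python
import pyrealsense2 as rs

# Standard pyrealsense2 flow: open the physical camera and poll frames from it
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Depth in meters at a given pixel, e.g. the image center
dist = depth_frame.get_distance(320, 240)
```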
The issue is that this approach only seems to work if you have a physical RealSense camera connected to your computer. Since mine only exists in the Gazebo environment, I need a way to get and use the depth frame inside a ROS callback instead. How would I do this? Any pointers would be greatly appreciated.
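To make the question concrete, here is roughly what I imagine the ROS side would need to look like, assuming the Gazebo camera plugin publishes the depth image on a topic like /camera/depth/image_raw (both the topic name and the encoding are guesses on my part):

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_callback(msg):
    # Convert the sensor_msgs/Image to a numpy array without rescaling.
    # Gazebo depth plugins typically publish 32FC1 (meters) or 16UC1 (millimeters).
    depth_image = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")

    # (u, v) would come from the YOLO detection; hard-coded here as an example
    u, v = 320, 240
    depth = depth_image[v, u]
    rospy.loginfo("Depth at (%d, %d): %s", u, v, depth)

rospy.init_node("depth_lookup")
rospy.Subscriber("/camera/depth/image_raw", Image, depth_callback)
rospy.spin()
```

Is this the right idea, or is there a way to feed the simulated frames into pyrealsense2 itself?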