
I have a drone in a Gazebo environment with a RealSense d435 camera on it. My plan is to use YOLO to find the center of an object of interest, and then find the depth of that point from the depth image. I heard that the depth camera outputs an image where the depth values are encoded in the RGB values. When further looking this up online, I found that there is a pyrealsense2 library that has functions for everything I need.

The implementations I've seen online need you to create a pyrealsense2.pipeline() and get your frames from that. The issue is that this only seems to work if you have a physical RealSense camera connected to your computer. Since mine exists in the Gazebo environment, I need a way to get and use the depth frame in a ROS callback. How would I do this? Any pointers would be greatly appreciated.

Kadhir

2 Answers


Yes, you can do this with the help of a ROS subscriber, as follows (most of the code was taken from here):

import rospy
from sensor_msgs.msg import Image as msg_Image
from cv_bridge import CvBridge, CvBridgeError
import sys
import os

class ImageListener:
    def __init__(self, topic):
        self.topic = topic
        self.bridge = CvBridge()
        self.sub = rospy.Subscriber(topic, msg_Image, self.imageDepthCallback)

    def imageDepthCallback(self, data):
        try:
            # Convert the ROS Image message to an OpenCV/numpy array,
            # keeping the original encoding (e.g. 16UC1 in mm or 32FC1 in m)
            cv_image = self.bridge.imgmsg_to_cv2(data, data.encoding)
            # Use integer division so the pixel coordinates are valid array indices
            pix = (data.width // 2, data.height // 2)
            sys.stdout.write('%s: Depth at center(%d, %d): %f(mm)\r' % (self.topic, pix[0], pix[1], cv_image[pix[1], pix[0]]))
            sys.stdout.flush()
        except CvBridgeError as e:
            print(e)
            return


if __name__ == '__main__':
    rospy.init_node("depth_image_processor")
    topic = '/camera/depth/image_rect_raw'  # check the depth image topic in your Gazebo environment and replace this with yours
    listener = ImageListener(topic)
    rospy.spin()
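
Since you eventually want the depth under a YOLO detection rather than at the image center, here is a minimal sketch of that lookup (the helper name and the assumption that 16UC1 images carry millimeters while 32FC1 images carry meters are mine; check what your Gazebo camera plugin actually publishes):

def depth_at_pixel(cv_image, encoding, u, v):
    # Return the depth in meters at pixel (u, v), e.g. the center of a YOLO box.
    # Assumption: 16UC1 carries depth in millimeters, 32FC1 in meters.
    u, v = int(u), int(v)
    value = cv_image[v, u]          # numpy indexing is (row, col) = (v, u)
    if encoding == '16UC1':
        return float(value) / 1000.0
    elif encoding == '32FC1':
        return float(value)         # may be NaN where depth is invalid
    raise ValueError('Unexpected depth encoding: %s' % encoding)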

Note: To install CvBridge, you may follow the instructions below:

sudo apt-get install ros-<ROS distro>-cv-bridge

sudo apt-get install ros-<ROS distro>-vision-opencv

More information: http://wiki.ros.org/cv_bridge

GPrathap
  • I just found that myself. Thanks a lot for the reply. Most of the things I saw online told me to use the `get_distance` function from the pyrealsense2 library, so I thought it was a lot more complicated than it ended up being – Kadhir Jul 20 '20 at 17:23

To complement the above answer: be careful with the /camera/depth/image_rect_raw topic. Very likely you actually want the depth image aligned to the color image, which you can obtain from /camera/aligned_depth_to_color/image_raw.

This topic may not be active by default; on the D435i, for instance, you should enable the align_depth:=true option:

roslaunch realsense2_camera rs_camera.launch align_depth:=true

See https://github.com/IntelRealSense/realsense-ros#usage-instructions for more info.
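
Since the original plan is to run YOLO on the color image and then read the aligned depth at the detection center, one option is to subscribe to both topics and synchronize them. A rough sketch with message_filters, assuming the usual RealSense topic names (adjust them to whatever your Gazebo plugin publishes):

import rospy
import message_filters
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def synced_callback(color_msg, depth_msg):
    # The synchronizer delivers a color/depth pair with close timestamps
    color = bridge.imgmsg_to_cv2(color_msg, 'bgr8')
    depth = bridge.imgmsg_to_cv2(depth_msg, depth_msg.encoding)
    # ... run YOLO on `color`, then index `depth` at the detection center ...

rospy.init_node('color_depth_sync')
color_sub = message_filters.Subscriber('/camera/color/image_raw', Image)
depth_sub = message_filters.Subscriber('/camera/aligned_depth_to_color/image_raw', Image)
ts = message_filters.ApproximateTimeSynchronizer([color_sub, depth_sub], queue_size=10, slop=0.1)
ts.registerCallback(synced_callback)
rospy.spin()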

Rexcirus