Welcome to the step-by-step guide on integrating a depth camera sensor into your robot's URDF model. This tutorial will walk you through adding an Intel RealSense D435 camera to your robot URDF. You'll learn how to create the necessary link, visual, and collision elements for the camera, set up the camera frame in the correct orientation using an optical link, and initialize the depth camera as a Gazebo reference so it publishes information about the environment. By the end of this guide, your robot will be equipped with a fully functional depth camera, enabling it to perceive its surroundings in 3D and empowering it for a range of robotic applications. Let's dive in and enhance your robot's perception capabilities!
To add a depth camera sensor to your robot URDF, you must first add a link representing the sensor. The following URDF code describes an Intel RealSense D435 camera that can be added to your robot URDF. Note that the snippets below use xacro expressions such as ${pi/2}, so the file must be processed with xacro.
<link name="depth_link">
<visual>
<origin xyz="0 0 0" rpy="0 0 ${pi/2}" />
<geometry>
<mesh
filename="package://hunter_se_description/meshes/IntelRealsense.dae"
scale="0.1 0.1 0.1" />
</geometry>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 ${pi/2}" />
<geometry>
<mesh
filename="package://hunter_se_description/models/deliverybot/meshes/IntelRealsense.dae"
scale="0.1 0.1 0.1" />
</geometry>
</collision>
<inertial>
<origin xyz="0 0 0" rpy="0 0 ${pi/2}" />
<mass value="0.1" />
<inertia ixx="0.001" ixy="0" ixz="0" iyy="0.005" iyz="0" izz="0.005" />
</inertial>
</link>
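On its own, this link is not connected to the rest of the robot, so it also needs a fixed joint attaching it to the robot body. The sketch below is a minimal example; the parent link name (base_link) and the mounting offsets are assumptions you should adjust to match your platform.
<!-- Hypothetical mounting joint: attaches the camera link to the robot body.
     Adjust the parent link and the xyz/rpy offsets to match your robot. -->
<joint name="depth_joint" type="fixed">
  <origin xyz="0.25 0 0.1" rpy="0 0 0" />
  <parent link="base_link" />
  <child link="depth_link" />
</joint>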
The visual mesh for the Intel RealSense camera can be downloaded here:
💡 Ensure you add the IntelRealsense.dae file to your meshes folder before attempting to launch the robot.
For cameras, both visual and depth, you must create an optical link that represents the appropriate frame of the camera image. The default RViz and TF convention has Z facing upwards, while camera optical frames follow the ROS convention of Z facing forward out of the lens, X pointing right, and Y pointing down. Create this rotated depth frame using the link and joint definition below:
<link name="depth_link_optical" />
<joint name="$depth_joint_optical" type="fixed">
<origin xyz="0 0 0" rpy="${-pi/2} 0 ${-pi/2}" />
<parent link="depth_link" />
<child link="depth_link_optical" />
</joint>
Now that the visual component of the link has been created, a Gazebo reference to that link must be set so that the sensor publishes information about the Gazebo environment. To initialize the depth camera, add the following code to your robot sensors URDF:
<gazebo reference="depth_link">
<sensor type="depth" name="depth_camera">
<visualize>false</visualize>
<update_rate>10.0</update_rate>
<always_on>true</always_on>
<pose>0 0 0 0 0 0</pose>
<camera name="${name}_depth_camera">
<horizontal_fov>1.487021</horizontal_fov>
<image>
<width>424</width>
<height>120</height>
<format>B8G8R8</format>
</image>
<clip>
<near>0.5</near>
<far>8</far>
</clip>
</camera>
<plugin name="depth_controller" filename="libgazebo_ros_camera.so">
<ros>
<namespace>d430</namespace>
<remapping>depth/image_raw:=color/image_raw</remapping>
<remapping>depth/depth/image:=depth/image_rect_raw</remapping>
<remapping>depth/camera_info:=camera_info</remapping>
<remapping>depth/camera_info:=depth/camera_info</remapping>
<remapping>depth/points:=depth/points</remapping>
</ros>
<frame_name>depth_link_optical</frame_name>
<cameraName>depth_camera</cameraName>
<hack_baseline>0.07</hack_baseline>
<min_depth>0.05</min_depth>
<max_depth>8.0</max_depth>
</plugin>
</sensor>
</gazebo>
Let's look at what that Gazebo reference block means. First, we must declare which robot link is going to be used as the sensor. That is done using the sensor tag. The name of the sensor, as well as the sensor type, are given within this tag. Some initializing parameters are also set for visualization, update rate, and initial pose.
<sensor type="depth" name="depth_camera">
<visualize>false</visualize>
<update_rate>10.0</update_rate>
<always_on>true</always_on>
<pose>0 0 0 0 0 0</pose>
Next, the sensor configuration is set so that the simulated depth camera behaves like the physical unit. The horizontal field of view, image resolution and format, and near/far clipping distances are all set within the camera tags. For reference, a horizontal FOV of 1.487021 rad is about 85°, close to the wide depth field of view of the RealSense.
<camera name="depth_camera">
<horizontal_fov>1.487021</horizontal_fov>
<image>
<width>424</width>
<height>120</height>
<format>B8G8R8</format>
</image>
<clip>
<near>0.5</near>
<far>8</far>
</clip>
</camera>
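The block above leaves sensor noise at its defaults. If you want to model it, Gazebo's camera sensor accepts an optional noise element inside the camera tag; the following is a minimal sketch, and the standard deviation is an illustrative value rather than one taken from the RealSense datasheet.
<camera name="depth_camera">
  <!-- ...fov, image, and clip settings as above... -->
  <!-- Optional Gaussian pixel noise; the stddev here is illustrative only. -->
  <noise>
    <type>gaussian</type>
    <mean>0.0</mean>
    <stddev>0.007</stddev>
  </noise>
</camera>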
Next, the Gazebo plugin used to generate the camera topics is called. It is loaded within plugin tags (inside the sensor tag). ROS parameters such as the namespace, topic remappings, depth limits, and the frame name are all set within this plugin tag.
<plugin name="depth_controller" filename="libgazebo_ros_camera.so">
<ros>
<namespace>d430</namespace>
<remapping>depth/image_raw:=color/image_raw</remapping>
<remapping>depth/depth/image:=depth/image_rect_raw</remapping>
<remapping>depth/camera_info:=camera_info</remapping>
<remapping>depth/camera_info:=depth/camera_info</remapping>
<remapping>depth/points:=depth/points</remapping>
</ros>
<frame_name>depth_link_optical</frame_name>
<cameraName>depth_camera</cameraName>
<hack_baseline>0.07</hack_baseline>
<min_depth>0.05</min_depth>
<max_depth>8.0</max_depth>
</plugin>
</sensor>
</gazebo>
With the namespace and remappings above, your depth camera will publish to the /d430/color/image_raw, /d430/depth/image_rect_raw, /d430/color/camera_info, /d430/depth/camera_info, and /d430/depth/points topics.
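Finally, if you keep these sensor definitions in a separate xacro file, remember to include it from your main robot description. The sketch below assumes a hypothetical file layout; adjust the package and file names to match your project.
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="hunter_se">
  <!-- Hypothetical include: pulls the depth camera definitions into the main model.
       Adjust the package and file names to match your project layout. -->
  <xacro:include filename="$(find hunter_se_description)/urdf/depth_camera.xacro" />
</robot>
Once everything is in place, rebuild your workspace and relaunch the robot; running ros2 topic list should show the remapped camera topics under the /d430 namespace.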