ORBBEC® Dabai is a depth camera based on binocular structured-light 3D imaging technology. It mainly consists of a left infrared camera (IR camera 1), a right infrared camera (IR camera 2), an IR projector, and a depth processor. The IR projector casts a structured-light (speckle) pattern onto the target scene, the left and right infrared cameras capture the left and right infrared structured-light images respectively, and the depth processor runs the depth-calculation algorithm on the received image pair and outputs a depth image of the scene.


| Parameter name | Parameter index |
| --- | --- |
| Baseline (distance between the imaging centers of the left and right infrared cameras) | 40 mm |
| Depth range | 0.3 m–3 m |
| Power consumption | Whole-machine average < 2 W; peak at the moment the laser turns on < 5 W (duration: 3 ms); typical standby < 0.7 W |
| Depth map resolution | 640×400@30FPS / 320×200@30FPS |
| Color map resolution | 1920×1080@30FPS / 1280×720@30FPS / 640×480@30FPS |
| Accuracy | 6 mm@1 m (81% of the FOV area participates in the accuracy calculation*) |
| Depth FOV | H 67.9°, V 45.3° |
| Color FOV | H 71°, V 43.7° @1920×1080 |
| Latency | 30–45 ms |
| Data transmission | USB 2.0 or above |
| Supported operating systems | Android / Linux / Windows 7/10 |
| Power supply | USB |
| Operating temperature | 10°C–40°C |
| Applicable scenes | Indoor / outdoor (subject to the application scene and related algorithm requirements) |
| Dustproof and waterproof | Basic dustproofing |
| Safety | Class 1 laser |
| Dimensions (mm) | Length 59.6 × width 17.4 × thickness 11.1 |
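The FOV and resolution figures above determine the camera's approximate pinhole focal lengths, which are useful when interpreting its depth images. A minimal sketch of that calculation, using the depth-stream values from the table (the results are rough approximations, not factory calibration data):

```python
import math

def focal_length_px(resolution_px: int, fov_deg: float) -> float:
    """Approximate pinhole focal length in pixels from one image
    dimension and the corresponding field of view."""
    return resolution_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Depth stream: 640x400, FOV H 67.9° / V 45.3° (from the table above)
fx = focal_length_px(640, 67.9)
fy = focal_length_px(400, 45.3)
print(f"approx. depth intrinsics: fx = {fx:.1f} px, fy = {fy:.1f} px")
```

Both values come out close to each other (roughly 475–480 px), which is expected for square pixels.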

Now that the basic parameters of the ORBBEC® Dabai are known, let's put it into practice:

<aside> 💡 Note: Before running the command, please make sure that the programs in all other terminals have been terminated. The termination command is: Ctrl+C

</aside>

  1. First, start the ORBBEC® Dabai camera by running the following command:

    roslaunch astra_camera dabai_u3.launch
    
    
  2. If startup fails, try the following command:

    roslaunch astra_camera dabai_dc1.launch
    
    
  3. The following warnings may appear while the driver is running. They occur because some parameters in the driver are not supported by this camera model, and they can be safely ignored.

    https://github.com/agilexrobotics/limo_pro_doc/raw/master/LIMO_image/dabai.png

View depth camera information

After the depth camera has started successfully, launch rviz to view the images captured by the camera and the depth information it collects.

  1. Open a new terminal and enter the command:

    rviz
    
  2. Then add the Image component to see the picture captured by the camera. The steps are as follows.

    https://github.com/agilexrobotics/limo_pro_doc/raw/master/LIMO_image/rviz_1.png

    https://github.com/agilexrobotics/limo_pro_doc/raw/master/LIMO_image/rviz_2.png

  3. Select camera_link in Fixed Frame.

    https://github.com/agilexrobotics/limo_pro_doc/raw/master/LIMO_image/rviz_3.png

  4. Fill in the corresponding topic in the Image component to get the RGB picture.

    https://github.com/agilexrobotics/limo_pro_doc/raw/master/LIMO_image/rviz_4.png

  5. After completing the above operations, you can see the picture captured by the camera in the Image window.

    https://github.com/agilexrobotics/limo_pro_doc/raw/master/LIMO_image/rviz_5.png

  6. Click Add and add the DepthCloud component to view the point cloud data.

    https://github.com/agilexrobotics/limo_pro_doc/raw/master/LIMO_image/rviz_6.png

  7. Select camera_link in Fixed Frame and select the corresponding topic in the DepthCloud component.

    https://github.com/agilexrobotics/limo_pro_doc/raw/master/LIMO_image/rviz_7.png

  8. Show depth map:

    https://github.com/agilexrobotics/limo_pro_doc/raw/master/LIMO_image/rviz_8.png
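The DepthCloud display builds its point cloud by deprojecting each depth pixel into a 3D point with the pinhole camera model. That math can be sketched as follows; the intrinsic values used here are hypothetical placeholders for a 640×400 depth image, not the camera's real calibration:

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Convert a depth pixel (u, v) with depth in metres to a 3D
    point (x, y, z) in the camera frame using the pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical intrinsics for a 640x400 depth image (placeholder values)
fx, fy, cx, cy = 475.0, 479.0, 320.0, 200.0

# A pixel at the image centre maps straight down the optical axis
print(deproject(320, 200, 1.0, fx, fy, cx, cy))
```

Applying this to every pixel of a depth frame yields the point cloud shown in the DepthCloud view.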

Introduction to the rtabmap algorithm

The rtabmap (Real-Time Appearance-Based Mapping) algorithm provides an appearance-based localization and mapping solution that is independent of time and scale. It aims to solve the problem of online loop-closure detection in large-scale environments while meeting real-time constraints: loop-closure detection works with only a limited number of locations at a time, yet can still access the locations of the entire map when needed.
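As a rough illustration of the appearance-based idea (a toy sketch, not RTAB-Map's actual implementation), loop-closure detection can be thought of as comparing the current image's descriptor against a limited working set of stored locations and flagging a revisit when the similarity crosses a threshold:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def detect_loop_closure(current, memory, threshold=0.9):
    """Return the index of the best-matching stored location, or None.
    'memory' plays the role of the limited working set of locations."""
    best_idx, best_sim = None, threshold
    for i, desc in enumerate(memory):
        sim = cosine_similarity(current, desc)
        if sim >= best_sim:
            best_idx, best_sim = i, sim
    return best_idx

# Toy descriptors: the current view resembles stored location 1
memory = [[1.0, 0.0, 0.0], [0.1, 0.9, 0.1]]
print(detect_loop_closure([0.1, 0.95, 0.05], memory))
```

RTAB-Map's real pipeline uses visual bag-of-words features and a Bayesian filter over loop-closure hypotheses, but the core comparison against a bounded working memory follows this pattern.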

1. Rtabmap algorithm mapping