This page covers vision-based robot functions: line following, color tracking, QR code recognition, and traffic light recognition. Each section explains the code logic behind a function and how to run it, so the techniques can be integrated into robot applications ranging from autonomous navigation to object detection and identification.

Vision-based Line Following

1. Code Logic

  1. First, the camera is initialized. Image data is obtained by subscribing to the messages published by the camera, and each image is converted to OpenCV format using cv_bridge.
  2. The obtained image is preprocessed with operations such as grayscale conversion, Gaussian blur, and edge detection.
  3. The preprocessed image is binarized, converting it into a black-and-white binary image.
  4. Morphological operations, such as dilation, erosion, and opening, are applied to the binary image to enhance the line features.
  5. The Hough transform is used to detect lines, which are then drawn on the image.
  6. By analyzing the slope and position of the detected lines, the direction in which the robot needs to turn is determined, and the robot is commanded to move toward the target direction, as shown in the sketch after this list.
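
To make the pipeline concrete, here is a minimal C++ sketch of such a node using rclcpp, cv_bridge, and OpenCV. It is illustrative rather than the actual limo_visions implementation: the topic names (/camera/color/image_raw, /cmd_vel), the binarization threshold, and the steering gain are assumptions, and the steering step reduces the slope-and-position analysis to a simple proportional controller on the detected segments' average horizontal position.

    #include <functional>
    #include <memory>
    #include <vector>

    #include <rclcpp/rclcpp.hpp>
    #include <sensor_msgs/msg/image.hpp>
    #include <geometry_msgs/msg/twist.hpp>
    #include <cv_bridge/cv_bridge.h>
    #include <opencv2/opencv.hpp>

    class LineFollower : public rclcpp::Node {
    public:
      LineFollower() : Node("line_follower") {
        // Topic names are assumptions; adjust them to the actual camera driver.
        sub_ = create_subscription<sensor_msgs::msg::Image>(
            "/camera/color/image_raw", 10,
            std::bind(&LineFollower::onImage, this, std::placeholders::_1));
        pub_ = create_publisher<geometry_msgs::msg::Twist>("/cmd_vel", 10);
      }

    private:
      void onImage(const sensor_msgs::msg::Image::SharedPtr msg) {
        // Step 1: convert the ROS image message to an OpenCV BGR matrix.
        cv::Mat frame = cv_bridge::toCvCopy(msg, "bgr8")->image;

        // Step 2: preprocess with grayscale conversion and Gaussian blur.
        cv::Mat gray, blurred;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);

        // Step 3: binarize (assumes a dark line on a light floor; tune 100).
        cv::Mat binary;
        cv::threshold(blurred, binary, 100, 255, cv::THRESH_BINARY_INV);

        // Step 4: opening removes speckle noise; dilation thickens the line.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
        cv::morphologyEx(binary, binary, cv::MORPH_OPEN, kernel);
        cv::dilate(binary, binary, kernel);

        // Step 5: edge detection + probabilistic Hough transform; draw segments.
        cv::Mat edges;
        cv::Canny(binary, edges, 50, 150);
        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);
        if (lines.empty()) return;

        double cx = 0.0;
        for (const auto& l : lines) {
          cv::line(frame, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                   cv::Scalar(0, 255, 0), 2);
          cx += (l[0] + l[2]) / 2.0;
        }
        cx /= lines.size();

        // Step 6: steer proportionally toward the line's offset from center.
        geometry_msgs::msg::Twist cmd;
        cmd.linear.x = 0.2;
        cmd.angular.z = -0.005 * (cx - frame.cols / 2.0);
        pub_->publish(cmd);
      }

      rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr sub_;
      rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr pub_;
    };

    int main(int argc, char** argv) {
      rclcpp::init(argc, argv);
      rclcpp::spin(std::make_shared<LineFollower>());
      rclcpp::shutdown();
      return 0;
    }

Building a node like this requires rclcpp, sensor_msgs, geometry_msgs, cv_bridge, and OpenCV as dependencies in the package manifest.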

2. Function Implementation

  1. Launch the camera.

    ros2 launch astra_camera dabai.launch.py
    
  2. Place the car in the sandbox and activate the vision-based line following function.

    ros2 run limo_visions detect_line
    


Color Tracking

Visual color tracking is an object detection and tracking technique based on image processing, which allows real-time tracking and localization of objects of specific colors.

1. Code Logic

  1. Initialize the ROS node and camera subscriber: First, initialize a ROS node using the rclcpp library in ROS2, and create a subscriber for the camera's image messages. Convert the image messages from ROS to OpenCV format using the cv_bridge package.
  2. Define the color range and mask: This code takes a blue target as an example. First, define the lower and upper bounds that represent the target color range. Then, use the inRange function in OpenCV to convert the image to a binary mask, which filters out the target region for further processing.
  3. Detect and draw bounding boxes: The target region in the mask may contain noise and other non-target areas. To identify the exact position of the target, use the findContours function in OpenCV to find the contours and the boundingRect function to calculate the bounding box of the target region. Then, use the rectangle function to draw the bounding box on the original image.
  4. Publish the target position: Lastly, use a publisher in ROS2 to publish the target position to other nodes for further control and processing. A sketch of the whole node follows this list.
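
The steps above could be combined into a node like the following C++ sketch. It is illustrative, not the actual limo_visions code: the topic names (/camera/color/image_raw, /target_position), the HSV bounds for blue, and the choice of publishing the bounding-box center as a geometry_msgs/Point are assumptions.

    #include <algorithm>
    #include <functional>
    #include <memory>
    #include <vector>

    #include <rclcpp/rclcpp.hpp>
    #include <sensor_msgs/msg/image.hpp>
    #include <geometry_msgs/msg/point.hpp>
    #include <cv_bridge/cv_bridge.h>
    #include <opencv2/opencv.hpp>

    class ColorTracker : public rclcpp::Node {
    public:
      ColorTracker() : Node("color_tracker") {
        sub_ = create_subscription<sensor_msgs::msg::Image>(
            "/camera/color/image_raw", 10,
            std::bind(&ColorTracker::onImage, this, std::placeholders::_1));
        // Assumed output topic for the target position.
        pub_ = create_publisher<geometry_msgs::msg::Point>("/target_position", 10);
      }

    private:
      void onImage(const sensor_msgs::msg::Image::SharedPtr msg) {
        // Step 1: convert the ROS image message to an OpenCV BGR matrix.
        cv::Mat frame = cv_bridge::toCvCopy(msg, "bgr8")->image;

        // Step 2: convert to HSV, where a color range is easier to express,
        // and build a binary mask of the target color. The bounds below are
        // a rough blue range in OpenCV's HSV (H in 0-179); tune as needed.
        cv::Mat hsv, mask;
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(100, 120, 70), cv::Scalar(130, 255, 255), mask);

        // Step 3: find contours in the mask and keep the largest one as the
        // target, which discards small noise blobs.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        if (contours.empty()) return;
        auto largest = std::max_element(
            contours.begin(), contours.end(),
            [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
              return cv::contourArea(a) < cv::contourArea(b);
            });

        // Draw the bounding box on the original image.
        cv::Rect box = cv::boundingRect(*largest);
        cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);

        // Step 4: publish the box center (in pixels) as the target position.
        geometry_msgs::msg::Point target;
        target.x = box.x + box.width / 2.0;
        target.y = box.y + box.height / 2.0;
        pub_->publish(target);
      }

      rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr sub_;
      rclcpp::Publisher<geometry_msgs::msg::Point>::SharedPtr pub_;
    };

    int main(int argc, char** argv) {
      rclcpp::init(argc, argv);
      rclcpp::spin(std::make_shared<ColorTracker>());
      rclcpp::shutdown();
      return 0;
    }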

2. Function Implementation