1. Vision-based Line Following

1.(a) Code Logic

  1. First, initialize the camera. Image data is obtained by subscribing to the image topic published by the camera, and each frame is converted to OpenCV format.
  2. The image is then preprocessed with operations such as grayscale conversion, Gaussian blur, and edge detection.
  3. The preprocessed image is thresholded to produce a binary (black-and-white) image.
  4. Morphological operations such as dilation, erosion, and opening are applied to the binary image to clean it up and enhance line detection.
  5. The Hough transform is used to detect lines, which are then drawn on the image.
  6. By analyzing the slope and position of the detected lines, the required steering direction is determined and the robot is commanded to move toward it (see the sketch after this list).
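
The actual implementation ships in the limo_visions package; the following is only a minimal, hypothetical Python (rclpy) sketch of the logic above. The camera topic /camera/color/image_raw, the /cmd_vel topic, the blur and Canny thresholds, and the proportional steering gain are all assumptions, and the thresholding assumes a dark line on a light floor.

    # Hypothetical sketch of the line-following logic; not the limo_visions source.
    import cv2
    import numpy as np
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from geometry_msgs.msg import Twist
    from cv_bridge import CvBridge


    class LineFollowerSketch(Node):
        def __init__(self):
            super().__init__('line_follower_sketch')
            self.bridge = CvBridge()
            # Assumed camera topic; the real name depends on the camera launch file.
            self.sub = self.create_subscription(Image, '/camera/color/image_raw',
                                                self.on_image, 10)
            self.pub = self.create_publisher(Twist, '/cmd_vel', 10)

        def on_image(self, msg):
            frame = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
            width = frame.shape[1]

            # Steps 2-3: grayscale, Gaussian blur, edge detection, thresholding
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            blur = cv2.GaussianBlur(gray, (5, 5), 0)
            edges = cv2.Canny(blur, 50, 150)
            _, binary = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY_INV)

            # Step 4: morphological opening and dilation to remove noise
            kernel = np.ones((3, 3), np.uint8)
            binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
            binary = cv2.dilate(binary, kernel, iterations=1)
            edges = cv2.bitwise_and(edges, binary)  # keep only edges on the dark line

            # Step 5: probabilistic Hough transform to detect line segments
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                                    minLineLength=40, maxLineGap=10)

            # Step 6: steer toward the average horizontal position of the lines
            cmd = Twist()
            if lines is not None:
                centers = [(x1 + x2) / 2.0 for x1, y1, x2, y2 in lines[:, 0]]
                error = sum(centers) / len(centers) - width / 2.0
                cmd.linear.x = 0.2                       # assumed forward speed
                cmd.angular.z = float(-0.005 * error)    # assumed steering gain
                for x1, y1, x2, y2 in lines[:, 0]:
                    cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                             (0, 255, 0), 2)
            self.pub.publish(cmd)


    def main():
        rclpy.init()
        rclpy.spin(LineFollowerSketch())
        rclpy.shutdown()

The simple proportional steering term here stands in for whatever control law the real node uses; the point of the sketch is the image pipeline from subscription through Hough line detection to a velocity command.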

1.(b) Function Implementation

  1. Launch the camera.

    ros2 launch astra_camera dabai.launch.py
    

  2. Place the car in the sandbox and activate the vision-based line following function.

    ros2 run limo_visions detect_line
    


6.2 Color Tracking

Visual color tracking is an object detection and tracking technique based on image processing that enables real-time tracking and localization of objects of a specific color.

Code Logic

  1. Initialize the ROS node and camera subscriber: First, initialize a ROS 2 node using the rclcpp library and create a subscriber for the camera's image messages. Use the cv_bridge library to convert the incoming ROS image messages to OpenCV format.
  2. Define the color range and mask: This code takes a blue target as the tracking example. First, define lower and upper bounds that represent the target color range. Then use OpenCV's inRange function to convert the image into a binary mask that keeps only the pixels within that range, isolating the target region for further processing.
  3. Detect and draw bounding boxes: The masked region may still contain noise and other non-target areas. To locate the target precisely, use OpenCV's findContours function to extract contours and boundingRect to compute the bounding box of the target contour. Then use the rectangle function to draw the bounding box on the original image.
  4. Publish the target position: Lastly, use a ROS 2 publisher to publish the target position so that other nodes can use it for control and navigation (a sketch follows this list).
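
The steps above describe a C++ node built on rclcpp; purely as an illustration, here is a hypothetical Python (rclpy) sketch of the same flow. The topic names, the blue HSV bounds, and the use of geometry_msgs/Point for the published position are assumptions, not the package's actual interface.

    # Hypothetical Python sketch of the color-tracking flow described above.
    import cv2
    import numpy as np
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from geometry_msgs.msg import Point
    from cv_bridge import CvBridge


    class BlueTrackerSketch(Node):
        def __init__(self):
            super().__init__('color_tracker_sketch')
            self.bridge = CvBridge()
            # Assumed topic names; adjust to the camera driver and downstream nodes.
            self.sub = self.create_subscription(Image, '/camera/color/image_raw',
                                                self.on_image, 10)
            self.pub = self.create_publisher(Point, '/target_position', 10)

        def on_image(self, msg):
            # Step 1: convert the ROS image message to an OpenCV image
            frame = self.bridge.imgmsg_to_cv2(msg, 'bgr8')

            # Step 2: mask pixels within an assumed blue HSV range
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, np.array([100, 100, 50]),
                               np.array([130, 255, 255]))

            # Step 3: find contours in the mask and box the largest one
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return
            largest = max(contours, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(largest)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

            # Step 4: publish the bounding-box center as the target position
            target = Point()
            target.x = x + w / 2.0
            target.y = y + h / 2.0
            self.pub.publish(target)


    def main():
        rclpy.init()
        rclpy.spin(BlueTrackerSketch())
        rclpy.shutdown()

Taking the largest contour by area is one simple way to suppress small noise blobs left in the mask; smoothing or morphological filtering of the mask would serve the same purpose.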