NAV2 is the professionally supported successor to the ROS 1 Navigation Stack. The project seeks a safe way to have a mobile robot move through an environment to complete tasks, across many classes of robot kinematics. The NAV2 stack can move a robot from Point A to Point B, navigate through intermediary poses, perform object following, and more.
Its components provide perception, planning, control, localization, visualization, and more to build a reliable autonomous system.
NAV2 uses behavior trees to compose customized and intelligent behavior from modular servers. The expected inputs to NAV2 are TF transformations, a map source, a BT XML file, and relevant sensor data sources. The diagram below illustrates the architecture of the NAV2 stack.
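As a concrete illustration of the Point A to Point B behavior, the sketch below uses the Nav2 Simple Commander Python API (`nav2_simple_commander`) to send a single goal pose. It assumes a working NAV2 bringup, and the goal coordinates are placeholder values.

```python
#!/usr/bin/env python3
"""Minimal sketch: sending a NAV2 goal from Point A to Point B."""
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult


def main():
    rclpy.init()
    navigator = BasicNavigator()

    # Block until the NAV2 lifecycle nodes are active.
    navigator.waitUntilNav2Active()

    # Point B, expressed in the 'map' frame (placeholder coordinates).
    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 2.0
    goal.pose.position.y = 1.0
    goal.pose.orientation.w = 1.0

    navigator.goToPose(goal)
    while not navigator.isTaskComplete():
        pass  # feedback could be polled here with navigator.getFeedback()

    if navigator.getResult() == TaskResult.SUCCEEDED:
        print('Reached Point B')

    navigator.lifecycleShutdown()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```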
<aside> 💡 Information contained within the following documentation was referenced from the official NAV2 documentation site: navigation.ros.org.
</aside>
Having a good understanding of ROS 2 is necessary before beginning to experiment with the NAV2 system. Complete the instructions in the Getting Started with ROS2 tutorials before proceeding. In the pages below, key navigation concepts are introduced and explained. A strong grasp of these concepts will be valuable when debugging and developing your NAV2 system.
<aside> 👉 Follow Creating a Robot in Simulation or clone tracer_ros2 & hunter_ros2 to get started with the Simulation Guide.
</aside>
The following section provides the information needed to configure a simulated robot that can be controlled through NAV2 in a Gazebo environment. We are going to use the teleop robot simulations built in the Creating a Robot in Simulation tutorial as a starting point and add the necessary sensors and packages. Before adding these sensors, make sure that an accurate TF tree for your robot base is set up in the robot's URDF.
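A common way to provide that TF tree is to feed the robot's URDF to `robot_state_publisher` from a launch file. The sketch below assumes a hypothetical `my_robot_description` package that installs `urdf/robot.urdf.xacro`; adjust the names for your own robot.

```python
"""Minimal launch sketch: publishing the robot's TF tree from its URDF."""
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node
import xacro


def generate_launch_description():
    urdf_path = os.path.join(
        get_package_share_directory('my_robot_description'),  # hypothetical package
        'urdf', 'robot.urdf.xacro')
    robot_description = xacro.process_file(urdf_path).toxml()

    return LaunchDescription([
        # robot_state_publisher reads the URDF and broadcasts the TF tree.
        Node(
            package='robot_state_publisher',
            executable='robot_state_publisher',
            parameters=[{'robot_description': robot_description}],
        ),
    ])
```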
The flexibility of the NAV2 stack means that the planning and movement commands generated by the navigation nodes can be sent to a physical robot base in the same way they are sent to a simulated one. Using NAV2 on a physical robot applies all of the navigation principles explored above to a real environment. To achieve the same result as defining sensors in a URDF, the robot's physical sensors must be launched and attached to the ROS system so that their data is published on ROS topics. Below are resources for launching some common sensors on physical robots.
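One quick way to confirm that a physical sensor is attached correctly is to subscribe to its topic and check that messages arrive. The sketch below listens for lidar scans on `/scan`, a common but not universal topic name; substitute your sensor's actual topic.

```python
"""Minimal sketch: verifying that a physical lidar is publishing."""
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanChecker(Node):
    def __init__(self):
        super().__init__('scan_checker')
        # '/scan' is an assumed topic name; change it to match your driver.
        self.create_subscription(LaserScan, '/scan', self.callback, 10)

    def callback(self, msg):
        # Report the number of range readings in each incoming scan.
        self.get_logger().info(f'Received scan with {len(msg.ranges)} ranges')


def main():
    rclpy.init()
    rclpy.spin(ScanChecker())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```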
The sensors installed on a robot are used for navigation in two main ways: localization and mapping. Robot mapping is the process of inferring a spatial model of the robot's environment from the input sensor data. A robot will create either a ground map (indoor mapping) or an environment map (outdoor mapping) to represent obstacles, boundaries, and safe areas for movement. When beginning navigation in a new environment, a map must first be built by driving the robot around the environment while collecting sensor data and the corresponding odometry information.
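As one example of building such a map, slam_toolbox can be included from a launch file and run while the robot is driven around; it assembles the map from scans and odometry. The sketch below includes slam_toolbox's stock online (asynchronous) launch file and assumes a standard slam_toolbox install and a simulated clock.

```python
"""Minimal launch sketch: running slam_toolbox to build a map."""
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    slam_launch = os.path.join(
        get_package_share_directory('slam_toolbox'),
        'launch', 'online_async_launch.py')

    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(slam_launch),
            # Use the simulation clock when mapping in Gazebo.
            launch_arguments={'use_sim_time': 'true'}.items(),
        ),
    ])
```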
Robot localization is the process of determining where a mobile robot is located with respect to its environment. Localization is one of the most fundamental components of navigation, as accurate knowledge of the robot's position is needed to plan maneuvers and actions that avoid collisions with the environment around it.
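With AMCL-style localization on an existing map, the robot's initial pose estimate is typically seeded before navigation begins. The sketch below does this through the Nav2 Simple Commander API; it assumes NAV2 with AMCL is already running, and the pose values are placeholders.

```python
"""Minimal sketch: seeding the localizer with an initial pose estimate."""
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator


def main():
    rclpy.init()
    navigator = BasicNavigator()

    # Where the robot actually starts, expressed in the 'map' frame
    # (placeholder values).
    initial = PoseStamped()
    initial.header.frame_id = 'map'
    initial.header.stamp = navigator.get_clock().now().to_msg()
    initial.pose.position.x = 0.0
    initial.pose.position.y = 0.0
    initial.pose.orientation.w = 1.0

    # Publishes the estimate so the localizer can converge from it.
    navigator.setInitialPose(initial)
    navigator.waitUntilNav2Active()

    rclpy.shutdown()


if __name__ == '__main__':
    main()
```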
<aside> 💡 One common approach to robotic localization is SLAM (Simultaneous Localization and Mapping), in which an autonomous vehicle builds a map and localizes itself within it at the same time.
</aside>