Visual Teach and Repeat 3 navigation framework: Now open source!

Visual Teach and Repeat (VT&R) is a navigation system for mobile robots developed and maintained by Timothy Barfoot and his team at the Autonomous Space Robotics Lab (ASRL) at the University of Toronto Institute for Aerospace Studies (UTIAS). Visual Teach and Repeat 3 (VT&R3), the C++ implementation of the system for robot navigation with a camera or LiDAR sensor, is now available on GitHub.

In the teach phase, a user drives the robot manually to teach a path, while the system builds a map. Afterwards, during the repeat phase, the map is used for localization as the robot follows the path autonomously. VT&R relies on local submaps only, which facilitates repetition of long paths without the need for accurate global reconstruction.
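
To make the two phases concrete, here is a minimal C++ sketch of the teach-then-repeat flow over local submaps. Every name in it (Pose, Submap, PathVertex, teach, repeat_step) is an illustrative placeholder, not the VT&R3 API.

```cpp
// Minimal sketch of the teach/repeat flow over local submaps.
// All names here are illustrative placeholders, not the VT&R3 API.
#include <vector>

struct Pose { double x = 0, y = 0, theta = 0; };
struct Submap { std::vector<Pose> landmarks; };  // local map around one path vertex

struct PathVertex {
  Pose pose_in_submap;  // pose relative to its own submap; no global frame needed
  Submap submap;
};

// Teach: while the operator drives, store a vertex and a local submap
// built from sensor data at each keyframe along the path.
std::vector<PathVertex> teach(const std::vector<Pose> &driven_keyframes) {
  std::vector<PathVertex> path;
  for (const auto &kf : driven_keyframes) {
    Submap local;  // in a real system: populated from live sensor data
    path.push_back({kf, local});
  }
  return path;
}

// Repeat: localize against the nearest submap only; the path follower
// needs just this relative pose, never an accurate global reconstruction.
Pose repeat_step(const PathVertex &nearest, const Pose &odometry_guess) {
  // in a real system: match live sensor data to nearest.submap and
  // refine odometry_guess into a relative pose estimate
  (void)nearest;
  return odometry_guess;  // placeholder for the refined estimate
}

int main() {
  auto path = teach({{0, 0, 0}, {1, 0, 0}, {2, 0.1, 0}});
  Pose relative = repeat_step(path[1], {1.02, -0.01, 0.0});
  (void)relative;
}
```

The point of this structure is that the repeat step only ever needs a pose relative to the nearest taught vertex, which is why no globally consistent map is required.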

VT&R handles long-term navigation through multi-experience localization. Each time the robot repeats a path, data from the new experience is stored in a spatio-temporal pose graph. Previously collected experiences can then be used to bridge the appearance gap when localizing against the map as the environment changes.
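
As a rough illustration of the idea, again with made-up names rather than VT&R3's actual data structures, each place along the path can accumulate observations from several runs, and repeat-time localization picks whichever stored experience best matches the live view:

```cpp
// Illustrative sketch of multi-experience localization on a
// spatio-temporal pose graph; made-up names, not VT&R3's data structures.
#include <cstdint>
#include <map>
#include <optional>
#include <utility>
#include <vector>

struct Observation { /* features or points recorded at one place */ };

struct PoseGraph {
  // Every stored experience of each place along the taught path,
  // keyed by the place's index on the path.
  std::map<std::uint32_t, std::vector<Observation>> experiences;

  // Each repeat run appends its data, so the graph grows in the
  // temporal direction as well as along the path.
  void add_experience(std::uint32_t place, Observation obs) {
    experiences[place].push_back(std::move(obs));
  }

  // Localization may match against any past experience of this place,
  // using more recent experiences to bridge large appearance change.
  std::optional<Observation> best_match(std::uint32_t place,
                                        const Observation &live) const {
    auto it = experiences.find(place);
    if (it == experiences.end() || it->second.empty()) return std::nullopt;
    (void)live;  // in a real system: score each stored experience vs. live
    return it->second.back();  // placeholder: most recent experience
  }
};

int main() {
  PoseGraph graph;
  graph.add_experience(0, {});  // teach pass
  graph.add_experience(0, {});  // data from a later repeat of the same place
  auto match = graph.best_match(0, {});
  (void)match;
}
```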

VT&R3 is designed for easy adaptation to various robots and sensors, such as camera, LiDAR, RaDAR, or GPS. The current implementation includes a feature-based pipeline for stereo cameras and a point-cloud-based pipeline for LiDAR sensors. A more detailed description of VT&R3 can be found on the project's wiki page.
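
One common way to achieve that kind of sensor flexibility is a per-sensor pipeline behind a shared interface; the sketch below assumes that pattern, with hypothetical names rather than the actual VT&R3 interfaces:

```cpp
// Guess at a per-sensor pipeline abstraction; hypothetical names,
// not the actual VT&R3 interfaces.
#include <memory>

struct SensorData { /* image or point cloud */ };
struct Frame { /* pose estimate plus map updates for one keyframe */ };

class Pipeline {
 public:
  virtual ~Pipeline() = default;
  // Odometry, mapping, and localization for one sensor packet.
  virtual Frame process(const SensorData &data) = 0;
};

// Feature-based pipeline for a stereo camera.
class StereoFeaturePipeline : public Pipeline {
 public:
  Frame process(const SensorData &) override { return {}; }
};

// Point-cloud-based pipeline for a LiDAR sensor.
class LidarPointCloudPipeline : public Pipeline {
 public:
  Frame process(const SensorData &) override { return {}; }
};

int main() {
  // Swapping sensors means swapping pipelines behind the same interface.
  std::unique_ptr<Pipeline> pipeline = std::make_unique<LidarPointCloudPipeline>();
  Frame frame = pipeline->process({});
  (void)frame;
}
```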

See GitHub for details.