Clearpath Grizzly, an autonomous robot from Professor Tim Barfoot’s lab at the University of Toronto’s Institute for Aerospace Studies (UTIAS), automatically repeats a route at the Canadian Space Agency offices in Longueuil, Que., using only stereo vision for feedback (i.e., without GPS). (Photo: Francois Pomerleau)

This story is the third in a three-part series on the University of Toronto’s Institute for Aerospace Studies, produced over the winter and spring of 2016.

This week, Google, Ford and Uber united behind a push to accelerate the deployment of self-driving cars. But in Professor Tim Barfoot’s lab at the University of Toronto’s Institute for Aerospace Studies (UTIAS), mobile robots have been driving themselves for years.

The techniques developed by Barfoot’s team apply not only to self-driving cars, but also to robots designed to explore places too dangerous for humans, from mining sites to Mars.

“For any application, it’s fundamentally important to know where the robot is,” says Barfoot. “One way we often think of solving that problem is using a GPS. Unfortunately, a GPS is not accurate enough to solve any real robot problem.”

 GPS doesn’t work inside mines, under the ocean or on other planets. Even in cities, it’s often unreliable and imprecise. “It’s good enough to know which block you’re on, but not good enough to know what lane you’re in,” he says.

Instead, Barfoot and his team focus on developing systems that allow robots to move safely through the world based on visual sensors. They use cameras similar to those found in smartphones, but also more sophisticated systems like light detection and ranging (LIDAR).

“LIDAR is basically a laser that spins around on top of the robot, and measures the distance between it and objects around it, collecting what we call point cloud data,” says Barfoot. “If we stitch all of that together, we can build a 3D model of the environment that the robot is driving through.”

This robot is using LIDAR to create a 3D map of its surroundings, in this case the UTIAS campus.
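
The stitching step Barfoot describes can be sketched in a few lines: each scan is transformed from the sensor frame into a common world frame using the robot’s pose at the moment of capture, then the transformed points are merged. Below is a minimal Python sketch, assuming the poses are already known; the function and variable names are illustrative, not taken from the lab’s software.

    # Minimal sketch: merging LIDAR scans into one world-frame point
    # cloud, assuming each scan arrives with a known sensor pose.
    import numpy as np

    def stitch_scans(scans, poses):
        # scans: list of (N, 3) arrays of points in the sensor frame.
        # poses: list of (R, t) pairs, where R is a 3x3 rotation matrix
        # and t a 3-vector locating the sensor in the world frame.
        world_points = []
        for points, (R, t) in zip(scans, poses):
            # p_world = R @ p_sensor + t, applied to every point
            world_points.append(points @ R.T + t)
        return np.vstack(world_points)  # one combined point cloud

    # Toy usage: the same scan captured from two poses half a metre apart.
    scan = np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
    pose_a = (np.eye(3), np.zeros(3))
    pose_b = (np.eye(3), np.array([0.5, 0.0, 0.0]))
    cloud = stitch_scans([scan, scan], [pose_a, pose_b])  # shape (4, 3)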

Such maps can help robots determine where they are, but according to Barfoot, this is only the first problem that needs to be solved. The next steps are to plan a safe route to get from A to B, and then issue commands to wheels and motors to execute that plan. Barfoot and his team have developed a technique called “visual teach and repeat” that solves all three problems.
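
Those three steps, localize, plan, act, run as a repeating loop. The toy Python sketch below reduces the idea to a robot on a one-dimensional track; every name and number in it is an illustrative assumption, not the lab’s software.

    # Toy localize -> plan -> act loop on a 1D track.
    def localize(sensor_reading):
        return sensor_reading          # 1. estimate where the robot is

    def plan(pose, goal):
        return goal - pose             # 2. remaining distance to travel

    def act(error, gain=0.5):
        return gain * error            # 3. motor command for this step

    pose, goal = 0.0, 10.0
    for _ in range(20):
        pose += act(plan(localize(pose), goal))
    print(round(pose, 2))              # converges on the goal at 10.0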

First, a robot is equipped with a visual sensor, such as a video camera. Then, a human operator drives the robot along a route he or she knows to be safe, while the robot logs the visual information associated with that route. On later runs, the robot compares what it currently sees with the logged route and adjusts its actions to stay on the right path.

This video footage, provided by CBC News, shows a Clearpath Grizzly mobile robot, the Skull Crusher, autonomously repeating a previously taught route using stereo vision and the visual teach and repeat (VT&R) technique.

“You can think of it as the human programming the robot in a really intuitive way, without ever having to type code,” says Barfoot. He gives the example of a mining truck: a human driver could “teach” the robot all the safe routes between given pickup and drop-off points, then the robot could find its own way to any point along the previously taught route.
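
One way to picture the repeat pass is as a feedback loop on the camera image itself: the robot steers so that landmarks reappear where they were logged during teaching. The Python sketch below is a drastic simplification of real VT&R, tracking a single landmark, with all names and gains invented for illustration.

    # Illustrative teach-and-repeat sketch (not the lab's VT&R code).
    taught_route = []  # filled during the human-driven teach pass

    def teach(landmark_pixel_x):
        # log where a tracked landmark appears in the image at this step
        taught_route.append(landmark_pixel_x)

    def repeat_step(step_index, landmark_pixel_x, gain=0.01):
        # steer in proportion to the offset between what the robot sees
        # now and what it saw at the same step during teaching
        offset = landmark_pixel_x - taught_route[step_index]
        return -gain * offset  # steering command that re-centres the view

    teach(320.0)                   # teach pass: landmark logged at pixel 320
    steer = repeat_step(0, 350.0)  # repeat pass: drifted right, steer back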

Barfoot’s lab typically uses off-the-shelf components, but there are limits to what they can do. Recently, his team custom-designed and built the Tethered Robotic Explorer (TReX) robot to rappel down nearly vertical surfaces.

The Tethered Robotic Explorer (TReX) is a robotic platform used for multiple research topics, such as tethered control, constrained navigation, 3D mapping and structural inspection.

“We wanted to be able to access really steep terrain and we couldn’t find a platform that was able to do that, so we built our own,” says Barfoot. “The long-term vision is to attach the robot to an anchor, push one button, and have it automatically navigate down the surface of the cliff, find all the places that it can get to and build a beautiful 3D model that you can render from any perspective.”

The team has been collaborating with geologists at Western University who are interested in using TReX to explore cliff faces where different geological layers are exposed. They are also considering working with Hydro-Québec, which would like to use TReX to inspect the surfaces of hydroelectric dams.

Working with researchers from other backgrounds is important to Barfoot. In addition to his own work, Barfoot is a member of the steering committee of the Institute for Robotics and Mechatronics, a multidisciplinary network of robotics researchers from across U of T Engineering and the Department of Computer Science. “It’s still early days, but I think it’s a good step forward for the University of Toronto to try and bring all of robotics under one roof,” says Barfoot. “We’d like to build a dialogue between robotics researchers in the many different departments.”

As for the future, Barfoot believes that most of the technical requirements to see wider application of self-driving robots are already in place. “The main bottlenecks have to do with human factors, liability, insurance and that kind of thing,” he says. “We are just waiting for computers to get a little faster and sensors to get a little better, and then we’re going to have self-driving cars.”

 
