Professor Steven Waslander (UTIAS) joined the University of Toronto on May 1, 2018. He is a leading expert in control systems for aerial and terrestrial robotics. (Courtesy: Steven Waslander)

Professor Steven Waslander has spent his career to date designing robots and building in the brains that allow them to operate autonomously. As a graduate student he worked on some of the first quadcopter aerial robots, and later developed an autonomous car sophisticated enough to drive on public roads. Now he’s bringing his fleet of flying and driving robots to the University of Toronto Institute for Aerospace Studies (UTIAS), where he will lead the Toronto Robotics and AI Laboratory (TRAILab).

Marit Mitchell sat down with Waslander to learn more about the new challenges for autonomous driving and drones, and his hopes for the Canadian robotics scene.


Welcome to U of T! Can you share a bit about your background?

I’m officially joining UTIAS this May, having spent 10 years with the University of Waterloo in the Department of Mechanical and Mechatronics Engineering. I received my BScE in Applied Mathematics from Queen’s University in 1998, then worked for three years at Pratt & Whitney Canada in turbofan controls. I completed my Master’s, PhD and a brief post-doc at Stanford University from 2001 to 2008. We were one of the first groups to start building and flying quadrotors outdoors, and we found that, due to the specific configuration of rotors, it was really easy to build, fly and control them — that’s when the whole field of aerial robotics really exploded.

What’s most interesting to me now in terms of research is how we best increase the levels of autonomy of robotic systems, allowing them to interact and behave in ways that appear natural and efficient to humans and that help us in everyday life. Drones have become a convenient aerial platform that can really go anywhere, hover, inspect and complete arbitrary tasks, so I find identifying new applications for drones really interesting. Nonetheless, it’s really hard to fit all the computing you need on a small quadrotor for it to truly exhibit complex autonomous behaviours.

Because of these limits, robotics has progressed rapidly in larger vehicles that can accommodate more computers, manipulators and sensors, and therefore do more complicated things. A prime example of this would be autonomous driving, which emerged in the mid-2000s and has become an enormous pull for the entire field, including myself. I get to take the same algorithms and approaches that I’ve been developing for quadrotor drones, and immediately try them out on larger vehicles with far fewer limitations.

What is your current research direction?

My team is focused on two main applications: autonomous driving and aerial inspection.

On the autonomous driving side, the main research push is in perception, specifically 3D object detection — detecting pedestrians, bicycles, cars and trucks, including their class, their extent and their position on the road, all while in motion. We’re trying to build neural networks that fuse multiple channels of information in a principled manner, providing both object state and uncertainty estimates so that detections from multiple sources can be better aggregated. At the same time, we’re minimizing network size and computational requirements to ensure the best approaches can all run on board our self-driving test vehicle at the update rates needed to operate on public roads. This real-world testing pushes the way we develop the neural networks away from mainstream computer vision benchmarks and toward what is useful for a wide range of robotics applications.
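To make the fusion idea concrete, here is a minimal, hypothetical sketch of uncertainty-weighted detection fusion: two detectors (say, a camera network and a lidar network) each report a 3D object position with a per-axis variance, and the estimates are combined by inverse-variance weighting, the simplest principled way to let the more certain source dominate. All names and numbers are illustrative, not drawn from TRAILab’s actual systems.

```python
import numpy as np

def fuse_detections(pos_a, var_a, pos_b, var_b):
    """Inverse-variance fusion of two position estimates of one object.

    pos_a, pos_b: (3,) arrays, estimated object centre in metres.
    var_a, var_b: (3,) arrays, per-axis variance of each estimate.
    Returns the fused position and its (smaller) per-axis variance.
    """
    w_a, w_b = 1.0 / var_a, 1.0 / var_b   # precision weights
    fused_var = 1.0 / (w_a + w_b)         # combined precision
    fused_pos = fused_var * (w_a * pos_a + w_b * pos_b)
    return fused_pos, fused_var

# Illustrative example: lidar is precise in depth (x), the camera less so.
lidar_pos = np.array([12.1, 3.4, 0.9]); lidar_var = np.array([0.05, 0.05, 0.10])
cam_pos   = np.array([11.8, 3.5, 1.0]); cam_var   = np.array([0.40, 0.10, 0.10])
pos, var = fuse_detections(lidar_pos, lidar_var, cam_pos, cam_var)
print(pos, var)  # fused estimate sits closer to the lower-variance source
```

A full pipeline would fuse whole state vectors with covariance matrices (a Kalman-style update), but this per-axis version captures the core idea of weighting each source by its reported uncertainty.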

I also continue to work heavily in simultaneous localization and mapping — SLAM for short — for drones, and now we’re incorporating gimballed cameras into the process. Drones are now being built specifically for cinematography, with beautiful HD or 4K image-stabilized cameras on board. My group is looking at using those cameras to improve motion tracking and SLAM performance — taking the gimbal information when the camera isn’t being used for videography and fusing it into onboard state estimation, then identifying the most promising directions to look in order to best refine state estimates and, as a result, improve vehicle control. Further, we are investigating methods to identify obstacles in our path from vision data alone, in computationally efficient ways that are robust to illumination changes. The main goal is to reduce the number of cameras needed on board for safe autonomous flight, because the cameras we do use provide better information. That, in turn, reduces the vehicle’s weight, which improves flight time, lowers energy requirements and reduces battery weight — it opens up a whole new level of operational capability.
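As a toy illustration of why the gimbal matters for state estimation (my sketch, with assumed frame conventions, not the lab’s code): a landmark observed in the stabilized camera frame has to be rotated through the current gimbal angles before it can be fused with the drone’s body-frame state estimate.

```python
import numpy as np

def rot_y(pitch):
    """Rotation about the y-axis (gimbal pitch)."""
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(yaw):
    """Rotation about the z-axis (gimbal yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_to_body(p_cam, gimbal_pitch, gimbal_yaw):
    """Map a point from the gimballed camera frame into the body frame.

    Assumes an x-forward, y-right, z-down body frame and a yaw-then-pitch
    gimbal; real gimbals add roll and a lever-arm offset, omitted here.
    """
    R_body_cam = rot_z(gimbal_yaw) @ rot_y(gimbal_pitch)
    return R_body_cam @ p_cam

# Example: a landmark 5 m straight ahead of a camera tilted 30 degrees
# downward (negative pitch in this convention) maps to a point ahead of
# and below the vehicle in the body frame.
p_body = camera_to_body(np.array([5.0, 0.0, 0.0]),
                        gimbal_pitch=np.radians(-30.0), gimbal_yaw=0.0)
print(p_body)  # approx. [4.33, 0.0, 2.5] (z positive is down)
```

In a full SLAM pipeline this rotation (plus the gimbal lever arm) enters the measurement model, so the estimator can exploit the stabilized images without mistaking gimbal motion for vehicle motion.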

And where will you focus your teaching?

I love teaching, and I’m excited about the EngSci students in the robotics stream in particular — I think it’s an extremely popular stream, and for good reason. There are some great courses that I’ve already seen, and I’ll start with the autonomous mobile robotics course in the winter. I’m also excited to teach at the graduate level, where I will offer the existing state estimation course at UTIAS this fall, and I’ve been building a deep learning for autonomous driving course that I hope to bring to UTIAS in the near future. That course focuses on deep learning challenges specific to autonomous driving, such as lane, sign, vehicle and pedestrian detection; fusion of multiple sensor streams; and robust learning in the presence of adverse weather. The result is a strong Canadian perspective on the state of the art in perception for autonomous vehicles.

What appealed to you about U of T?

The primary aspect that attracts me to Toronto, and UTIAS in particular, is the group that I’m joining: to have five field roboticists with such similar and complementary interests is tremendously exciting. Angela Schoellig, Tim Barfoot, Jonathan Kelly (all UTIAS) and I are all on the NSERC Canadian Field Robotics Network, so through that we’ve been working together and consulting with each other for over five years. That was a big part of why I came — Toronto is quickly emerging as a robotics and AI powerhouse, and I hope to be able to help increase our impact worldwide.

I see a lot of opportunities here for my students as well, with so many great collaborators and advanced courses in AI and robotics. Being able to expose my students to so many other labs and researchers working in similar areas will greatly accelerate their training and help them find exciting projects. The Institute has wonderful facilities, including a flight arena, outdoor test areas for drones and ground rovers, and a large dome used for field robotics all year round, making experiments extremely convenient to run.

What do you have in mind for the next year?

We’re just finishing renovations on a new robotics lab space at UTIAS — it’s a beautiful, clean, open space that will promote interaction between the students and draw in other teams, and we’ll have direct access to the outdoors for testing and validation. So we’ll spend the first few months moving in and setting up. I have eight students joining me from Waterloo in September, and I’m hoping to build up my group from there with new talent. I’m aiming to complete 100 kilometres of public-road driving experiments this summer, in preparation for a big push this winter on autonomous driving in ugly winter conditions. I’d also like to initiate a few big grant efforts in which multiple faculty work together on longer-term robotics challenges. I want to see the Canadian robotics scene flourish and become truly world-class — that’s really what I’m hoping to build going forward.

This conversation has been condensed and edited for length.
