As the Schwartz Reisman Institute for Technology and Society (SRI) enters its sixth year of operation, world-leading computer security expert Professor David Lie (ECE) will take on the role of the institute’s director, while Professors Roger Grosse and David Duvenaud, two renowned artificial intelligence safety experts, are being appointed Schwartz Reisman Chairs in Technology and Society.
Lie is a professor in the Edward S. Rogers Sr. Department of Electrical & Computer Engineering in the Faculty of Applied Science & Engineering at the University of Toronto and a Tier 1 Canada Research Chair in Secure and Reliable Systems. He also holds cross-appointments in U of T’s Department of Computer Science and Faculty of Law.
Grosse and Duvenaud are both associate professors in the Department of Computer Science at U of T and founding members of the Vector Institute, where they are currently faculty members and Canada CIFAR AI Chairs.
Lie, Grosse and Duvenaud succeed inaugural SRI director and chair Gillian Hadfield, whose five-year term as Schwartz Reisman Chair ends in June 2024. Hadfield stepped down as SRI’s director in December 2023, and the role of interim director has been filled for the past six months by Kelly Lyons, a professor in the Faculty of Information with a cross-appointment to the Department of Computer Science. The chair that Hadfield held has now been expanded from a single appointment to two, held by Grosse and Duvenaud.
Lie, who has served as a research lead at SRI, says his decades of research on making computer systems more secure and trustworthy, including contributions to computer architecture, formal verification, operating systems and networking, have equipped him to tackle the complex issues posed by AI, which will require researchers to anticipate and adapt to the unexpected.
“As AI systems become more powerful, they may do things — or are already doing things — that we didn’t anticipate or expect,” says Lie. “Bringing cybersecurity skills, thinking and tools into the AI safety discussion will be absolutely critical to solving the problem.”
Lie emphasizes that interdisciplinary collaboration is key to addressing potential AI disruption, noting that it has been pivotal in his own research and other roles.
His current research focuses on securing mobile platforms, cloud computing security and bridging the divide between technology and policy. He is also an associate director at the Data Sciences Institute, a U of T institutional strategic initiative, a faculty affiliate at the Vector Institute for Artificial Intelligence and a senior fellow at Massey College.
“It’s really one of the things that I love about a place like U of T, because it’s big and you have experts in every imaginable field to collaborate with,” he says. “I feel very strongly that we can always accomplish way more together than we can individually. That’s true for people, but that’s also true for disciplines.”
As incoming Schwartz Reisman Chairs in Technology and Society, Grosse and Duvenaud have vital roles to play in driving SRI’s research agenda and sharing its findings with the world, says Lie.
“One of the main ways universities contribute to society is through research, but we also contribute through discourse; we contribute by translating knowledge and providing that to policymakers, decision-makers and stakeholders,” he says. “I see SRI playing an important part in these roles.”
Grosse and Duvenaud are both globally renowned experts in AI safety — a field of research dedicated to ensuring that AI systems behave safely and reliably, with minimal risk of unintended consequences or harm to humanity. In addition to their academic roles at U of T, they are both currently working with Anthropic, an AI safety and research company building reliable, interpretable and steerable AI systems. Grosse helped to develop a new AI safety tool on Anthropic’s Alignment team, while Duvenaud is working on designing evaluations and mitigations for possible catastrophic misalignment of future AI models.
Grosse and Duvenaud join Department of Computer Science colleague and continuing SRI Associate Director Sheila McIlraith, also a Canada CIFAR AI Chair at the Vector Institute, whose research in human-compatible AI and AI alignment adds strength and breadth to the technical team. McIlraith has played a leadership role at SRI in positioning the institute for an increasingly targeted focus on issues in AI safety.
“David Lie is the ideal person to lead SRI at this moment, and I’m thrilled that he will be serving in this role,” says Lyons.
“He is the paragon of interdisciplinary collaboration, a distinguishing hallmark of SRI. We pride ourselves on the wide variety of academic disciplines represented in the SRI community — from economics to philosophy to computer science and beyond — and David will now lead this diverse and robust community as we head into our next exciting chapter.”
“Furthermore, Roger Grosse and David Duvenaud are both doing incredibly important work in AI safety, governance and alignment,” adds Lyons.
“Their prestigious appointments as intellectual thought leaders of SRI will ensure that we — our institute, the University of Toronto, and Canada at large — will lead the way in ensuring that powerful technologies are deployed responsibly, safely and for public benefit.”
Grosse and Duvenaud’s appointments coincide with steadily increasing global debate around the potential future risks posed by advanced AI systems, with expert commentary on the topic coming from pioneers of deep learning such as Geoffrey Hinton, an SRI Advisory Board member. Hinton’s November 2023 lecture on whether digital intelligence would replace biological intelligence drew a full house at Convocation Hall.
“With initiatives like the Canadian federal government’s intent to create a new AI safety institute and the U.S.’s new program to advance sociotechnical testing and evaluation of AI, it couldn’t be a better time to have Roger Grosse and David Duvenaud bring their leading voices in this area to our community — and the world,” says SRI Executive Director Monique Crichlow.
“And as we strive to not only mitigate the harm that can be caused by advanced technologies, but also create publicly beneficial applications of advanced AI systems, David Lie’s expertise in protecting advanced computing technologies from malicious attacks, data breaches and unauthorized access will be crucial as well,” says Crichlow.