Posted on December 1, 2020
The products and services we use in our daily lives have to abide by safety and security standards, from car airbags to greenhouse gas emissions to construction materials. But no such broad, internationally agreed-upon standards for artificial intelligence exist.
And yet, AI tools and technologies are steadily being integrated into all aspects of our lives. AI’s potential benefits to humanity, such as improving health-care delivery or tackling climate change, are immense. But harms caused by AI tools—from algorithmic bias to labour displacement to risks associated with autonomous vehicles and weapons—have led to a lack of trust in AI.
To tackle these problems, a new partnership between non-profit AI Global and the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto will create a globally recognized certification mark for the responsible and trusted use of AI systems.
In collaboration with the World Economic Forum’s Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning platform, the partnership will convene industry actors, policy-makers, civil society representatives, and academics to build a universally recognized framework that validates AI tools and technologies as responsible, trustworthy, ethical, and fair.
An urgent need to ensure AI’s actions align with what humans would want
“In addition to our fundamental multidisciplinary research, the Schwartz Reisman Institute also aims to craft practical, implementable, and globally appealing solutions to the challenge of building responsible and inclusive AI,” says Gillian K. Hadfield, director of the institute and a professor at U of T’s Faculty of Law and Rotman School of Management.
Hadfield’s current research focuses on innovative design for legal and regulatory systems for AI and other complex global technologies. She also works on “the alignment problem”: the challenge of ensuring that an AI system’s actions align with what humans would want.
“One of the reasons why we’re excited to partner with AI Global is that they’re focused on building tangible, usable tools to support the responsible development of AI,” says Hadfield. “And we firmly believe that’s what the world currently needs. The need for clear, objective regulations has never been more urgent.”
The Schwartz Reisman Institute and AI Global’s partnership will help build a much-needed global consensus
A wide variety of initiatives have already sought to steer AI development and deployment in the right directions: governments around the world have established advisory councils or created rules for specific AI tools in certain contexts, NGOs and think tanks have published sets of principles and best practices, and private companies like Google have released official statements pledging that their AI practices will be “responsible.”
But none of these initiatives amount to enforceable and measurable regulations. Furthermore, there isn’t always agreement between regions, sectors, and stakeholders about what exactly is “responsible” and why.
“We’ve heard a growing group of voices in recent years sharing insights on how AI systems should be built and managed,” says Ashley Casovan, executive director of AI Global. “But the kinds of high-level, non-binding principles we’ve seen proliferating are simply not enough given the scope, scale, and complexity of these tools. It’s imperative that we take the next step now, pulling these concepts out of theory and into action.”
An independent and authoritative certification program
A global certification mark like the one being built by SRI and AI Global is this next step.
“Recognizing the importance of an independent and authoritative certification program working across sectors and across regions, this initiative aims to be the first third-party accredited certification for AI systems,” says Hadfield.
So how will it work?
First, experts will examine the wealth of existing research and calls for global reform in order to define the key requirements for a global AI certification program. Next, they’ll design a framework to support the validation of the program by a respected accreditation body or bodies. They’ll also design a framework for independent auditors to assess AI systems against the requirements for global certification. Finally, the framework will be applied to various use cases across sectors and regions.
“AI should empower people and businesses, impacting customers and society fairly while allowing companies to engender trust and scale AI with confidence,” says Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum. “Industry actors that receive certification would be able to show that they have implemented credible, independently validated, and tested processes for the responsible use of AI systems.”
The project will unfold over a 12- to 18-month timeline, with two global workshops scheduled for May and November of 2021. For more information about a virtual kick-off event on December 9, 2020, email email@example.com.
The Schwartz Reisman Institute for Technology and Society at the University of Toronto was founded in 2019 thanks to a historic $100-million gift from Gerald Schwartz and Heather Reisman. The institute hit the ground running, quickly appointing Professor Gillian Hadfield as its inaugural director and launching research that draws on U of T’s signature strengths in the sciences, humanities, and social sciences to ensure that technologies and social structures work together to improve all aspects of life. The institute will eventually be housed in the iconic Schwartz Reisman Innovation Centre, currently under construction.