MIT researchers have released a database of AI risks.

August 15, 2024
Brian

Which specific hazards should an individual, company, or government consider when implementing an AI system or developing regulations to control its use? It is not a simple question to answer. If an AI has control over essential infrastructure, there is a clear risk to human safety. But what about an AI that grades exams, sorts resumes, or validates travel documents at immigration? Each of those poses its own distinct risks, and they are no less serious.

Policymakers have failed to reach an agreement on which hazards should be addressed in laws regulating AI, such as the EU AI Act or California's SB 1047. MIT researchers have created an AI "risk repository," a database of AI dangers, to serve as a guidepost for them as well as for stakeholders in the AI industry and academia.

"This is an attempt to rigorously curate and analyze AI risks into a publicly accessible, comprehensive, extensible, and categorized risk database that anyone can copy and use, and that will be kept up to date over time," Peter Slattery, a researcher at MIT's FutureTech group and project lead on the AI risk repository, told TechCrunch. "We created it now because we needed it for our project, and had realized that many others needed it, too."

According to Slattery, the AI risk repository, which covers over 700 AI risks organized by causal factors (e.g., intentionality), domains (e.g., discrimination), and subdomains (e.g., disinformation and cyberattacks), grew out of a desire to better understand the overlaps and disconnects in AI safety research. Other risk frameworks exist, but Slattery says they cover only a fraction of the risks listed in the repository, and those omissions could have far-reaching consequences for AI development, use, and policy.
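Because the repository is published as a spreadsheet that anyone can copy, it is straightforward to slice programmatically by domain or subdomain. The snippet below is a minimal Python sketch of that kind of filtering, assuming a local CSV export; the file name and column labels ("Domain", "Subdomain", "Risk description") are hypothetical stand-ins and may not match the published schema exactly.

```python
# Minimal sketch: filter a CSV export of the AI risk repository with pandas.
# The file name and column labels are assumptions for illustration only;
# check the published spreadsheet for the actual schema.
import pandas as pd

# Load a local CSV export of the repository (hypothetical file name).
risks = pd.read_csv("ai_risk_repository.csv")

# Count risks per domain to see where the literature concentrates.
print(risks["Domain"].value_counts())

# Pull only the risks tagged under a single domain, e.g. discrimination.
discrimination = risks[risks["Domain"].str.contains("Discrimination", case=False, na=False)]
print(discrimination[["Subdomain", "Risk description"]].head())
```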

"People may assume there is a consensus on AI risks, but our findings suggest otherwise," she said. "We discovered that the average frameworks mentioned only 34% of the 23 risk subdomains we identified, with roughly one-quarter covering less than 20%. No publication or summary referenced all 23 risk subdomains, with the most complete covering only 70%. When the literature is so fragmented, we can't assume that everyone is on the same page concerning these hazards."

To create the repository, MIT researchers collaborated with partners from the University of Queensland, the nonprofit Future of Life Institute, KU Leuven, and AI company Harmony Intelligence to search academic databases and retrieve thousands of publications on AI risk evaluations.

The researchers discovered that certain dangers were referenced more frequently in the third-party frameworks they surveyed. For example, more than 70% of the frameworks addressed the privacy and security aspects of AI, whereas just 44% dealt with misinformation. While more than half addressed the types of prejudice and misrepresentation that AI could perpetuate, just 12% discussed "pollution of the information ecosystem" — i.e., the growing volume of AI-generated spam.

"A takeaway for researchers and policymakers, and anyone working with risks, is that this database could provide a foundation to build on when doing more specific work," Slattery told the audience. "Before, folks like us had two options. They may spend a large amount of time reviewing the dispersed literature to generate a thorough overview, or they could rely on a small number of current frameworks that may overlook pertinent hazards. Now that they have a more comprehensive database, our repository should save time and improve supervision."

But will anybody use it? The truth is that AI regulation around the world today is, at best, a patchwork of disparate approaches with disjointed goals. Would anything have changed if there had been an AI risk repository like MIT's? Could it have? That's difficult to say.

Another legitimate concern is whether simply agreeing on the hazards that AI poses is sufficient to inspire action toward adequately regulating it. Many AI system safety evaluations have serious shortcomings, and a risk database alone will not fix the problem.

The MIT researchers intend to try, however. Neil Thompson, head of the FutureTech lab, says the group plans to use the repository in its next phase of research to assess how successfully certain AI threats are being managed.

"Our repository will help us in the next step of our research, when we will be evaluating how well different risks are being addressed," according to Thompson. "We intend to use this to detect gaps in organizational responses. For example, if everyone concentrates on one form of risk while ignoring others of equal concern, we should take note and address it.