As the freight train of generative AI speeds along, it’s important to analyze and prepare for any risks along the way. I’m grateful that organizations like MIT are making a concerted effort to research the topic.
MIT researchers have taken a significant step towards understanding and managing the risks associated with AI by releasing an AI “risk repository.” This database, developed by MIT’s FutureTech group and led by Peter Slattery, aims to provide a comprehensive, categorized, and publicly accessible collection of over 700 AI risks. These risks are grouped by causal factors, domains, and subdomains, making it easier for stakeholders to navigate and utilize the information.
According to Slattery, the repository was created to address the fragmented nature of existing AI risk frameworks. The researchers found that most frameworks only covered a fraction of the identified risks, with the most comprehensive ones addressing just 70% of the 23 risk subdomains. This fragmentation can lead to significant gaps in AI safety research and policymaking.
To build the repository, the MIT team collaborated with colleagues from the University of Queensland, the Future of Life Institute, KU Leuven, and AI startup Harmony Intelligence. They scoured academic databases to retrieve thousands of documents related to AI risk evaluations. Their findings revealed that while privacy and security implications of AI were mentioned in over 70% of the frameworks, only 44% covered misinformation, and a mere 12% addressed the “pollution of the information ecosystem” caused by AI-generated spam.
How It Works
The AI risk repository functions as a database where users can search for specific risks associated with AI systems. The risks are categorized by causal factors, such as intentionality, and domains like discrimination. This categorization helps users quickly find relevant information and understand the broader context of each risk.
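To make that categorization concrete, here is a minimal sketch of how a stakeholder might filter such a repository once it has been exported to a CSV file. The file name and column labels ("Domain", "Intentionality", "Subdomain", "Risk") are hypothetical stand-ins; the repository's actual field names may differ.

```python
import csv

def find_risks(path, domain=None, intentionality=None):
    """Return repository rows matching the given domain and causal filters.

    Assumes a CSV export with hypothetical columns "Domain" and
    "Intentionality"; adjust the keys to match the real taxonomy.
    """
    matches = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if domain and row.get("Domain") != domain:
                continue
            if intentionality and row.get("Intentionality") != intentionality:
                continue
            matches.append(row)
    return matches

# Example: list all intentional risks in the discrimination domain.
for risk in find_risks("ai_risk_repository.csv",
                       domain="Discrimination",
                       intentionality="Intentional"):
    print(risk.get("Subdomain"), "-", risk.get("Risk"))
```

In practice a spreadsheet filter accomplishes the same thing; the point of the sketch is simply that a shared, machine-readable taxonomy lets anyone slice the 700+ risks by the dimensions that matter to them.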
Benefits
- Provides a comprehensive overview of AI risks, saving time for researchers and policymakers.
- Helps identify gaps in current AI safety research and frameworks.
- Facilitates more informed decision-making in AI development and regulation.
Concerns
- The repository’s effectiveness depends on its adoption by stakeholders in the AI industry and academia.
- Simply identifying risks may not be enough to spur competent regulation and safety evaluations.
- There is a risk that some important AI risks might still be overlooked.
Possible Business Use Cases
- AI Risk Assessment Consultancy: Offer services to companies to evaluate their AI systems using the repository to identify and mitigate potential risks.
- AI Compliance Software: Develop software that helps organizations ensure their AI systems comply with regulatory standards by referencing the repository.
- AI Risk Education Platform: Create an online platform that educates stakeholders about AI risks and how to manage them, using the repository as a primary resource.
As we continue to integrate AI into various aspects of our lives, how can we ensure that we are not only aware of the risks but also effectively addressing them to create a safer and more equitable future?
Image Credit: DALL-E