Frontier risk and preparedness

To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge.

As part of our mission of building safe AGI, we take seriously the full spectrum of safety risks related to AI, from the systems we have today to the furthest reaches of superintelligence. In July, we joined other leading AI labs in making a set of voluntary commitments to promote safety, security and trust in AI. These commitments encompassed a range of risk areas, centrally including the frontier risks that are the focus of the UK AI Safety Summit. As part of our contributions to the Summit, we have detailed our progress on frontier AI safety, including work within the scope of our voluntary commitments.

Our approach to preparedness

We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks. Managing the catastrophic risks from frontier AI will require answering questions like:

  • How dangerous are frontier AI systems when misused, both now and in the future?
  • How can we build a robust framework for monitoring, evaluation, prediction, and protection against the dangerous capabilities of frontier AI systems?
  • If our frontier AI model weights were stolen, how might malicious actors choose to leverage them?

We need to ensure we have the understanding and infrastructure needed for the safety of highly capable AI systems.

Our new Preparedness team

To minimize these risks as AI models continue to improve, we are building a new team called Preparedness. Led by Aleksander Madry, the Preparedness team will tightly connect capability assessment, evaluations, and internal red teaming for frontier models, from the models we develop in the near future to those with AGI-level capabilities. The team will help track, evaluate, forecast and protect against catastrophic risks spanning multiple categories including:

  • Individualized persuasion
  • Cybersecurity
  • Chemical, biological, radiological, and nuclear (CBRN) threats
  • Autonomous replication and adaptation (ARA)

The Preparedness team mission also includes developing and maintaining a Risk-Informed Development Policy (RDP). Our RDP will detail our approach to developing rigorous frontier model capability evaluations and monitoring, creating a spectrum of protective actions, and establishing a governance structure for accountability and oversight across that development process. The RDP is meant to complement and extend our existing risk mitigation work, which contributes to the safety and alignment of new, highly capable systems, both before and after deployment.

Join us

Interested in working on Preparedness? We are recruiting exceptional talent from diverse technical backgrounds to our Preparedness team to push the boundaries of our frontier AI models.

Preparedness challenge

To identify less obvious areas of concern (and build the team!), we are also launching our AI Preparedness Challenge for catastrophic misuse prevention. We will offer $25,000 in API credits to up to 10 top submissions, publish novel ideas and entries, and look for candidates for Preparedness from among the top contenders in this challenge.

Enter the Preparedness Challenge

Learn More:

OpenAI Launching Team Preparing For AI’s ‘Catastrophic Risks,’ Like Biological And Nuclear Threats – Forbes

TOPLINE: OpenAI said Thursday it’s building a team called Preparedness to oversee and evaluate the development of what it calls “frontier artificial intelligence models” (which it classifies as highly capable models with potentially dangerous abilities) and to watch out for “catastrophic risks” in categories such as cybersecurity and nuclear threats.

  • The team, which is part of OpenAI, will be responsible for monitoring the company’s AI models to keep them in line with safety guardrails the company says it is committed to.
  • Among the examples it lists in a blog post are the risk that AI models could persuade human users through language and the risk that they could carry out tasks autonomously.
  • The company also wants to look out for what some AI experts have called “extinction-level” threats, such as pandemics and nuclear war.
  • Preparedness will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, who is currently on leave from MIT to work at OpenAI.
  • Another team mission is creating and maintaining what it’s calling a “Risk-Informed Development Policy,” which will outline how the company should handle risks posed by AI models as they advance and approach “artificial general intelligence,” or roughly human-level capability.
  • OpenAI is also hosting a challenge for people outside the company to submit ideas on ways AI could be misused to cause real-world harm.
