FMF Announces First-Of-Its-Kind Information-Sharing Agreement
Frontier Model Forum
28th March 2025
The Frontier Model Forum (FMF) is proud to announce that all of its member firms have signed a first-of-its-kind agreement designed to facilitate information-sharing about threats, vulnerabilities, and capability advances unique to frontier AI.
Information-sharing has always been central to the FMF’s mission and purpose. At its launch in July 2023, the Forum was given a mandate to establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks.
Over the past year, the FMF has worked with its member firms’ experts and counsel to define the scope of voluntary inter-firm information-sharing related to frontier AI safety and security, and to establish the necessary legal protections. Critically, this information-sharing is narrowly scoped and intended to manage key risks to national security and public safety, including chemical, biological, radiological, and nuclear (CBRN) threats and advanced cyber threats.
We have also explored preliminary briefings on key lessons from member firm publications, including Anthropic’s disclosure of several jailbreaking vulnerabilities and OpenAI’s threat intelligence reports, for which we facilitated responsible notification of researcher counterparts at other firms.
At this stage, our information-sharing covers the following three key categories and is restricted to FMF member firms:
- Vulnerabilities, weaknesses, and exploitable flaws. Flaws that may compromise the safety, security, or intended functionality of frontier AI models. Examples may include jailbreaks, adversarial inputs, data poisoning, or other attempts to bypass model safeguards.
- Threats. Attempts to gain unauthorized access to, or to manipulate, frontier AI models. Examples may include potential threat actors, attack vectors, or cyber-threat indicators.
- Capabilities of concern. Frontier AI capabilities with the potential to cause large-scale harm to society. Examples may include capabilities related to the development of CBRN threats, offensive cyber attacks, and model autonomy.
The FMF aims to refine its information-sharing function over time. “Responsible disclosure is core to our mission of advancing frontier AI safety and security,” said Chris Meserole, Executive Director of the FMF. “We’re excited to establish mechanisms for sharing information about the unique vulnerabilities, threats, and capabilities of frontier AI, and we look forward to building on them in a way that supports collaboration with the broader frontier AI ecosystem.”