Frontier Model Forum
What the Frontier Model Forum will do
Governments and industry agree that, while advanced AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others. To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.
Core objectives of the Forum
As pioneers in the AI landscape, the members of the Frontier Model Forum are committed to turning vision into action. We recognize the importance of safe and responsible AI development, and we're here to make it happen.
Advancing AI safety research
- Research will help promote the responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
Identifying best practices
- Best practices for the responsible development and deployment of frontier models are essential, as is helping the public understand the nature, capabilities, limitations, and impact of the technology.
Collaborating across sectors
- Policymakers, academics, civil society, and companies must work together and share knowledge about trust and safety risks.
Helping AI meet society’s greatest challenges
- The Forum will support efforts to develop applications that address challenges such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.