EU AI ACT: GENERAL PURPOSE AI
Code of Practice
FINAL VERSION · 10/07/2025
DISCLAIMER & CONTRIBUTIONS
This website is an unofficial, best-effort service to make the Code of Practice more accessible to all observers and working group participants. It contains the full text of the final version as well as two FAQs and an Explainer on parts of the Code. The respective Chairs and Vice Chairs have written these to address questions posed by many stakeholders.
While I strive for accuracy, this is not an official legal document. Always refer to the official PDF documents provided by the AI Office for the authoritative version (and read their GPAI Q&A and CoP Q&A). Any discrepancies between this site and the official documents should be considered errors on my part.
To help me improve this site, please report issues or submit pull requests on GitHub, and feel free to reach out about anything else via email.
Thanks for your support.
Alexander Zacherl
Objectives
The overarching objective of this Code of Practice (“Code”) is to improve the functioning of the internal market, to promote the uptake of human-centric and trustworthy artificial intelligence (“AI”), while ensuring a high level of protection of health, safety, and fundamental rights enshrined in the Charter, including democracy, the rule of law, and environmental protection, against harmful effects of AI in the Union, and to support innovation, pursuant to Article 1(1) AI Act.
To achieve this overarching objective, the specific objectives of this Code are:
- To serve as a guiding document for demonstrating compliance with the obligations provided for in Articles 53 and 55 of the AI Act, while recognising that adherence to the Code does not constitute conclusive evidence of compliance with the obligations under the AI Act.
- To ensure providers of general-purpose AI models comply with their obligations under the AI Act and to enable the AI Office to assess compliance of providers of general-purpose AI models who choose to rely on the Code to demonstrate compliance with their obligations under the AI Act.
Commitments by Providers of General-Purpose AI Models
Transparency Chapter
Commitment 1 Documentation
In order to fulfil the obligations in Article 53(1), points (a) and (b), AI Act, Signatories commit to drawing up and keeping up-to-date model documentation in accordance with Measure 1.1, providing relevant information to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems (‘downstream providers’ hereafter), and to the AI Office upon request (possibly on behalf of national competent authorities upon request to the AI Office when this is strictly necessary for the exercise of their supervisory tasks under the AI Act, in particular to assess the compliance of a high-risk AI system built on a general-purpose AI model where the provider of the system is different from the provider of the model), in accordance with Measure 1.2, and ensuring quality, security, and integrity of the documented information in accordance with Measure 1.3. In accordance with Article 53(2) AI Act, these Measures do not apply to providers of general-purpose AI models released under a free and open-source license that satisfy the conditions specified in that provision, unless the model is a general-purpose AI model with systemic risk.
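Measure 1.1 refers to model documentation whose contents track the information items listed in Annexes XI and XII AI Act. Purely as an illustration of the kind of record a provider might keep up to date, here is a minimal sketch in Python; the field names are hypothetical and do not reproduce the official Model Documentation Form.

```python
# Illustrative sketch only: the field names below are hypothetical and do
# not reproduce the official Model Documentation Form. They gesture at the
# kinds of items Annexes XI and XII AI Act cover (architecture, modalities,
# licence, training data summary, etc.).
from datetime import date

model_documentation = {
    "model_name": "example-gpai-model",           # hypothetical name
    "version": "1.0",
    "release_date": date(2025, 8, 1).isoformat(),
    "architecture": "decoder-only transformer",
    "parameter_count": 7_000_000_000,
    "input_modalities": ["text"],
    "output_modalities": ["text"],
    "licence": "proprietary",
    "acceptable_use_policy_url": "https://example.com/aup",  # placeholder
    "training_data_summary": "high-level summary of data sources",
    "last_updated": date.today().isoformat(),     # keep documentation current
}
```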
Copyright Chapter
Commitment 1 Copyright policy
In order to demonstrate compliance with their obligation pursuant to Article 53(1), point (c) of the AI Act to put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790, Signatories commit to drawing up, keeping up-to-date, and implementing such a copyright policy. The Measures below do not affect compliance with Union law on copyright and related rights. They set out commitments by which the Signatories can demonstrate compliance with the obligation to put in place a copyright policy for the general-purpose AI models they place on the Union market.
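One widely used machine-readable signal for a rights reservation on the web is the Robots Exclusion Protocol (robots.txt). The following is a minimal sketch, not the Code's prescribed method, of a crawler checking robots.txt before fetching a page for text-and-data-mining purposes; the user-agent name is hypothetical, and a real copyright policy may need to honour additional reservation mechanisms as well.

```python
# Minimal sketch: honouring a robots.txt-based rights reservation before
# crawling a URL. Assumes robots.txt is the machine-readable opt-out signal;
# real policies may need to check other mechanisms too.
from urllib import robotparser
from urllib.parse import urlparse

CRAWLER_USER_AGENT = "ExampleTDMBot"  # hypothetical crawler name

def may_crawl(url: str) -> bool:
    """Return True only if the site's robots.txt permits our user agent."""
    root = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{root.scheme}://{root.netloc}/robots.txt")
    try:
        rp.read()  # fetch and parse robots.txt
    except OSError:
        return False  # be conservative if the file cannot be fetched
    return rp.can_fetch(CRAWLER_USER_AGENT, url)

if __name__ == "__main__":
    print(may_crawl("https://example.com/articles/some-page"))
```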
Commitments by Providers of General-Purpose AI Models with Systemic Risk
Safety and Security Chapter
Commitment 1 Safety and Security Framework
Signatories commit to adopting a state-of-the-art Safety and Security Framework (“Framework”). The purpose of the Framework is to outline the systemic risk management processes and measures that Signatories implement to ensure the systemic risks stemming from their models are acceptable.
Signatories commit to a Framework adoption process that involves three steps:
- creating the Framework (as specified in Measure 1.1);
- implementing the Framework (as specified in Measure 1.2); and
- updating the Framework (as specified in Measure 1.3).
Further, Signatories commit to notifying the AI Office of their Framework (as specified in Measure 1.4).
Commitment 2 Systemic risk identification
Signatories commit to identifying the systemic risks stemming from the model. The purpose of systemic risk identification includes facilitating systemic risk analysis (pursuant to Commitment 3) and systemic risk acceptance determination (pursuant to Commitment 4).
Systemic risk identification involves two elements:
- following a structured process to identify the systemic risks stemming from the model (as specified in Measure 2.1); and
- developing systemic risk scenarios for each identified systemic risk (as specified in Measure 2.2).
Commitment 3 Systemic risk analysis
Signatories commit to analysing each identified systemic risk (pursuant to Commitment 2). The purpose of systemic risk analysis includes facilitating systemic risk acceptance determination (pursuant to Commitment 4).
Systemic risk analysis involves five elements for each identified systemic risk, which may overlap and may need to be implemented recursively:
- gathering model-independent information (as specified in Measure 3.1);
- conducting model evaluations (as specified in Measure 3.2);
- modelling the systemic risk (as specified in Measure 3.3);
- estimating the systemic risk (as specified in Measure 3.4); while
- conducting post-market monitoring (as specified in Measure 3.5).
Commitment 4 Systemic risk acceptance determination
Signatories commit to specifying systemic risk acceptance criteria and determining whether the systemic risks stemming from the model are acceptable (as specified in Measure 4.1). Signatories commit to deciding whether or not to proceed with the development, the making available on the market, and/or the use of the model based on the systemic risk acceptance determination (as specified in Measure 4.2).
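The Code does not prescribe how acceptance criteria are encoded. Purely as an illustration of the decision logic this Commitment describes (estimate each identified risk, compare it against a pre-defined criterion, and gate the go/no-go decision on the result), here is a minimal sketch; the risk identifiers, ordinal scale, and thresholds are all hypothetical.

```python
# Illustrative sketch of the decision logic only; the Code prescribes no
# data structures or scales. All names and values below are hypothetical.
from dataclasses import dataclass

@dataclass
class SystemicRisk:
    risk_id: str               # e.g. "example-risk-a" (hypothetical)
    estimated_level: int       # ordinal estimate from systemic risk analysis
    acceptance_threshold: int  # pre-defined acceptance criterion

def risks_are_acceptable(risks: list[SystemicRisk]) -> bool:
    """Acceptance determination: every risk must meet its criterion."""
    return all(r.estimated_level <= r.acceptance_threshold for r in risks)

# Go/no-go decision on development, making available on the market, and/or use:
risks = [
    SystemicRisk("example-risk-a", estimated_level=2, acceptance_threshold=3),
    SystemicRisk("example-risk-b", estimated_level=1, acceptance_threshold=2),
]
proceed = risks_are_acceptable(risks)
```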
Commitment 5 Safety mitigations
Signatories commit to implementing appropriate safety mitigations along the entire model lifecycle, as specified in the Measure for this Commitment, to ensure the systemic risks stemming from the model are acceptable (pursuant to Commitment 4).
Commitment 6 Security mitigations
Signatories commit to implementing an adequate level of cybersecurity protection for their models and their physical infrastructure along the entire model lifecycle, as specified in the Measures for this Commitment, to ensure the systemic risks stemming from their models that could arise from unauthorised releases, unauthorised access, and/or model theft are acceptable (pursuant to Commitment 4).
A model is exempt from this Commitment if the model’s capabilities are inferior to the capabilities of at least one model for which the parameters are publicly available for download.
Signatories will implement these security mitigations for a model until its parameters are made publicly available for download or securely deleted.
Commitment 7 Safety and Security Model Reports
Signatories commit to reporting to the AI Office information about their model and their systemic risk assessment and mitigation processes and measures by creating a Safety and Security Model Report (“Model Report”) before placing a model on the market (as specified in Measures 7.1 to 7.5). Further, Signatories commit to keeping the Model Report up-to-date (as specified in Measure 7.6) and notifying the AI Office of their Model Report (as specified in Measure 7.7).
If Signatories have already provided relevant information to the AI Office in other reports and/or notifications, they may reference those reports and/or notifications in their Model Report. Signatories may create a single Model Report for several models if the systemic risk assessment and mitigation processes and measures for one model cannot be understood without reference to the other model(s).
Signatories that are SMEs or SMCs may reduce the level of detail in their Model Report to the extent necessary to reflect size and capacity constraints.
Commitment 8 Systemic risk responsibility allocation
Signatories commit to: (1) defining clear responsibilities for managing the systemic risks stemming from their models across all levels of the organisation (as specified in Measure 8.1); (2) allocating appropriate resources to actors who have been assigned responsibilities for managing systemic risk (as specified in Measure 8.2); and (3) promoting a healthy risk culture (as specified in Measure 8.3).
Commitment 9 Serious incident reporting
Signatories commit to implementing appropriate processes and measures for keeping track of, documenting, and reporting to the AI Office and, as applicable, to national competent authorities, without undue delay relevant information about serious incidents along the entire model lifecycle and possible corrective measures to address them, as specified in the Measures of this Commitment. Further, Signatories commit to providing resourcing of such processes and measures appropriate for the severity of the serious incident and the degree of involvement of their model.
Commitment 10 Additional documentation
Signatories commit to documenting the implementation of this Chapter (as specified in Measure 10.1) and to publishing summarised versions of their Framework and Model Reports as necessary (as specified in Measure 10.2).
The foregoing Commitments are supplemented by Measures found in the relevant Transparency, Copyright, or Safety and Security Chapters in the separate accompanying documents. [Note: On this site shown as separate sub-pages.]
I’ve been thrilled to see the support for the Safety & Security Chapter of the Code of Practice. Most frontier AI companies have now signed on to it: @AnthropicAI, @Google, @MistralAI, @OpenAI, @xAI
Why this is important: 🧵 (1/6)
More information about the Safety & Security Chapter, as well as the other Chapters of the Code of Practice, and a list of confirmed signatories: https://t.co/ALyT1HPeEj (6/6)
— Yoshua Bengio (@Yoshua_Bengio), August 1, 2025