The EU AI Act (“Act”) regulates AI systems within the EU, classifying them by risk level and imposing requirements commensurate with that risk. Chapter V of the Act imposes significant obligations on providers of general-purpose AI (GPAI) models (albeit subject to the principle of proportionality, taking into account the size and resources of individual providers). GPAI models (such as GPT-4) are highly versatile AI models that have been trained on vast quantities of data and are capable of performing a wide range of distinct tasks. Recognising both the potential benefits and risks associated with these models (which can be used in many downstream AI systems across different sectors), the Act targets the providers of those models (in broad terms, those who develop them, or commission their development, and place them on the EU market), requiring them to understand their models' capabilities and impact across the AI value chain.
The provisions of the Act are complex and, to assist providers of GPAI models to comply with their obligations under it, the European Commission’s AI Office, in consultation with over 1,400 industry stakeholders, published the General-Purpose AI Code of Practice (“Code”) on 10 July 2025 (albeit two months later than planned, owing to substantive disagreements between stakeholders), in accordance with Article 56 (Codes of Practice) of the Act. Member States and the Commission are expected to formally endorse the Code by the end of this year. However, as the provisions of the Act that apply to GPAI models came into force on 2 August 2025, the AI Office is actively encouraging providers to sign up to the Code now, and on 1 August 2025 it published a list of all current signatories.
On 18 July 2025, the Commission also published additional Guidelines to support the Code’s implementation. These explain how the Commission interprets key terms of the Act, with the aim of providing greater clarity on which organisations fall within its scope as providers of GPAI models (for example, by clarifying key concepts such as ‘general-purpose AI model’, ‘provider of a general-purpose AI model’, and the ‘placing on the market’ of a GPAI model). Organisations that have yet to determine whether they qualify as providers of a GPAI model should consult these Guidelines.
What are the benefits of complying with the Code?
Although the Code is guidance and is not legally binding, compliance is intended to offer signatories a number of benefits, including:
- Clarity, predictability and greater legal certainty: the Code aims to provide clear guidelines and standards for developing and deploying GPAI models, reducing both legal uncertainty and the administrative burden of compliance, whilst enabling providers to demonstrate a positive commitment to responsible AI.
- Risk management: the Code's focus on systemic risk assessment and mitigation aims to help organisations proactively identify and address potential harms associated with their GPAI models, and to minimise any resulting reputational damage, financial losses and legal liabilities.
- A way to demonstrate compliance: while adherence to the Code will not, in itself, constitute ‘conclusive evidence’ of compliance with the Act, it has been designed to offer a ‘simple and transparent’ way of helping to demonstrate compliance, and to enable the AI Office to more easily assess a provider’s compliance (for more on this, see the Commission's FAQ on the Code).
Structure of the Code
The Code is divided into three chapters covering transparency, copyright, and safety and security, each containing a number of different measures for adoption, as described in more detail below.
The Transparency Chapter
The Act requires providers to have, maintain and make available to relevant stakeholders (the AI Office, national competent authorities and downstream providers) technical documentation and information about their GPAI models. More specifically, the Code contains three key measures for providers to adopt when placing a GPAI model on the market:
- Document all information referred to in the “Model Documentation Form”: this form (set out in the Transparency Chapter) provides a structured format for the essential information that must be provided to each category of stakeholder on request. It must be kept up to date and retained for at least 10 years from when the model is first placed on the market (a simple, hypothetical illustration follows this list).
- Provide relevant information: signatories’ contact information must be made publicly available so that relevant stakeholders can easily access the information contained in the Model Documentation (or other relevant information). Signatories must supply downstream providers with the information contained in the most up-to-date Model Documentation (subject to appropriate confidentiality/IPR safeguards), and with any other information reasonably required to enable downstream providers to understand both the capabilities and limitations of the GPAI model.
- Ensure quality, integrity, and security of information: signatories must implement quality and integrity control, ensure documentation is retained as evidence of compliance with the Act, and protect the information from unintended alterations, in line with established protocols and technical standards.
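By way of illustration only, a provider might hold its Model Documentation as a structured, version-controlled record so that it can be kept up to date, supplied to stakeholders on request, and retained for the full retention period. The sketch below is a minimal, hypothetical Python example; the Code's Model Documentation Form prescribes the actual required contents, and all field names here are placeholders.

```python
# Illustrative sketch only: the Code's Model Documentation Form defines
# the actual required contents; all field names below are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelDocumentation:
    model_name: str
    provider_contact: str                    # publicly available contact point
    placed_on_market: date                   # starts the 10-year retention period
    capabilities: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    last_updated: date = field(default_factory=date.today)

    def retention_deadline(self) -> date:
        # Documentation must be retained for at least 10 years from when
        # the model is first placed on the market.
        try:
            return self.placed_on_market.replace(year=self.placed_on_market.year + 10)
        except ValueError:  # 29 February in a non-leap target year
            return self.placed_on_market.replace(year=self.placed_on_market.year + 10, day=28)

doc = ModelDocumentation(
    model_name="example-gpai-model",         # hypothetical model
    provider_contact="contact@provider.example",
    placed_on_market=date(2025, 9, 1),
)
print(doc.retention_deadline())              # 2035-09-01
```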
The Copyright Chapter
This Chapter seeks to address concerns around the use of copyright-protected material to train GPAI models, and the creation of infringing outputs, by setting out a range of measures. Signatories are expected to:
- Establish a copyright policy: create, maintain and implement a policy (reflecting the measures set out in the Chapter) to ensure compliance with EU copyright law for all GPAI models they release within the EU. Signatories are also encouraged to make a summary of this policy publicly available.
- Lawful data collection: reproduce and extract only lawfully accessible copyright content for training their models. This means not circumventing any technological protection measures in place to prevent or restrict unauthorised acts, and excluding websites that persistently and repeatedly infringe copyright and related rights on a commercial scale.
- Rights reservations: when using web crawlers, ensure that they adhere to the ‘Robots Exclusion Protocol’ (RFC 9309 and subsequent IETF-approved versions) and follow instructions as to what they can and cannot access; identify and comply with other recognised machine-readable protocols for expressing rights reservations; allow rightsholders to access publicly available information about their web crawlers, robots.txt features and other compliance measures; ensure automatic updates are provided to affected rightsholders (e.g. via a web feed); and, if operating or controlling online search engines, ensure that compliance with rights reservations does not negatively impact the indexing of reserved content in their search engine results (an illustrative robots.txt check follows this list).
- Enable complaints: designate a point of contact for copyright-related issues and provide a mechanism for receiving and addressing complaints (signatories will not, however, be required to act on manifestly unfounded complaints, or on duplicative complaints from the same rightsholder).
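For illustration, a signatory operating a web crawler might check a site's robots.txt file before fetching any page. The sketch below uses Python's standard-library urllib.robotparser, which implements the Robots Exclusion Protocol; the user-agent string and URLs are hypothetical placeholders, and a production crawler would also need to identify and honour the other machine-readable rights-reservation protocols referred to in the Code.

```python
# Illustrative sketch only: checks whether a crawler may fetch a URL
# under the Robots Exclusion Protocol (RFC 9309). The user agent and
# URLs are hypothetical placeholders.
from urllib import robotparser

USER_AGENT = "ExampleGPAIBot"  # hypothetical crawler name

def may_fetch(url: str, robots_url: str) -> bool:
    """Return True only if robots.txt permits USER_AGENT to fetch url."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # downloads and parses the site's robots.txt
    return parser.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    page = "https://example.com/articles/some-page"
    if may_fetch(page, "https://example.com/robots.txt"):
        print("robots.txt permits crawling:", page)
    else:
        print("rights reservation in place, skipping:", page)
```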
The measures outlined in this Chapter are not intended to change existing copyright laws, and it is ultimately up to signatories to verify that each measure complies with national copyright and related rights laws when implementing it in the relevant jurisdiction.
The Safety and Security Chapter
This chapter (the most comprehensive in terms of the range of measures advocated) applies only to the small number of providers likely to qualify as providers of high-impact GPAI models with “systemic risk”. These are GPAI models that, in simple terms, due to their high computing power and broad reach, have the potential to negatively affect public health, safety, public security, fundamental rights, or society as a whole. Providers are required to notify the Commission of these models within the timeframes specified in the Act. Unless they can demonstrate that the model’s particular characteristics nevertheless preclude systemic risk, the Act’s provisions on GPAI models with systemic risk will apply. To the extent they do apply, this Chapter contains 10 measures (albeit high-level principles as opposed to detailed technical standards) to enable compliance, with a focus on continuous risk assessment and mitigation throughout the model's lifecycle. Signatories are expected to:
- Safety and Security Framework: create, implement and regularly update a high-level “Safety and Security Framework” containing systemic risk management processes for submission to the AI Office.
- Systemic Risk Identification: proactively identify systemic risks stemming from their models, using various data sources, including model-independent data and information gathered on similar models.
- Systemic Risk Analysis: assess identified risks using a range of measures (including systemic risk modelling), estimating harm probability and severity, and conducting post-market monitoring.
- Systemic Risk Acceptance Determination: define systemic risk acceptance criteria and use them to determine whether identified risks are acceptable; if not, take appropriate measures (e.g. implementing further mitigations or withdrawing the model from the market). A simplified illustration of acceptance criteria follows this list.
- Safety Mitigations: implement appropriate safety mitigations, considering the model's release and distribution strategy (e.g. data filtering, input/output monitoring, and behavioural modifications).
- Security Mitigations: implement an adequate level of cybersecurity protection for their models and infrastructure, to mitigate risks such as the theft of unreleased model weights.
- Safety and Security Model Reports: create and update “Safety and Security Model Reports” (including model details, risk analysis and mitigation, external reports, and information on material changes to the risks) and submit these to the AI Office.
- Systemic Risk Responsibility Allocation: define clear responsibilities across all organisational levels, allocate appropriate resources and promote a healthy risk culture.
- Serious Incident Reporting: implement processes for tracking, documenting, and reporting relevant information to the AI Office and national competent authorities within specified timelines (and documentation must be retained for a prescribed period).
- Documentation and Transparency: maintain detailed documentation and publish summarised versions of their Framework and Model Reports as necessary.
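To make the risk acceptance step concrete: the Code does not prescribe numeric risk scoring, but a provider's internal criteria might, for example, combine estimated harm probability and severity and compare the result against a pre-defined threshold. The sketch below is a deliberately simplified, hypothetical illustration of that idea; the scales and threshold are invented for the example.

```python
# Toy illustration only: the Code does not prescribe numeric risk
# scoring; the scales and threshold below are hypothetical.
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

ACCEPTANCE_THRESHOLD = 4  # hypothetical pre-defined acceptance criterion

def risk_score(probability: Level, severity: Level) -> int:
    # A simple probability x severity product, one of many possible scorings.
    return int(probability) * int(severity)

def is_acceptable(probability: Level, severity: Level) -> bool:
    """Apply the pre-defined acceptance criterion to an identified risk."""
    return risk_score(probability, severity) < ACCEPTANCE_THRESHOLD

# A medium-probability, high-severity systemic risk scores 6 here and would
# be unacceptable, triggering further mitigation or market withdrawal.
print(is_acceptable(Level.MEDIUM, Level.HIGH))  # False
```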
Next steps for providers
Although Member States and the Commission still have to assess the adequacy of the Code, the Commission has stated that, in the 12 months following 2 August 2025, it will work closely with signatories. If signatories do not fully implement all of the Code’s commitments immediately after signing, this will not, in itself, be deemed a violation of the Act; instead, the AI Office will work with them to help achieve compliance. From 2 August 2026, the Commission will fully enforce all obligations placed upon providers of GPAI models under the Act, including by imposing fines (save in relation to models placed on the market before 2 August 2025, which have until 2 August 2027 to comply). With this in mind, several key steps can be taken by in-scope (or potentially in-scope) organisations now:
- Review the supporting Guidelines: consult the Guidelines for further information on the scope of the Act and to whom it applies.
- Allocate Preliminary Resources: allocate preliminary resources (personnel, budget and necessary external expertise) for ensuring compliance with the Code.
- Catalogue and Classify: catalogue and classify all GPAI models currently used or under development within their organisation, identifying those that may be models with ‘systemic risk’, to provide a clear picture of the Code's applicability and to help prioritise subsequent actions.
- Prepare for Transparency: begin compiling the information required for the Model Documentation Form, to ensure their organisation can fulfil the transparency obligations when placing GPAI models on the market.
- Review Copyright Practices: review current data collection and model training practices, focusing on copyright compliance and respect for rights reservations, and identify any immediate areas for improvement.
- Safety and Security Framework: identify GPAI models with ‘systemic risk’ that will trigger obligations under the Code's Safety and Security chapter, and commence development of a Safety and Security Framework, ensuring compliance with all relevant measures.
- Keep an eye on updates: due to the pace of AI technological change, the Code will require periodic updates. According to the Commission's FAQ on the Code, the AI Office will review the Code at least every two years and update its provisions as required.
If you would like more information about any of the matters raised in this article, please contact Paul O’Hare, Elizabeth Lumb or Matt Clark.