On 10 July 2025, the European Commission published a voluntary Code of Practice for General-Purpose AI (GPAI) to help the AI industry prepare for the obligations that the EU AI Act places on providers of GPAI models; the Act will be fully in force in 2026. You can read more about the EU AI Act here.
While this code is voluntary, it provides a clear route for demonstrating compliance with the EU AI Act’s requirements on safety, transparency and copyright, reflecting the EU’s aim to work with the industry to create a safe, accountable and innovation-friendly AI ecosystem.
What is GPAI?
Under the EU AI Act:
‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market
About the code of practice
The code was developed through a multi-stakeholder process with experts, civil society, and major AI providers, including OpenAI, Microsoft, and Google.
It serves as a non-binding mechanism to help GPAI providers meet their legal obligations under the EU AI Act, particularly under:
• Article 53 – for all GPAI providers
• Article 55 – for providers of GPAI models with systemic risk, meaning those with exceptionally high capabilities or wide-reaching impact
What do Articles 53 and 55 of the EU AI Act require?
Under the EU AI Act, Articles 53 and 55 set out specific obligations for providers of GPAI models:
Article 53
This article applies to every provider of a GPAI model, regardless of size or risk level. It requires providers to:
- draw up and keep up-to-date detailed documentation on how their models were built, trained and tested, including their capabilities, limitations and intended use
- provide a sufficiently detailed, publicly available summary of the content used to train the models, according to a template provided by the AI Office
- put in place a policy to comply with EU copyright law, including rights reservations and ensuring lawful use of training data
- share relevant information with downstream developers and regulators on request, while protecting confidential information
Article 55
This article applies only to the most advanced GPAI models whose scale or capabilities could create systemic risk. Providers of these models must go further by:
- assessing and mitigating safety and security risks
- ensuring an adequate level of cybersecurity protection throughout the model lifecycle
- documenting and reporting serious incidents promptly to regulators and taking corrective measures to address them
The new code maps directly onto these articles, giving providers a framework to follow.
Who can sign the code?
Following the European Commission’s endorsement of the code, GPAI providers that voluntarily sign it can demonstrate compliance with the above obligations.
Any GPAI provider (or potential future provider) can sign the code, including those based outside the EU, as long as their models are made available within the EU. The list of confirmed signatories so far includes OpenAI, Google, Microsoft, Amazon, IBM and Meta.
Signing the code can provide more legal certainty and may reduce future compliance burdens, compared with proving compliance through other, more complex, means.
Structure of the code
The code is organised into 3 chapters, each addressing one key area of AI model governance:
1. TRANSPARENCY (Article 53)
This chapter ensures that GPAI providers clearly explain how their systems work, helping to meet EU AI Act transparency rules and giving regulators and downstream users the information needed to assess and use models responsibly.
The chapter asks providers to do three main things. They must:
- Document each model, detailing its development, training, capabilities, and limitations. They may do so using the provided Model Documentation Form, and they must keep the documentation up to date for at least 10 years.
- Share this documentation with system builders and EU regulators (AI Office) upon request, while protecting sensitive data such as trade secrets.
- Ensure accuracy and security: maintain the integrity of the information by following recognised protocols and technical standards.
Providers are also asked to consider whether any parts of the documentation could be safely made public (via their website or by other appropriate means), to help promote openness in how AI is developed and used.
2. COPYRIGHT (Article 53(1)(c))
This chapter provides a framework for complying with EU copyright law.
Providers must:
- Maintain an up-to-date copyright policy, assigning responsibility within the organisation for implementing and overseeing the policy.
- Ensure training data is lawfully sourced: no bypassing paywalls or using sites known for infringement.
- Identify and honour rights reservations using state-of-the-art, machine-readable detection methods. This includes employing web crawlers that read and follow instructions in accordance with the Robots Exclusion Protocol (robots.txt) and recognising other accepted machine-readable protocols or metadata for rights reservations, such as asset-based or location-based tags adopted as standards or widely used in relevant sectors.
- Share relevant technical information (e.g. crawler rules) with rights-holders to support oversight.
- Prevent copyright-infringing outputs by taking reasonable technical measures and reflecting these safeguards in terms of use.
- Provide a contact point for copyright queries and a clear, fair complaints process.
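To illustrate the rights-reservation commitment above: Python’s standard-library `urllib.robotparser` implements the Robots Exclusion Protocol that the code expects compliant crawlers to follow. This is only a minimal sketch; the bot name `ExampleAIBot` and the URLs are hypothetical placeholders, not names taken from the code of practice.

```python
# Minimal sketch of a crawler respecting robots.txt rights reservations,
# using Python's standard-library robots.txt parser.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content a site might publish to reserve rights
# against a specific AI crawler while allowing other agents.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /private/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler consults the parsed rules before fetching each URL.
print(parser.can_fetch("ExampleAIBot", "https://example.com/private/data"))  # False
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles"))      # True
```

In practice a crawler would fetch each site’s live robots.txt (e.g. via `RobotFileParser.set_url()` and `read()`) and would also check any sector-specific machine-readable reservation metadata the code refers to.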
3. SAFETY AND SECURITY (Article 55 – systemic models only)
This chapter applies only to providers of GPAI models with systemic risk due to scale, capability, or reach. Code signatories commit to creating, implementing and regularly updating a state-of-the-art Safety and Security Framework to prevent, detect and mitigate risks.
The framework should include:
- Risk assessment: providers should conduct ongoing evaluations, including adversarial and real-world testing, to identify vulnerabilities and harmful behaviours before deployment.
- Incident reporting: providers should notify the AI Office or relevant authorities promptly when serious safety or security issues arise, enabling timely oversight and corrective action.
- Security by design: providers should integrate robust cybersecurity measures across the model lifecycle to guard against threats such as data poisoning, model theft, and adversarial attacks.
- Misuse detection and mitigation: providers should monitor for signs of malicious use and take proportionate action to limit harm.
- Clear accountability: providers should assign named individuals or teams responsible for safety and security governance, ensuring rapid, coordinated responses to emerging threats.
By following these measures, providers can demonstrate proactive management of systemic risks and align with the highest standards set out in the EU AI Act.
Why it matters for UK businesses
Although the UK is not bound by the EU AI Act, the Act’s reach extends far beyond the EU’s borders. Many UK companies already develop, integrate, or deploy GPAI models in products and services used within the EU, or partner with organisations that do. In practice, this means UK providers, distributors, and integrators will increasingly encounter models built to meet the code’s transparency, copyright, and safety requirements.
With global momentum behind AI regulation, adopting compatible policies and due diligence now can provide a competitive edge, ensure smoother cross-border operations, and position UK businesses ahead of the curve should the UK adopt similar frameworks in the future.
Looking ahead
The Code of Practice for General-Purpose AI marks a significant milestone in the EU’s phased approach to AI regulation. Although voluntary, the code offers early clarity and structure for how GPAI providers can comply with key provisions of the EU AI Act, well in advance of its full application in 2026.
The code is a ‘living document’ that will evolve with technology, regulation, and international best practice, potentially becoming a global standard, especially in areas like transparency and copyright. Businesses should treat it as part of ongoing governance, not a one-off compliance task.
Bottom line: The GPAI code offers early clarity for meeting EU AI Act standards, helping providers and users reduce risk, demonstrate responsibility, and prepare for future regulation.
Conclusion
The publication of the code for GPAI is a major development in the EU’s AI regulatory rollout. It reflects a balanced, collaborative approach, giving industry a structured, flexible tool to start aligning with new legal expectations well before enforcement begins. For AI providers, signing the code can offer legal certainty, risk mitigation, and public credibility. For AI users and downstream businesses, it offers guidance on safe and ethical adoption.
At EM Law, we are experts in the legal issues surrounding AI, personal data and intellectual property. If you have questions about how these developments affect your business, your works or your use of AI tools, please do not hesitate to get in touch with our team here.