Whether you’re using ChatGPT to organise your email inbox or developing the latest ‘agentic’ AI models, AI regulation has arrived in the EU in the form of the EU Artificial Intelligence Act (AI Act).
The AI Act builds on a number of existing EU laws and, further, introduces entirely novel concepts applicable to both ‘deployers’ and ‘providers’ of AI technologies. More on this below.
It is the culmination of a multi-year effort to lay down rules for the use of AI within the EU by both public and private bodies, outlawing certain practices altogether and setting strict requirements around other forms of AI. Like the General Data Protection Regulation (GDPR) and its implications for the use of personal data, the AI Act is likely to become the guiding piece of legislation for the use of AI around the world. Individuals and organisations in the UK and elsewhere will both be affected by its implications and be required to comply with its provisions.
The AI Act came into force on 1 August 2024. Its most restrictive provisions, covering the highest-risk forms of AI, apply from this month (2 February 2025). However, the bulk of the legislation will not apply until 2 August 2026 or 2 August 2027.
In this blog, we provide a general overview of the AI Act. We discuss its scope and set out the requirements it imposes on certain individuals and organisations that fall within it.
TABLE OF CONTENTS
- Important definitions
- Who does the AI Act apply to?
- What is the territorial application of the AI Act?
- When does the AI Act not apply?
- The AI Act’s risk-based approach
- What are the obligations applicable to AI systems?
- What are the obligations applicable to general-purpose AI models?
- Penalties
- What is and isn’t in force?
- Complying with the AI Act and other laws
Important definitions
The AI Act focuses on two key definitions of AI that fall within its scope:
An ‘AI system’ means ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.
This definition covers essentially the common understanding of what an AI tool is: AI-powered chatbots; image, text and video generators; AI agents; risk management tools and so on. Your favourite AI art generator is an AI system.
The EU Commission has published further guidance around the definition, breaking it down into parts.
A ‘general-purpose AI model’ is an ‘AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market’.
These are your underlying models that can (but do not necessarily have to) power AI systems for a wide variety of purposes – GPT-4o, o3-mini, Llama 3, Gemini 2.0 and so on.
The AI Act also includes a related definition of ‘general-purpose AI system’ which is an AI system based on a general-purpose AI model. In other words, where there is reference to an AI system in the AI Act, that includes AI systems that rely on general-purpose AI models, for example, ChatGPT.
There is interplay between these two definitions (a wide variety of AI systems use GPT-4o to function, for example, but many do not have a general-purpose application). In short, however, AI as commonly used by individuals and organisations around the world falls within the scope of the AI Act.
But, as we discuss below, different levels of obligations apply to AI systems and general-purpose AI models.
Who does the AI Act apply to?
The AI Act applies to:
A deployer or a provider.
- Deployer: ‘a natural or legal person (an individual or an organisation), public authority, agency or other body using an AI system under its authority…’
- Provider: ‘a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.’
An importer or distributor.
- Importer: ‘a natural or legal person located or established (in the EU) that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.’
- Distributor: ‘a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the (EU) market.’
The AI Act also applies to importers or distributors who make an AI system available on the market under the name of another organisation or individual (who would likely be the deployer in that scenario).
A product manufacturer.
- Product manufacturer: not explicitly defined in the AI Act, however, this would cover organisations and individuals that develop certain types of products that are subject to EU safety legislation under their own name or trademark.
There is interplay between providers and deployers. The AI Act envisions scenarios where providers can become deployers and deployers, providers. The key point is that there is no hard barrier between the two concepts. Between importers, distributors and providers, there can also be conceivable issues with white labelling and responsibility for compliance where the AI system is being marketed in the EU under another name by the importer (who may in that scenario become the provider). This is an area where forthcoming EU guidance will be extremely important.
What is the territorial application of the AI Act?
The AI Act applies to the above individuals or organisations in the following circumstances:
1. Deployers that have their place of establishment, or are located, within the EU.
2. Providers, irrespective of where they are located in the world, that place an AI system or general-purpose AI model onto the EU market.
3. Providers or deployers of an AI system, irrespective of where they are located in the world, where such AI system’s output is used within the EU.
This third criterion is the widest category, and it is likely to be subject to further guidance from the EU. For example, providers based in the US who are working with US customers, who then go on to use the output of the provider’s AI system (even inadvertently) within the EU, may be caught.
4. Importers and distributors that place or make available an AI system on the EU market.
5. Product manufacturers that put an AI system, together with a relevant product, on the EU market.
For the purposes of territorial applicability, the AI Act will also apply to Norway, Iceland and Liechtenstein under the European Economic Area (EEA) framework.
Effectively, any user or developer of an AI system or general-purpose AI model, whether or not they are based in the EU, might be required to comply with the AI Act if the AI (or its output) is connected to or used within the EU in any way.
When does the AI Act not apply?
There are various carve-outs from the AI Act’s applicability. The most important are as follows:
- Individuals that use AI systems for personal use are not required to comply with the AI Act.
- AI systems or general-purpose AI models (including the output of such AI systems or models) used for scientific and/or development purposes do not fall within the AI Act.
- Testing of AI systems or general-purpose AI models by deployers or providers, before such systems or models are placed on the market or put into service, is not within the scope of the AI Act, unless that testing is ‘real world’ testing.
- AI systems used exclusively for military and national defence purposes.
The AI Act’s risk-based approach
Risk levels – AI systems
The AI Act differentiates between 4 levels of risk for AI systems. The level of risk dictates the restrictions and/or obligations that apply:
Unacceptable risk
AI systems whose activities pose too much of a risk to be used within the EU and are therefore prohibited.
The prohibited categories of AI systems are as follows:
- AI systems that engage in ‘subliminal, purposefully manipulative or deceptive techniques’ that cause or are likely to cause significant harm to another person.
- AI systems that exploit vulnerable people, insofar as the AI system materially distorts the behaviour of the vulnerable person and causes, or is reasonably likely to cause, that person or another person significant harm.
- AI systems for social scoring.
- AI systems for predictive policing.
- AI systems that are part of or are used to create facial recognition databases.
- AI systems that can be used to infer emotions in the workplace or in an educational setting (unless used for medical or safety reasons).
- AI systems that use special category personal data for biometric categorisation purposes.
- AI systems that use biometrics to identify individuals in real time when used by law enforcement (although there are exceptions).
The EU Commission has recently issued guidance exploring the types of AI systems that could fall within the unacceptable risk category.
High risk
AI systems that present a high risk to the rights and freedoms of individuals within the EU and need to be carefully monitored. The bulk of the AI Act’s requirements fall on high-risk AI systems.
These AI systems should be considered high-risk if:
- Annex I systems: An AI system that is part of a safety component subject to certain EU safety and product legislation. The applicable legislation is set out in Annex I. This is the category that is mainly applicable to product manufacturers.
- Annex III systems: Other specified high-risk AI systems are listed in Annex III, subject to certain exemptions:
a) Biometrics
b) Critical infrastructure
c) Education and vocational training
d) Employment, workers management and access to self-employment
e) Access to and enjoyment of essential private services and essential public services and benefits (matters such as insurance, benefits, credit or risk assessment or pricing)
f) Law enforcement
g) Migration, asylum and border control management
h) Administration of justice and democratic processes
Importantly, Annex III high-risk AI systems are not considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. The latter can be the case if one or more of the following criteria are met:
- An AI system is intended to perform a narrow procedural task.
- An AI system is intended to improve the result of a previously completed human activity.
- An AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review.
- An AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the high-risk categories set out in Annex III.
It remains to be seen whether further guidance from the EU (expected in 2026) will clarify the inclusions and exclusions, as in many circumstances two individuals could come to two different conclusions.
Limited risk
These are AI systems that can be used freely with minimal restrictions but that have specific transparency obligations.
The vast majority of AI systems will fall within this category:
- AI systems that directly interact with individuals (chatbots).
- AI systems (which also include general-purpose AI systems) that generate or manipulate text, image, audio or video content.
- AI systems that generate or manipulate text which is published with the purpose of informing the public on matters of public interest.
- AI systems that generate or manipulate image, audio or video that constitutes a deep fake (a deep fake being ‘content that resembles existing persons, objects, entities or events and would falsely appear to a person to be authentic or truthful’).
- Emotion recognition systems.
- Biometric categorisation systems.
Risk levels – General-purpose AI models
There is a core set of obligations that apply to all providers of general-purpose AI models (see below). There are additional obligations that apply to providers in respect of general-purpose AI models that pose a systemic risk to individuals within the EU.
A general-purpose AI model will be viewed under the AI Act as having systemic risk where it has:
- High impact capabilities (a model whose cumulative training compute is greater than 10^25 FLOPs (floating point operations), or a model that is designated as having high impact capabilities by the EU Commission), or
- An equivalent impact or capabilities to a model with cumulative training compute greater than 10^25 FLOPs, taking into account the following criteria:
a) the number of parameters within the model
b) the quality or size of the data set
c) the amount of computational power
d) the type of input or output the model uses
e) applicable benchmarks
f) whether the model has an extensive reach (which shall be presumed by the EU where the general-purpose AI model has at least 10,000 ‘registered business users’ in the EU)
g) the number of end users within the EU
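To make the primary threshold concrete, here is a minimal Python sketch of the headline test described above: a model is presumed to have high impact capabilities where its cumulative training compute exceeds 10^25 FLOPs, or where it is designated as such. The function and field names are illustrative assumptions rather than anything prescribed by the AI Act, and the more qualitative ‘equivalent capabilities’ criteria listed above are not modelled.

```python
# Illustrative sketch only: the AI Act's primary systemic-risk test for a
# general-purpose AI model. Names and example values are assumptions.

FLOP_THRESHOLD = 1e25  # cumulative training compute threshold in the AI Act


def has_systemic_risk(training_flops: float, designated_by_commission: bool) -> bool:
    """Return True if the model is presumed to have systemic risk.

    A model is treated as having 'high impact capabilities' if its cumulative
    training compute exceeds 10^25 FLOPs, or if it is designated as such.
    The separate 'equivalent capabilities' assessment (parameters, dataset
    size, benchmarks, reach, end users) is qualitative and not modelled here.
    """
    return training_flops > FLOP_THRESHOLD or designated_by_commission


# Example: a model trained with ~5 x 10^25 FLOPs crosses the threshold.
print(has_systemic_risk(5e25, designated_by_commission=False))  # True
print(has_systemic_risk(8e24, designated_by_commission=False))  # False
```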
What are the obligations applicable to AI systems?
All AI systems
Providers and deployers of AI systems are required to take ‘measures’ to ensure that their staff and other personnel that use or develop AI systems are AI literate. The level of risk does not matter.
High-risk AI systems
The majority of the obligations fall on providers and deployers of high-risk AI systems.
A short summary is set out below (broken down by the relevant role an individual or organisation could have when developing or using an AI system).
Providers
Providers of high-risk AI systems will be obligated to:
1. Establish, implement, document and maintain a risk management system. This is envisaged as being a ‘continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating’ and be comprised of the following steps:
- ‘the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose’
- ‘the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse’
- ‘the evaluation of other risks possibly arising’ which will involve a post-market monitoring system and the data gathered from it (see further below)
- ‘the adoption of appropriate and targeted risk management measures designed to address the risks identified’
The central idea is that the AI system is tested in a rigorous way. Such tests ‘shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system’ to guard against the identified risks and to ensure that risk mitigation measures work in practice.
2. Institute data management and governance systems over the high-risk AI system. This covers all relevant datasets, including training data and input and output data, on an ongoing basis.
Importantly, there is an express interplay with the GDPR in this context. It is only permissible to use special category personal data for training, validating and testing high-risk AI systems where it is strictly necessary for bias detection and correction. This is more restrictive than what is currently legally possible under the GDPR.
3. Produce technical documentation that demonstrates how the high-risk AI system functions. This includes:
- A general description of the high-risk AI system.
- A detailed description of its elements and the process for its development.
- Detailed information as to how the AI system is controlled.
- A description of the appropriateness of the performance metrics used.
- A detailed description of the risk management system (see above).
- A description of the relevant changes made by the provider during the high-risk AI system’s development.
- A list of the standards applicable to the AI system.
- A detailed description of the post-market measures and systems in place to measure how the high-risk AI system operates and is used when it is deployed.
4. Ensure that the high-risk AI system is capable of recording events and usage logs over its lifetime.
5. Ensure that the operation of the high-risk AI system is transparent to the extent that the deployer can interpret and use its output. This includes providing usage and instruction documentation for each deployer – which must include (in summary):
- ‘The identity of the provider and, where applicable, its authorised representative’ (see below)
- ‘The characteristics, capabilities and limitations of performance of the high-risk AI system’
- ‘The changes to the high-risk AI system and its performance which have been predetermined by the provider’
- ‘the human oversight measures’ (see below)
- ‘the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures’
- ‘A description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret’ its event and usage logs
6. Design, develop and maintain the high-risk AI system in such a way that ensures human oversight.
7. Design, develop and maintain the high-risk AI system in such a way that ensures accuracy, robustness and cybersecurity of the high-risk AI system.
8. Provide relevant information to end users, including the provider’s name, trade name and contact address.
9. Ensure that the high-risk AI system complies with all applicable accessibility requirements.
10. Implement a quality management system to ensure that the provider itself complies with the AI Act on an ongoing basis.
11. Keep a record of its technical documentation, quality management system documentation and other relevant documents for a period of 10 years from the date that the high-risk AI system is deployed.
12. Take corrective actions (including steps such as recalls, disabling access or withdrawing the high-risk AI system from sale) where the provider considers, or has a reason to consider, that the high-risk AI system is not in compliance with the AI Act.
13. Cooperate with all the relevant authorities.
14. For providers established or located outside of the EU, appoint an authorised representative (akin to the current requirement under the GDPR).
15. Carry out a detailed conformity assessment prior to deployment to ensure that all aspects of the AI Act have been complied with. There are specific protocols that the provider must follow, depending on the category of the high-risk AI system they intend to deploy.
16. Develop a ‘declaration of conformity’ which is a statement that the high-risk AI system complies with all applicable law and contains specific information about the provider. This needs to be retained and provided to a competent authority upon request.
17. Affix a CE marking to the high-risk AI system. This CE marking is a common practice for physical products being made available on the EU market.
18. Register themselves on an EU database of high-risk AI systems.
19. Carry out post-market monitoring, which will ‘actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems’ with the AI Act.
20. Report all serious incidents pertaining to the high-risk AI system. A serious incident is one that leads to:
- Death or serious damage to an individual’s health
- Serious and irreversible disruption of critical infrastructure
- A breach of EU law that protects fundamental rights and freedoms
- Serious damage to property or the environment
The relevant authority must be notified no later than 15 days after the provider becomes aware of the incident. Shorter timescales apply in the event of a serious and irreversible disruption of critical infrastructure or the death of a person.
Deployers
Deployers have less intensive obligations where they are deploying high-risk AI systems. Their obligations include:
1. Comply with the instructions of the provider and ensure that technical and organisational measures are in place within the deployer’s organisation to enable that compliance. These instructions are the ones the provider is required to produce and provide to the deployer, not day-to-day instructions about other matters.
2. Ensure human oversight of the high-risk AI system.
3. Where the deployer has control over the input data, ensure that the input data is relevant and applicable to the high-risk AI system.
4. Carry out ongoing monitoring of the high-risk AI system.
5. Maintain the logs generated by the high-risk AI system.
6. Be transparent with employees where any personnel are impacted by the deployment of a high-risk AI system.
7. Carry out a data protection impact assessment if they will process personal data in connection with the high-risk AI system.
8. If the high-risk AI system makes decisions applicable to individuals, ensure that the relevant individuals are informed that they are subject to the use of AI, and respond to any request from a person subject to such a decision to explain how that decision was reached.
9. Cooperate with relevant authorities.
10. If the high-risk AI system is evaluating creditworthiness of individuals or assessing them in the context of life or health insurance, carry out a ‘Fundamental Rights Impact Assessment’ for the high-risk AI system. Deployers are able to rely on any prior Fundamental Rights Impact Assessment carried out by the provider if it has used the system in practice. There is a requirement to notify the relevant authority that an assessment has been carried out.
Product Manufacturers
Product manufacturers are essentially equivalent to providers. The obligations applicable to providers will apply to product manufacturers, except that the necessary analysis and corresponding documentation can be integrated with the existing documentation they are required to hold and submit to the applicable EU or member state body responsible for certifying that their product is safe.
Importers
Amongst other requirements, prior to placing the high-risk AI system on the market, an importer must ensure that the high-risk AI system complies with the AI Act by checking that:
- The conformity assessment has been carried out by the provider
- The provider has produced the relevant technical documentation pertaining to the high-risk AI system
- The high-risk AI system bears the CE marking
- The provider has appointed an authorised representative, if applicable
The importer cannot put the high-risk AI system on the market in the EU without ensuring that it complies with the AI Act. If it falls into non-compliance, the importer will be required to notify the relevant parties (including the relevant authority).
Under the AI Act, the importer is intended to act as a verification check, and it remains liable under the AI Act if the high-risk AI system poses a significant risk to the rights and freedoms of individuals within the EU.
Distributors
Distributors have similar obligations to importers, save that they are also required to verify the importer’s compliance with the AI Act. It is evident that the final point of sale bears responsibility across the supply chain.
Importantly, the distributor has an obligation, additional to those of an importer, to bring the high-risk AI system into compliance with the AI Act and take ‘corrective actions’ to ensure its compliance.
Limited risk AI systems
Where providers and deployers develop certain types of limited risk AI systems (see above), they will be required to be transparent about certain aspects of the AI system and comply with any additional requirements applicable to the type of limited risk AI system in question.
The informational requirements applicable to the AI system must be provided in a ‘clear and distinguishable manner’ and at the time of the first interaction or exposure to the relevant AI system (or its output) or earlier.
Providers
- Providers of an AI system that is intended to directly interact with individuals must make it clear that they are interacting with an AI, unless it would be obvious to the relevant individual.
- Providers of AI systems, including general-purpose AI systems, that generate or manipulate audio, image, video or text content shall ensure that such content is ‘detectable’ as AI generated.
This is one of the key obligations in the AI Act (as generative AI is the most common type of AI) and is likely to be the most expansive and important. There is a further requirement on providers to ensure that the relevant solutions are ‘effective, interoperable, robust and reliable as far as this is technically feasible.’
How this requirement is to function is still under debate.
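The AI Act does not prescribe a particular technical mechanism, and the standards are still emerging. Purely as an illustration of the kind of machine-readable marking that could make output ‘detectable’, the hedged sketch below embeds a provenance tag in a generated PNG using the Pillow library; the tag names and helper functions are assumptions for the example, not a format required by the AI Act.

```python
# Illustrative only: one possible way to attach a machine-readable
# "AI-generated" marker to image output. The AI Act does not mandate this
# format; the tag names below are assumptions for the example.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_ai_provenance(image: Image.Image, path: str, model_name: str) -> None:
    """Save a PNG with embedded text chunks declaring it as AI-generated."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")         # hypothetical tag
    metadata.add_text("generator_model", model_name)  # hypothetical tag
    image.save(path, pnginfo=metadata)


def is_marked_ai_generated(path: str) -> bool:
    """Check the embedded text chunks of a PNG for the marker."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai_generated") == "true"


# Example usage with a placeholder image standing in for model output.
save_with_ai_provenance(Image.new("RGB", (64, 64)), "output.png", "example-model")
print(is_marked_ai_generated("output.png"))  # True
```

Metadata of this kind is easy to strip, which is one reason more robust approaches such as watermarking remain part of the debate mentioned above.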
Deployers
- Deployers of ‘emotion recognition and biometric categorisation systems’ must tell individuals exposed to them that they are operating and ensure that any related personal data processing is done in compliance with the GDPR.
For example, deployers of CCTV cameras with AI capacity would be required to display information (with their standard CCTV notices) about the use of the AI system.
- Deployers of AI systems that generate or manipulate image, audio or video content that constitutes a deep fake: ‘shall disclose that the content has been artificially generated or manipulated’.
Where the deep fake is ‘evidently’ artistic, creative, satirical or fictional the requirement is reduced to a disclosure in such a way that does not ‘hamper the display or enjoyment of the work’.
- Deployers of an AI system that generates or manipulates text for the purpose of informing the public on matters of public interest will be required to disclose that the text has been artificially generated or manipulated. This does not apply to text that has ‘undergone’ a human review. It remains to be seen how much human review is necessary for this exemption to apply.
What are the obligations applicable to general-purpose AI models?
Providers of general-purpose AI models are required to comply with the AI Act. The level of compliance depends on whether the general-purpose AI model presents a systemic risk or not (see above).
All models
Providers of all models are required to:
- Produce the technical documentation pertaining to the model (including its training and testing process and the results of its evaluation) – see the discussion above pertaining to the relevant requirement on providers more generally.
- Produce all necessary information and documentation to providers of AI systems that will integrate the general-purpose AI model into their AI systems. The documentation will likely need to cover all the documentation providers are required to keep more generally (see above), and, in addition, to be sufficient to enable providers of downstream AI systems to understand the general-purpose AI model and comply with their obligations under the AI Act.
- Develop and adhere to a policy to comply with copyright and related intellectual property right laws.
- Produce and make publicly available a ‘sufficiently detailed summary about the content used for training of the general-purpose AI model’.
The documentation obligations above do not apply to providers who make their general-purpose AI model available on a free and open-source basis; the copyright policy and training content summary requirements still apply.
Providers based abroad will also have to appoint an authorised representative in the EU.
General-purpose AI models with systemic risk
In addition to the obligations on all providers of general-purpose AI models (which apply in full even where a systemically risky model is made available on an open-source basis), providers of general-purpose AI models with systemic risk are required to:
- Perform a ‘model evaluation’ which would include adversarial testing to identify and mitigate systemic risks.
- Assess and mitigate possible systemic risks and those that would result from the use of the model.
- Keep track of serious incidents related to the model and address them.
- Maintain an adequate level of cybersecurity protection for the general-purpose AI model.
- Notify the relevant authority within the EU that their model meets the criteria of having systemic risk, within two weeks of those criteria being met or of the provider becoming aware that they are met.
Penalties
The AI Act will be enforced by significant penalties:
- Any person or organisation that develops or uses a prohibited AI system will be subject to civil fines of up to €35 million or 7% of the organisation’s worldwide annual turnover.
- Non-compliance with the remaining obligations can lead to civil fines of up to €15 million or 3% of the organisation’s worldwide annual turnover.
- The supply of incorrect or misleading information in response to a request from an authority under the AI Act can lead to civil fines of up to €7.5 million or 1% of the organisation’s worldwide annual turnover.
For SMEs, the applicable cap is the lower of the two amounts; for larger organisations, it is the higher.
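As a worked illustration of that arithmetic, the short Python sketch below computes the cap for the prohibited-practices tier as summarised in this blog (a fixed sum or a percentage of worldwide annual turnover, taking the lower for SMEs and the higher otherwise). The turnover figures are invented for the example.

```python
# Illustrative sketch of the fine caps summarised above. The turnover figures
# are invented; the fixed sum and percentage follow the blog's summary.

def prohibited_practice_cap(worldwide_turnover_eur: float, is_sme: bool) -> float:
    """Cap for the prohibited-AI tier: EUR 35m or 7% of worldwide annual
    turnover - the lower of the two for SMEs, the higher otherwise."""
    fixed, percentage = 35_000_000, 0.07 * worldwide_turnover_eur
    return min(fixed, percentage) if is_sme else max(fixed, percentage)

# A large organisation with EUR 2bn turnover: 7% (EUR 140m) exceeds EUR 35m.
print(prohibited_practice_cap(2_000_000_000, is_sme=False))  # 140000000.0
# An SME with EUR 10m turnover: 7% (EUR 0.7m) is lower than EUR 35m.
print(prohibited_practice_cap(10_000_000, is_sme=True))      # 700000.0
```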
What is and isn’t in force?
The provisions pertaining to prohibited AI systems are now in force (from 2 February 2025). So are the obligations pertaining to AI literacy.
The obligations on providers of general-purpose AI models will be in force from 2 August 2025.
Most other provisions will be in force from 2 August 2026, including the requirements on providers of some high-risk AI systems and the transparency requirements for limited risk AI systems.
The obligations applicable to AI systems integrated with regulated products will be enforced from 2 August 2027.
Complying with the AI Act and other laws
If you are a UK business looking to provide or deploy an AI system in the EU, then you need to be prepared for the AI Act.
Furthermore, as touched on above, the GDPR and other existing rules will already apply to the development and use of AI – whether or not the AI system falls within the AI Act.
At EM Law, we are at the cutting edge of legal developments in this area and have assisted many clients in the UK and internationally with developing and deploying AI systems. If you have any questions about AI or AI compliance, please don’t hesitate to contact us here.