On 12 July 2024, the EU's Artificial Intelligence Act (AIA) was published in the EU's Official Journal, and it enters into force on 1 August 2024, marking a pivotal moment in the global regulation of AI technologies.  The AIA will then begin to apply incrementally, with different elements taking effect over the coming months and years: companies should use this time to risk-assess and prepare for compliance with the new rules.

Scope of application

The AIA primarily imposes regulatory obligations on two types of operators: AI system providers and deployers.  An AI system provider is essentially the developer of the AI system, or an entity that commissions its development and offers it on the market under its own name or trademark.  This can also include operators that modify and/or fine-tune a model.  An AI deployer, on the other hand, is the user of the AI system (except where the AI system is used for personal, non-professional activities).  It is crucial for operators to correctly identify whether they should be classified as providers or deployers of AI systems, as their regulatory obligations will depend on that classification.

In terms of its jurisdictional scope, the AIA can apply to operators located both within and outside the EU.  It covers any AI system placed on the EU market, put into service or used in the EU, as well as AI systems whose output is used in the EU, even where the operator is established elsewhere.

Classification of AI systems and related requirements

The AIA, broadly speaking, adopts a functional "risk-based" approach, which tailors the degree of regulatory intervention depending on the function of the AI, i.e. the use to which it is to be put.

At the most extreme end, AI systems that pose unacceptable risks to health, safety, and fundamental rights are prohibited altogether.  These include AI used for cognitive behavioural manipulation, emotion recognition in the workplace and educational institutions, and social scoring.

At the next level are the so-called "high-risk AI systems", which are considered to present significant potential risks to health, safety, and fundamental rights, and face the greatest degree of regulation under the AIA.  This category covers a number of specific standalone use cases, including AI used in critical infrastructure, certain educational and vocational training applications, and employment and workers' management (subject to exceptions), as well as AI systems that are products, or safety components of products, already subject to third-party conformity assessment requirements under sectoral EU regulation.  High-risk AI systems are subject to the most stringent regulatory obligations, including requirements relating to datasets and data governance, documentation and record keeping, human oversight, robustness, accuracy and security, as well as conformity assessment to demonstrate compliance.

General purpose AI models (GPAIs), i.e. AI models that can be used for a variety of different tasks and applications, including foundation models such as GPT-4, will face varying levels of regulation depending on whether or not they are considered to present "systemic risk" (in line with the broader risk-based approach under the AIA).  Systemic risk will be presumed for these purposes for models trained using a total computing power of more than 10²⁵ FLOPs (floating point operations).  However, the European Commission will also be able to designate models as being of systemic risk on the basis of other criteria indicating that the model has high impact capabilities, including the quality and size of the datasets, the input and output modalities of the model and its reach in terms of EU registered business users and registered end-users.
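
To make the presumption threshold concrete, the following minimal sketch (in Python) compares an estimated training-compute figure against the 10²⁵ FLOP mark.  The "6 × parameters × training tokens" estimate is a widely used rule of thumb for transformer training compute, not a calculation method prescribed by the AIA, and the model figures below are hypothetical.

    # Minimal sketch: checking an estimated training run against the AIA's
    # 10^25 FLOP presumption threshold for "systemic risk" GPAI models.
    # The 6 * parameters * tokens estimate is a common heuristic for
    # transformer training compute, NOT a method prescribed by the AIA.

    AIA_PRESUMPTION_THRESHOLD_FLOPS = 1e25

    def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
        """Rough estimate: ~6 FLOPs per parameter per training token."""
        return 6 * n_parameters * n_training_tokens

    def presumed_systemic_risk(training_flops: float) -> bool:
        """True if cumulative training compute exceeds the 10^25 FLOP threshold."""
        return training_flops > AIA_PRESUMPTION_THRESHOLD_FLOPS

    if __name__ == "__main__":
        # Hypothetical model: 70 billion parameters, 15 trillion training tokens.
        flops = estimated_training_flops(70e9, 15e12)
        print(f"Estimated training compute: {flops:.2e} FLOPs")   # ~6.30e+24
        print(f"Presumed systemic risk: {presumed_systemic_risk(flops)}")  # False

Even at that substantial scale, the hypothetical model falls just below the threshold, illustrating that the presumption is aimed at the very largest models; the Commission can nevertheless designate such a model as systemic risk on the basis of the other criteria mentioned above.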

Systemic risk GPAIs are subject to greater regulatory obligations, including in relation to model evaluation and the assessment and mitigation of systemic risks, while the obligations on non-systemic risk GPAIs focus on transparency and technical documentation requirements.  All providers of GPAIs must also put in place a policy to comply with EU copyright law, in particular, concerning copyright holders' reservation of rights.

AI systems that are considered "limited risk" include those that interact with people without it being reasonably obvious that they are AI systems (e.g. chatbots), and those that generate synthetic content (e.g. deep fakes).  These systems are generally only subject to transparency obligations, such as watermarking to inform users of AI involvement.

All other AI systems, i.e. those that do not fall under any of the above-mentioned categories, are not subject to any specific regulatory obligations under the AIA.

It should be noted, however, that the AIA also provides for the development of voluntary codes of conduct, intended to foster the voluntary application of further requirements to AI systems that are only subject to limited or no requirements.  It remains to be seen how significant the impact of such codes may be.

Significant detail is yet to come

The AIA is clearly a voluminous piece of legislation, running to nearly 150 pages and setting out elaborate regulatory requirements.  However, as is common with much EU legislation, the obligations are for the most part spelled out in terms of results, rather than operational or technical detail.  These specifics will be set out and operationalised in "harmonised standards", to be adopted by the EU's standardisation bodies, for the high-risk AI obligations, and in "codes of practice", to be approved by the European Commission, for the GPAI obligations.  Work on these further specifications is currently ongoing, with the aim of finalising them sufficiently in advance of the relevant obligations entering into force to give companies adequate time to prepare.

In addition, the European Commission is to adopt guidelines on a number of key aspects of the AIA, including the definition of an "AI system" subject to the AIA, the prohibited AI system categories, the high-risk classification categories and the application of the high-risk requirements.  Again, the aim is for these guidelines to be published before the relevant obligations come into force, with the guidance on the definition of an "AI system" and on the prohibited AI system categories being the immediate priority.

Enforcement and governance

The AIA is to be enforced at two different levels.  There will be centralised enforcement by the new "AI Office" (within the European Commission) in relation to GPAIs, while national supervisory authorities will enforce the other requirements, including those relating to high-risk AI systems and prohibited AI systems.  In addition, the AI Office and the AI Board — comprising representatives from national authorities, the European Commission, and the European Data Protection Supervisor — will coordinate the implementation of the AIA.

Enforcement will be backed by the potential to impose substantial fines.  In particular, penalties for infringements relating to prohibited AI systems may reach up to €35 million or 7% of worldwide annual turnover, whichever is higher.  Non-compliance with other obligations, including those relating to high-risk AI systems and GPAIs, could incur fines of up to €15 million or 3% of worldwide annual turnover.

Implementation timeline

The AIA will be implemented incrementally over the next few years, with the following key start dates:

  • 2 February 2025 – ban on prohibited AI systems
  • 2 August 2025 – GPAI obligations
  • 2 August 2026 – most of the remaining obligations, including those for the specific standalone high-risk AI categories and for the limited risk AI categories
  • 2 August 2027 – obligations in relation to high-risk AI systems that are products or safety components in products already subject to third-party conformity assessment requirements under sectoral EU regulation

What companies can now do to prepare

Companies should make the most of the implementation periods under the AIA to risk-assess and prepare for compliance now.  In particular, companies can consider the following steps:

  1. Conducting a thorough inventory audit to identify existing and envisaged future use of AI systems and assessing their risk classification under the AIA (see the illustrative sketch after this list).
  2. Considering the application of the relevant AIA requirements against existing AI design / development processes and governance and performing a gap analysis to identify the key areas where changes will be required.
  3. Assessing the changes that may need to be made in relation to contracts with suppliers and customers in light of the relevant AIA requirements.  This may include agreeing upon relevant classifications and the parties' respective roles in the AI supply chain and seeking assurances / warranties in relation to compliance with the relevant requirements for the purposes of risk mitigation.
  4. Monitoring relevant developments that will further clarify the classification of their AI systems under the AIA and the relevant requirements, including the forthcoming guidelines from the Commission and the draft standards and codes of practice.
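
By way of illustration only, the sketch below shows one way such an AI system inventory might be structured in code, using the risk tiers and operator roles discussed above.  The system names and classifications are hypothetical; actual classification requires legal analysis against the AIA's annexes and the Commission's forthcoming guidelines.

    # Illustrative sketch only: a minimal AI system inventory with AIA risk
    # tiers.  The classifications below are hypothetical placeholders; real
    # classification requires legal analysis against the AIA itself.

    from dataclasses import dataclass
    from enum import Enum

    class AIARiskTier(Enum):
        PROHIBITED = "prohibited"
        HIGH_RISK = "high-risk"
        LIMITED_RISK = "limited risk (transparency obligations)"
        MINIMAL_RISK = "no specific AIA obligations"

    class OperatorRole(Enum):
        PROVIDER = "provider"
        DEPLOYER = "deployer"

    @dataclass
    class AISystemRecord:
        name: str
        intended_purpose: str
        role: OperatorRole        # provider vs deployer drives the obligations
        risk_tier: AIARiskTier
        notes: str = ""           # e.g. the basis for the classification

    # Hypothetical inventory entries
    inventory = [
        AISystemRecord(
            name="CV screening tool",
            intended_purpose="shortlisting job applicants",
            role=OperatorRole.DEPLOYER,
            risk_tier=AIARiskTier.HIGH_RISK,  # employment uses appear in the high-risk categories
            notes="Confirm against the AIA's high-risk list and any exceptions.",
        ),
        AISystemRecord(
            name="Customer support chatbot",
            intended_purpose="answering customer queries",
            role=OperatorRole.DEPLOYER,
            risk_tier=AIARiskTier.LIMITED_RISK,  # users must be told they are interacting with AI
        ),
    ]

    for record in inventory:
        print(f"{record.name}: {record.role.value}, {record.risk_tier.value}")

Keeping the operator role alongside the risk tier reflects the point made above: the obligations that attach to each system depend on both its classification and whether the company acts as provider or deployer.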

Key contacts

Lode Van Den Hende, Partner, Brussels
Dr Morris Schonberg, Partner, Brussels
Nika Nonveiller, Associate, Brussels
Kian O'Connell, Associate, Brussels