In 2024, the EU adopted the AI Act, the world's first comprehensive AI-specific legal framework. The AI Act takes a risk-based, functional approach, under which the degree of regulation depends on the risks that particular AI functions may pose to health, safety and fundamental rights. It is supplemented by civil liability and product safety legislation, which provide additional safeguards for safe AI use. Finally, the European Commission (EC) has been proactively exploring the application of competition law to the AI sector, recognising the importance of maintaining a competitive and fair market environment as technology and digital markets continue to evolve.

The EU's approach to artificial intelligence was initially set out in the 2018 European AI Strategy, following which the European Commission presented its 2021 AI package, consisting of:

  • The EC's Communication on fostering a European approach to AI.
  • A review of the Coordinated Plan on Artificial Intelligence with EU Member States.
  • A proposal for a regulatory framework on artificial intelligence, which has since been adopted as the AI Act.

The Commission further launched an AI innovation package that sets out measures to support European startups and SMEs in the development of trustworthy AI.

A key pillar of the European AI Strategy is a human-centric and trustworthy AI ecosystem that creates a safe and innovation-friendly environment for users, developers and deployers. To contribute to building trustworthy AI, the Commission proposed three key interrelated legal initiatives:

  • A European legal framework for AI addressing the safety and fundamental rights risks specific to AI systems (now the AI Act).
  • A civil liability framework adapting liability rules to AI (the proposed AI Liability Directive, since withdrawn).
  • A revision of sectoral and product safety legislation (including the revised Product Liability Directive and the General Product Safety Regulation).

The Commission's 2025 Work Programme outlines its ambition to boost competitiveness, enhance security and bolster economic resilience in the EU. This includes a series of Omnibus packages designed to simplify EU policies and laws; the EC Vice-President for Tech Sovereignty, Security and Democracy, Henna Virkkunen, announced that these would include a package intended to address the overlap between the EU AI Act, the Digital Services Act, the Digital Markets Act and the General Data Protection Regulation (GDPR). A proposal for a Cloud and AI Development Act was also announced as part of the AI Continent Action Plan, which aims to capitalise on the opportunities provided by AI.

Helpful resources include the European Commission's website on the European approach to artificial intelligence, which provides comprehensive information about the EU's AI strategy and policies, as well as key milestones.

The AI Act is the EU's horizontal regulatory framework for all AI systems. Broadly speaking, it adopts a functional, "risk-based" approach, with the degree of regulatory intervention depending on the function of the AI, that is, the use to which it is to be put. The regulatory obligations are imposed on providers (developers) and deployers (users) of AI systems and apply to operators located both within and outside the EU, so long as the output from the AI system is used in the EU.

The different regulatory categories under the AI Act are as follows:

  • Prohibited AI systems: AI systems that are considered to involve unacceptable risks to health, safety and fundamental rights are prohibited outright. This includes, for example, AI used for cognitive behavioural manipulation, inferring emotions in the workplace and educational institutions, and social scoring. On 4 February 2025, the Commission published guidelines on prohibited AI practices as defined by the AI Act.
  • High-risk AI systems: AI systems considered to involve significant potential risks to health, safety and fundamental rights, including AI used in critical infrastructure, certain educational and vocational training applications, and employment and worker management, as well as AI systems that are products, or safety components of products, already subject to third-party conformity assessment requirements under sectoral EU regulation. These are subject to the greatest degree of regulation under the AI Act, including requirements on datasets and data governance, documentation and record keeping, human oversight, robustness, accuracy and security, and conformity assessments to demonstrate compliance.
  • Limited risk AI systems: AI systems that interact with people where it is not reasonably obvious that they are interacting with an AI system, and AI systems that generate synthetic content or manipulate content (including deepfakes). These are generally subject only to transparency obligations and watermarking, to ensure people are aware that an AI system is being used.
  • General purpose AI models (GPAI): foundation models such as OpenAI's GPT-4 are subject to varying degrees of regulation, depending on whether they are designated as posing "systemic risk". This depends, among other things, on the level of compute used to train the model (see the illustrative sketch after this list). Systemic-risk GPAI models are subject to material regulatory obligations in some ways akin to the high-risk category, while non-systemic-risk GPAI models are subject to fewer obligations, focused mainly on transparency.
  • Minimal risk AI systems: this covers all other AI systems, which are not subject to specific regulatory obligations under the AI Act.
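
For readers who prefer a schematic view, the tiering above can be caricatured in a few lines of Python. This is purely an illustrative sketch of our own making – the enum and function names are inventions, not anything defined by the AI Act – but the 10^25 FLOP figure reflects the training-compute threshold above which Article 51(2) of the AI Act presumes a GPAI model to pose systemic risk.

    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "prohibited"    # banned outright
        HIGH = "high-risk"           # heaviest obligations
        LIMITED = "limited risk"     # transparency and watermarking duties
        MINIMAL = "minimal risk"     # no specific AI Act obligations

    # Article 51(2) AI Act presumes a GPAI model has high-impact capabilities
    # (systemic risk) when its cumulative training compute exceeds 10^25 FLOPs.
    SYSTEMIC_RISK_TRAINING_COMPUTE_FLOPS = 1e25

    def gpai_presumed_systemic(training_compute_flops: float) -> bool:
        # A rebuttable presumption only; the Commission may also designate
        # models as systemic-risk on other grounds.
        return training_compute_flops > SYSTEMIC_RISK_TRAINING_COMPUTE_FLOPS

    # Example: a model trained with 3e25 FLOPs is presumed systemic-risk.
    assert gpai_presumed_systemic(3e25)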


From 2 February 2025, all providers and deployers of AI systems have been obliged under the AI Act to ensure a sufficient level of AI literacy among staff dealing with those AI systems. The EU AI Office published a living repository of AI literacy best practices on 4 February 2025.

The AI Act is a voluminous piece of legislation; however, its obligations are for the most part articulated in terms of results rather than operational or technical detail. These specifics will be set out in due course, including for:

  • High-risk AI obligations, where "harmonised standards" are to be adopted by the EU's standardisation bodies.
  • GPAI obligations, where "codes of practice" are to be prepared (and potentially adopted by the EC) in line with this tentative timeline. The third draft of the GPAI Code of Practice was published in March 2025. Further information on the GPAI obligations and the Code of Practice can be found in this Q&A by the EU AI Office.

In order to further clarify the application of the AI Act in practice, the AI Office regularly hosts the "AI Pact Events", a full list of which can be accessed here.

Implementation timeline

While the AI Act was adopted in mid-2024, it is being implemented incrementally over the following years, with these key start dates (summarised in the illustrative sketch after this list):

  • 2 February 2025 – ban on prohibited AI practices and AI literacy requirements
  • 2 August 2025 – GPAI obligations
  • 2 August 2026 – most of the remaining obligations, including for the specific standalone high-risk AI categories and the obligations for the limited risk AI categories
  • 2 August 2027 – obligations in relation to high-risk AI systems that are products or safety components in products already subject to third-party conformity assessment requirements under sectoral EU regulation
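
For illustration only, the phased application can be expressed as a small date lookup, as in the Python sketch below. The labels and helper function are our own shorthand for the tranches above, not statutory language, and assume the simple rule that each tranche applies from its start date onward.

    from datetime import date

    # Shorthand labels for the AI Act's phased application dates listed above.
    AI_ACT_MILESTONES = {
        date(2025, 2, 2): "prohibited-practice ban and AI literacy requirements",
        date(2025, 8, 2): "GPAI model obligations",
        date(2026, 8, 2): "most remaining obligations (standalone high-risk and limited risk)",
        date(2027, 8, 2): "high-risk AI in sectorally regulated products",
    }

    def obligations_in_force(on: date) -> list[str]:
        # Return the tranches already applicable on a given date.
        return [label for start, label in sorted(AI_ACT_MILESTONES.items()) if on >= start]

    # By 1 September 2025, the first two tranches apply.
    print(obligations_in_force(date(2025, 9, 1)))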


Read our key takeaways on the AI Act here.

From consumer protection law to online safety, AI continues to stretch existing legal frameworks. See the latest updates below.

The European General Data Protection Regulation (GDPR) is the EU's framework governing the collection and processing of personal data in the EU. AI systems – whether deployed by an AI provider within or outside of the EU – must comply with the requirements set out in the GDPR if they fall within the scope of the legislation when collecting and processing the personal data of individuals.

The GDPR grants individuals several important rights, including rights of access, to be forgotten, to object, to rectification, to restrict processing, to data portability, and not to be subject to decisions based solely on automated processing. These ensure that data subjects have control and transparency over how their personal data is used.

The European Data Protection Board (EDPB) helps to ensure that the GDPR is applied consistently between Member States' Data Protection Authorities and facilitates cooperation in enforcement.

  • On 17 July 2024, the EDPB issued a statement calling on Member States to designate Data Protection Authorities as Market Surveillance Authorities under the AI Act, with a deadline of 2 August 2025.
  • In 2024, the EDPB formed a task force to investigate data processing by OpenAI’s ChatGPT and, on 12 February 2025, expanded its scope to include the Chinese AI platform DeepSeek. A quick response team is being set up to guide and coordinate enforcement actions across different Data Protection Authorities whenever concerns arise related to AI data processing.

The European Data Protection Supervisor (EDPS) is responsible for monitoring how EU institutions and bodies process personal data under Regulation (EU) 2018/1725, the EU-institutions counterpart to the GDPR. Under the AI Act, the EDPS will serve as the competent authority for those institutions and bodies, supervising their AI activities to ensure alignment with data protection principles and rules.

The (revised) Product Liability Directive establishes a specific type of liability for certain types of damage – strict liability (ie, non-fault based) for defective products causing death, personal injury or damage to personal property.

  • The Directive provides for strict liability in relation to defective "products" and defective "components" (which cause their product to be defective). Both are defined in a way that includes AI systems (namely, as "software" and "intangibles"), meaning the Directive applies both to standalone AI systems and to AI-enabled products where the AI system is a component of the product (such as a self-driving car).
  • The Directive sets out rebuttable presumptions of defectiveness and causation, including where the claimant faces excessive difficulties due to technical or scientific complexity. It also establishes new rules in relation to disclosure of evidence.
  • It entered into force on 8 December 2024, and Member States have until 9 December 2026 to transpose it into national law.

In 2022, the EC proposed the AI Liability Directive, which focused on procedural facilitations for AI-related claims under Member States' fault-based civil liability regimes, in particular as regards the disclosure of evidence and rebuttable presumptions for establishing causation. However, after very slow progress through the legislative procedure, the EC withdrew the proposal in February 2025, citing a lack of "foreseeable agreement". The EC indicated it would consider whether to adopt another proposal in the future.

The General Product Safety Regulation outlines the EU's new general framework for the safety of non-food consumer products. While there are certain doubts as to the precise scope of the Regulation, the EC has indicated in its Q&A published on 27 November 2024 that it considers the Regulation to apply to "all types of products, including […] software". The Regulation entered into force on 12 June 2023 and has applied since 13 December 2024.

The European Commission – in its capacity as the EU's competition authority – has identified technology and digital markets as a key sector and has made clear that it is committed to ensuring that these markets are competitive and contestable by using tools available to it such as antitrust enforcement, merger control, and the Digital Markets Act.

  • In July 2024, the Commission – along with the US Department of Justice and Federal Trade Commission, and the UK's Competition and Markets Authority – issued a joint statement on "Competition in Generative AI Foundation Models and AI Products". It outlined certain risks that AI poses to competition, including concentrated control of key inputs such as chips, the entrenchment or extension of market power in AI-related markets by incumbent tech companies with an established presence in the AI supply chain, and partnerships and financial investments between AI startups and established tech companies to access (for example) cloud computing power.
  • In September 2024, the Commission issued a policy brief on "Competition in Generative AI and Virtual Worlds" which outlines the state of competition in the AI sector and sets out the Commission's concerns and possible theories of harm. These include, most significantly, the risk that large incumbent digital players may limit access to key computing infrastructure such as cloud capacity, and the risk that the large players offering AI foundation models may "use their market power to limit choice or distort competition in downstream markets" – for example, through leveraging, tying, refusals to supply, and so on. To ensure that the AI sector remains competitive and contestable, the Commission is taking active steps by monitoring:
  • Vertical agreements between Original Equipment Manufacturers and AI developers for the pre-installation of AI models on mobile devices, which may potentially make it more "difficult for other foundation models to be accessed or pre-installed on those devices" (see policy brief at page 6).
  • Mergers and acquisitions. The Commission has been exploring the possibility of reviewing transactions which do not meet the EU merger control thresholds but which may nonetheless be examined under the Article 22 procedure of the European Union Merger Regulation (EUMR) (see, for example, NVIDIA/Run:ai below). It has also been considering whether investments and partnerships between large digital companies and generative AI developers should be assessed under the EU's merger control rules (see, for example, Microsoft/OpenAI and Microsoft/Inflection below, and note the policy brief at page 6 and the speech by EVP Margrethe Vestager at the European Commission workshop on "Competition in Virtual Worlds and Generative AI" on 28 June 2024).

Algorithmic collusion is also on the Commission's radar. In its Guidelines on Horizontal Cooperation Agreements, the Commission has set out two key principles for the treatment of pricing algorithms. Firstly, if the "pricing practices are illegal when implemented offline, there is a high probability that they will also be illegal when implemented online". And secondly, "firms involved in illegal pricing practices cannot avoid liability on the ground that their prices were determined by algorithms. Just like an employee or an outside consultant working under a firm's "direction or control", an algorithm remains under the firm's control, and therefore the firm is liable even if its actions were informed by algorithms." (Horizontal Cooperation Guidelines at paragraph 379).

While the Digital Markets Act does not explicitly refer to AI as a core platform service, EVP Margrethe Vestager noted at the European Commission workshop on "Competition in Virtual Worlds and Generative AI", on 28 June 2024, that the obligations under the Act will also apply to AI where it is “embedded in designated core platform services such as search engines, operating systems and social networking services”.

  • On 1 March 2024, the European Patent Office implemented updated "Guidelines for Examination" that introduce stricter sufficiency-of-disclosure requirements for AI-related inventions. The guidelines emphasise that AI models must be described in sufficient detail to enable a skilled person to reproduce the claimed invention's technical effect without undue burden. While there is no obligation to disclose the specific training dataset itself, its characteristics must be adequately described where they are necessary to achieve the claimed technical effect.
  • As of 1 July 2024, the European Union Intellectual Property Office (EUIPO) established an "Executive Advisory Committee" focused on AI, inclusivity, and sustainability. In alignment with the AI Act, the EUIPO is expected to issue specific guidelines addressing the implications of AI for intellectual property, aiming to ensure compliance with EU copyright law by requiring AI companies to establish policies that respect intellectual property rights, including mechanisms for rights holders to opt out of text and data mining.
  • On 7 February 2025, the European Copyright Society published an opinion examining the challenges that generative AI poses to copyright law and the AI Act. The Society highlighted several key areas that require urgent attention from the European Union, including the scope of the text and data mining exception, transparency requirements, rights holders' ability to opt out under the AI Act, and the fair remuneration of authors and performers. As an independent organisation of copyright experts, the Society emphasises the need for clearer guidelines to ensure a fair balance between fostering AI innovation and protecting intellectual property rights, particularly concerning research exemptions and the interplay between existing copyright regulations.

Explore the latest landmark rulings as AI-related disputes make their way through the courts.

Ongoing

SOMI v TikTok and X. In February 2025, the Dutch Foundation for Market Information Research (SOMI), a Dutch non-profit organisation that focuses on privacy, data protection, and digital rights, brought four collective actions in Germany against X and TikTok, including under the AI Act. Amongst other things, SOMI claims that TikTok manipulates young users by using addictive design to maximise engagement, and so falls within the AI Act's prohibition on manipulative AI. Similar claims seem to have been made in relation to X's alleged violations of the AI Act.

Concluded

Microsoft/OpenAI. While the European Commission did not open a formal merger investigation in connection with Microsoft's investment in OpenAI and the firing and subsequent re-hiring of OpenAI's CEO, it considered whether the arrangement could be reviewable under the EU Merger Regulation. Ultimately, the Commission found that Microsoft did not acquire control of OpenAI on a lasting basis and therefore decided not to review the partnership under EU merger control rules. However, the EU is still considering whether the transaction might give rise to antitrust concerns related to exclusivity.

Microsoft/Inflection. Unlike in Microsoft/OpenAI, the Commission considered that this transaction involved “all assets necessary to transfer Inflection's position in the markets for generative AI foundation models and for AI chatbots to Microsoft”, and treated “the agreements entered into between Microsoft and Inflection as a structural change in the market that amounts to a concentration as defined under Article 3 of the EUMR”. While the transaction did not meet the turnover thresholds for review under the EU merger control rules, seven Member States referred it to the Commission under the Article 22 referral mechanism of the EU Merger Regulation. However, following the General Court's ruling in Illumina/Grail, the Member States withdrew their referral request and the investigation did not proceed any further.

NVIDIA/Run:ai. In an important development in December 2024, the European Commission cleared NVIDIA's acquisition of Run:ai, despite the transaction not triggering a notification under the EU's merger control rules. The transaction was notified in Italy after the Italian Competition Authority exercised its "call-in" powers (the power to review transactions that do not meet the turnover thresholds but which may pose a concrete risk to competition and meet the other conditions set out in the Italian Competition Act), and the Authority then referred the matter to the Commission. While press reports suggest NVIDIA has challenged the Commission's power to review this transaction before the EU's General Court, the transaction and subsequent developments before the EU courts are likely to provide a road map for the treatment of transactions and partnerships in the AI sector.


Key contacts

Kyriakos Fountoukakos – Managing Partner, Competition Regulation and Trade, Brussels

Dr Morris Schonberg – Partner, Brussels

Nika Nonveiller – Associate, Brussels

Giulia Maienza – Senior Associate (Italy), London

Duc Tran – Of Counsel, London

Miriam Everett – Partner, Global Head of Data Protection and Privacy, London

Pietro Pouché – Partner, Milan

