In 2024, the EU adopted the world's first comprehensive AI-specific legal framework – the AI Act. The AI Act takes a risk-based, functional approach, under which the degree of regulation depends on the potential risks that particular AI functions may pose to human health, safety and human rights. It is supplemented by civil liability and product safety legislation, which provide additional safeguards ensuring safe AI use. Finally, the European Commission (EC) has been proactively exploring the application of competition law rules to the AI sector, recognising the importance of maintaining a competitive and fair market environment as technology and digital markets continue to evolve.
The EU’s approach to artificial intelligence was initially set out in 2018 in the European AI Strategy, following which the European Commission presented its 2021 AI package that consisted of:
The Commission further launched an AI innovation package that sets out measures to support European startups and SMEs in the development of trustworthy AI.
A key pillar of the European AI Strategy is a human-centric and trustworthy AI ecosystem which creates a safe and innovation-friendly environment for users, developers, and deployers. To contribute to building trustworthy AI, the Commission has proposed three key interrelated legal initiatives:
The Commission's 2025 Work Programme outlines its ambition to boost competitiveness, enhance security and bolster economic resilience in the EU. This includes a series of Omnibus packages designed to simplify EU policies and laws, which EC Vice-President for Tech Sovereignty, Security and Democracy announced would include a package intended to address the overlap between the EU AI Act, the Digital Services Act, the Digital Markets Act and the General Data Protection Regulation (GDPR). A proposal for a Cloud and AI Development Act was also announced, as part of the AI Contingent Action Plan which aims to capitalise on the opportunities provided by AI.
Helpful resources include the European Commission's website on the European approach to artificial intelligence, which provides comprehensive information about the EU's strategy and policies on AI, as well as the important milestones.
The AI Act is the EU's horizontal regulatory framework for all AI systems. Broadly speaking, it adopts a functional "risk-based" approach, with the degree of regulatory intervention depending on the function of the AI, that is, the use to which it is to be put. The regulatory obligations are imposed on providers (developers) and deployers (users) of AI systems and apply to operators located both within and outside the EU, so long as the output from the AI system is used in the EU.
The different regulatory categories under the AI Act are as follows:
From 2 February 2025, all providers and deployers of AI systems have been obliged under the AI Act to ensure a sufficient level of AI literacy of their staff dealing with those AI systems. The EU AI Office published a living repository of AI literacy best practices on 4 February 2025.
The AI Act is a voluminous piece of legislation. However, the obligations are articulated for the most part in terms of results, rather than operational or technical detail. These specifics will be further set out in due course, including for:
In order to further clarify the application of the AI Act in practice, the AI Office regularly hosts the "AI Pact Events", a full list of which can be accessed here.
Implementation timeline
While the AI Act was adopted in mid-2024, it will be implemented incrementally over the next few years, with the following key start-dates:
Read our key takeaways on the AI Act here.
From consumer protection law to online safety, AI continues to stretch existing legal frameworks. See the latest updates below.
Explore the latest landmark rulings as AI-related disputes make their way through the courts.
SOMI v TikTok and X. In February 2025, the Dutch Foundation for Market Information Research (SOMI), a Dutch non-profit organisation that focuses on privacy, data protection, and digital rights, brought four collective actions in Germany against X and TikTok, including under the AI Act. Amongst other things, SOMI claims that TikTok manipulates young users by using addictive design to maximise engagement, and so falls within the AI Act's prohibition on manipulative AI. Similar claims seem to have been made in relation to X's alleged violations of the AI Act.
Microsoft/OpenAI. While the European Commission did not open a formal merger investigation in connection with Microsoft’s investment and the subsequent firing and re-hiring of OpenAI’s CEO, it considered whether the transaction could be reviewable under the EU Merger Regulation. Ultimately, the Commission found that Microsoft did not acquire control of OpenAI on a lasting basis and therefore decided not to review the partnership under EU merger control rules. However, the EU is still considering whether the transaction might lead to potential antitrust concerns related to exclusivity.
Microsoft/Inflection. Unlike in Microsoft/OpenAI, the Commission considered that this transaction involved “all assets necessary to transfer Inflection's position in the markets for generative AI foundation models and for AI chatbots to Microsoft” and treated “the agreements entered into between Microsoft and Inflection as a structural change in the market that amounts to a concentration as defined under Article 3 of the EUMR”. While the transaction did not meet the turnover thresholds for a review under the EU merger control rules, seven Member States referred the transaction to the Commission under the Article 22 referral mechanism available under the EU Merger Regulation. However, following the General Court’s ruling in Illumina/Grail, the Member States withdrew their request for referral and the investigation did not proceed any further.
NVIDIA/Run:ai. In an important development in December 2024, the European Commission cleared NVIDIA's acquisition of Run:ai, despite the transaction not triggering a notification under the EU's merger control rules. The transaction was notified in Italy after the Italian Competition Authority exercised its "call-in" powers (the power to review transactions that do not meet the turnover thresholds but which may pose a concrete risk to competition and meet the other conditions set out in the Italian Competition Act), and the Authority then referred the matter to the Commission. Press reports suggest NVIDIA has challenged the Commission's powers to review this transaction before the EU's General Court; this transaction and subsequent developments before the EU Courts will likely set out a roadmap for the treatment of transactions and partnerships in the AI sector.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2025