The UK currently has no dedicated AI regulation, instead opting to rely on existing legislation (in domains such as data protection, human rights, employment, competition, financial services, IP, and consumer rights), case law (particularly on IP and data), and guidance from regulators. These include the FCA, CMA, ICO and Ofcom (brought together in the Digital Regulation Cooperation Forum) and various others such as the Equality and Human Rights Commission and the Medicines and Healthcare products Regulatory Agency. AI-specific regulation is starting to appear in various draft bills, while more comprehensive AI regulation is expected that targets the developers of the most powerful AI 'foundation' models.
The UK's AI policy, set out by the former Conservative government in its policy white paper (here and here), has been carried forward by the current Labour government. The white paper established a cross-sector, outcomes-based framework for regulating AI, setting out five principles for existing regulators to interpret and apply within their remits to drive safe, responsible AI innovation (read our summary of the white paper here and the government response here): (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress.
In February 2024, the government asked regulators to publish updates on their strategic approach to AI, to increase transparency on how they are implementing these white paper principles. The Department for Science, Innovation and Technology (DSIT) - the centre for digital expertise and delivery in government - published regulators' responses in May 2024. Read our summary of the responses here.
On 13 January 2025, the Labour government unveiled the AI Opportunities Action Plan, alongside the government's response, accepting all 50 of its recommendations in full or in part. The Action Plan consolidates the UK's pro-innovation approach to AI, prioritising accelerated AI adoption and investment while maintaining a hands-off approach to regulation. The goal is to position the UK as "an AI maker, not an AI taker", with the government planning to deliver the commitments by 2027.
Helpful resources include the AI Standards Hub, which is dedicated to the standardisation of AI technologies and contains a library of hundreds of AI standards and policies, and the DRCF AI & Digital Hub, which provides innovators with a means to obtain free and informal advice on cross-regulatory queries from the FCA, CMA, ICO, and Ofcom.
There is no specific AI regulation in the UK.
A UK AI Bill was expected following the first King's Speech in July 2024 but did not appear. The current Labour government has indicated that it intends to introduce limited AI regulation targeting developers of the "most powerful" foundation models in due course. It reinforced this intention in the Action Plan, confirming that DSIT will consult on proposed legislation "to provide regulatory certainty".
From consumer protection law to online safety, AI continues to stretch existing legal frameworks. See the latest updates below.
Explore the latest landmark rulings as AI-related disputes make their way through the courts.
Getty Images v Stability AI [2023] EWHC 3090 (Ch) is currently pending trial in summer 2025 before the High Court and concerns allegations of intellectual property rights infringement (copyright, database, and trade mark rights) arising from the alleged unauthorised use of Getty Images' content by Stability AI through "online scraping" for the training of the model underlying Stability AI's systems. Getty alleges that Stability AI infringed its rights both in the act of training and in the production of certain outputs from Stability AI's systems that are said to substantially reproduce copyright material exclusively licensed to Getty Images, as well as infringing its registered trade marks. The case has also raised the issue of whether a claim for this type of infringement could proceed as a representative claim, covering a common class of copyright owners who had licensed certain copyrights exclusively to Getty Images. Based on the current position, the representative claim has been disallowed.
Thaler v Comptroller-General of Patents [2024] 2 All E.R. 527. The Supreme Court rejected Dr Thaler's appeal against the refusal of patent applications naming an AI system (DABUS) as the inventor of two inventions, holding that the wording of the Patents Act 1977 requires a human inventor.
Reaux-Savonte v Comptroller-General of Patents, Designs & Trade Marks [2021] EWHC 78 (Ch). The applicant's patent application for an 'AI Genome' (a data structure mirroring the structure of the human genome) was rejected on the basis that it was merely data structured in a modular, hierarchical, and self-contained manner, and was therefore excluded from patentability as a computer program. The High Court upheld the rejection.
Comptroller-General of Patents, Designs and Trade Marks v Emotional Perception AI Ltd (appeal) [2024] EWCA Civ 825. The Comptroller rejected a patent application for a neural network that provided media file recommendations, on the basis that it was excluded as a "computer program…as such". The High Court reversed this decision. However, the Court of Appeal found in favour of the Comptroller, holding that the neural network was a computer program and made no technical contribution that would justify the grant of a patent. Emotional Perception has since been granted leave to appeal to the Supreme Court.
Partner, Intellectual Property and Global Head of Cyber & Data Security, London
Partner, UK Regional Head of Practice, Competition, Regulation and Trade, London
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2025