The UK currently has no dedicated AI regulation, instead opting to rely on existing legislation (in domains such as data protection, human rights, employment, competition, financial services, IP, and consumer rights), case law (particularly on IP and data), and guidance from regulators including the FCA, CMA, ICO & Ofcom (brought together in the Digital Regulation Cooperation Forum) and various others such as the Equality and Human Rights Commission and the Medicines and Healthcare products Regulatory Agency. AI-specific regulation is starting to appear in various draft bills, while more comprehensive AI regulation is expected that targets the developers of the most powerful AI 'foundation' models.

The UK's AI policy, set out by the former Conservative government in its policy white paper (here and here), has been carried forward by the current Labour government. The white paper established a cross-sector, outcomes-based framework for regulating AI, setting out five principles for existing regulators to interpret and apply within their remits to drive safe, responsible AI innovation (read our summary of the white paper here and the government response here):

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

The government asked regulators in February 2024 to publish updates on their strategic approach to AI, to increase transparency on how they are implementing the white paper principles. The Department for Science, Innovation and Technology (DSIT) – the centre for digital expertise and delivery in government – published regulators' responses in May 2024. Read our summary of the responses here.

On 13 January 2025, the Labour government unveiled the AI Opportunities Action Plan, alongside the government's response accepting all 50 of its recommendations in full or in part. The Action Plan consolidates the UK's pro-innovation approach to AI, prioritising the acceleration of AI adoption and investment while maintaining a hands-off approach to regulation. The goal is to position the UK as "an AI maker, not an AI taker", with the government planning to deliver the commitments by 2027.

Helpful resources include the AI Standards Hub, which is dedicated to the standardisation of AI technologies and contains a library of hundreds of AI standards and policies, and the DRCF AI & Digital Hub, which provides innovators with a means to obtain free and informal advice on cross-regulatory queries from the FCA, CMA, ICO, and Ofcom.

There is no specific AI regulation in the UK.

A UK AI Bill was expected following the first King's Speech in July 2024 but did not appear. The current Labour government has indicated that it intends to introduce limited AI regulation targeting developers of the "most powerful" foundation models in due course. It reinforced this intention in the Action Plan, confirming that DSIT will consult on proposed legislation "to provide regulatory certainty".

From consumer protection law to online safety, AI continues to stretch existing legal frameworks. See the latest updates below.

The Information Commissioner's Office (ICO), the UK data protection regulator, has published guidance on how to apply existing data laws to AI, and an AI toolkit to help organisations identify and mitigate risks during the AI lifecycle.

Key data protection and digital information legislation consists of the following: 

  • The UK General Data Protection Regulation (UK GDPR) (complemented by the Data Protection Act 2018 (DPA) and e-privacy legislation) requires organisations processing personal data to comply with principles including lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, security, and accountability.
  • The Data (Use and Access) Bill aims to reduce barriers to innovation and improve access to personal data for research and public service purposes. It received its first reading in the House of Lords in October 2024 and its second reading on 19 November 2024. It will amend parts of the UK GDPR but does not completely overhaul UK data protection and e-privacy law. It follows the UK's pro-innovation, light-touch, principles-based approach to the regulation of AI, with a lack of AI-related protections (e.g., in relation to data scraping) and a relaxation of the Article 22 UK GDPR prohibition on automated decision-making. Read our summary of the Bill here and here.

During 2024, the ICO ran a consultation series on generative AI (GAI) & data protection, covering the lawful basis for web scraping to train GAI models, purpose limitation in the GAI lifecycle, accuracy of training data and model outputs, engineering individual rights into GAI models and allocating controllership across the GAI supply chain. The ICO reported on the outcomes of the consultation series and detailed its policy position in December 2024. 

Other ICO guidance includes the eight questions that organisations developing or using AI systems that process personal data need to ask themselves. The ICO has also launched an Innovation Advice service for AI innovators' queries.

On 11 February 2025, the ICO, along with four other data protection authorities, signed a Joint Statement at the Paris AI Action Summit on building trustworthy data governance frameworks to encourage the development of privacy-protective AI. Read our guide on AI and data protection here.

Bias and discrimination arising through the use of AI systems is governed by the Equality Act 2010, which prohibits direct or indirect discrimination on the grounds of protected characteristics such as age, sex, race, religion or belief, sexual orientation, or disability, whether in the workplace, against customers, or against other service users. If an individual complains that a decision made using an AI system was discriminatory, the burden shifts to the deployer of the AI system to prove that no unlawful bias or discrimination played a role in the decision affecting the individual or, in the case of indirect discrimination, that any disadvantage can be justified.

Organisations that wish to use AI systems in the workplace must also comply with a patchwork of employment-related laws. In addition to the UK GDPR and the DPA (see the section above on data protection and digital information), which apply equally to workers, workers also have rights under the European Convention on Human Rights in respect of AI systems that involve monitoring or surveillance of employee activities. The use of AI system outputs to make decisions about workers, or the use of AI systems to achieve workplace efficiencies, will generally need to comply with employment law obligations, including, in certain circumstances, providing information about the AI system to a worker or their representative.

The Artificial Intelligence (Regulation and Employment Rights) Bill was proposed by the Trades Union Congress in April 2024. It aims to provide further protections for workers impacted by AI systems making "high risk" determinations. Its proposals include:

  • greater transparency obligations;
  • workplace risk assessments prior to deploying AI systems;
  • a statutory right to consultation prior to deployment of high-risk AI systems; and
  • a right for unions to receive data about union members used to make AI workplace decisions.

The Employment Rights Bill introduced by the Labour government in October 2024 did not contain provisions relating to AI and equality or employment law, though its explanatory memorandum referenced a forthcoming consultation on potential legislation relating to workplace surveillance technologies.

Read our series on AI and employment law here.

To date, the Intellectual Property Office (IPO), the official UK body responsible for intellectual property (IP) rights, has provided limited recommendations in respect of AI.

After the government accepted the recommendations of Sir Patrick Vallance's March 2023 review on pro-innovation regulation for digital technologies, the UK IPO established a working group of creative industries and AI developers to agree a voluntary code of practice, intended to overcome barriers faced by AI businesses in accessing copyright and database right materials. However, given the strength of copyright owners' rights under the existing legislation, the working group failed to reach agreement and was disbanded in February 2024.

To address the impasse, the government is now considering responses to a wide-ranging consultation on the issues that have emerged around the treatment and potential infringement of copyright through the training and use of AI systems. The key legislation is as follows:

  • The Copyright, Designs and Patents Act 1988 (CDPA), under which aspects of AI, like source code, may attract copyright protection, and others, such as curated training datasets, may attract database protection. Some exceptions exist, notably for text and data mining, though this is limited solely to non-commercial research.
    • In 2022, the then-Conservative government considered broadening the scope of the exception to permit mining for commercial purposes, but the proposal was dropped in early 2023 following opposition from the creative industries. Read our summary here.
    • On 17 December 2024, a new government consultation proposed, amongst other things, a text and data mining exception similar to that under Article 4 of the Digital Copyright Directive, permitting data mining for any purpose, including commercial ones. The new proposal echoes the dropped 2022 proposal, but also poses an associated question about the steps copyright owners would need to take to effectively opt out if the exception is introduced. It also addresses other issues arising under the CDPA in the training and deployment of AI. Read our summary of the new consultation here.
  • The Copyright and Rights in Databases Regulations 1997, which offer another form of database protection distinct from database copyright, subsisting where a substantial investment has been made in obtaining, verifying, or presenting the contents of the database. Both forms of protection may coexist in a single database.
  • The Patents Act 1977 and Patents Rules 2007. Patenting AI technologies can be difficult, as mathematical methods and computer programs "as such" are excluded subject matter. However, the use of AI algorithms to obtain an advantageous result may be patentable. The UK IPO published guidelines for examining patent applications relating to AI inventions in May 2024; these are suspended pending the IPO's consideration of the Court of Appeal's judgment in Emotional Perception AI [2024] EWCA Civ 825. The Supreme Court has given leave for a further appeal, so this case has not yet reached a conclusion.

Read our series on AI and IP here.

The Competition & Markets Authority (CMA) – the principal competition and consumer protection authority in the UK – has identified technology as an area of strategic focus in its 2024/25 Annual Plan, indicating that it will be "vigilant of any competition concerns". In its AI strategic update in April 2024, it noted that these concerns include AI exacerbating or taking advantage of existing problems and weaknesses in markets (particularly for recommendations, pricing, or personalisation), and the most powerful AI models being developed by a small number of the largest incumbent technology firms, giving them the ability and incentive to shape the development of markets in their own interests, protect existing market power, and extend it into new areas.

  • The CMA published a review of foundation models in September 2023 setting out principles for development of foundation models, including accountability to consumers, access to key inputs, interoperability, fair dealing and transparency. This was followed by an update paper on potential competition risk posed by foundation models in April 2024.
  • In July 2024, the CMA and competition authorities from the EU and US published a joint statement on competition in generative artificial intelligence, foundation models & AI products which noted that the rapid evolution of GAI puts us at a "technological inflection point" requiring vigilance from competition authorities to guard against tactics "that could undermine fair competition".
  • The CMA has reviewed several AI partnerships under its merger control regime, including Microsoft & Mistral, Amazon & Anthropic, Microsoft & Inflection, Alphabet & Anthropic, and Microsoft & OpenAI (ongoing). None of the AI partnerships investigated by the CMA to date has met the threshold for referral to an in-depth Phase 2 review, and there is currently limited clarity on the CMA's approach to the substantive assessment of their impact on competition. Microsoft/Inflection is the only merger decision at this stage that includes any substantive analysis, with the CMA focusing on two theories of harm: the loss of competition in the development and supply of consumer chatbots, and the loss of competition in the development and supply of foundation models.

Read our guide on the CMA's review of AI partnerships under its merger control rules here.

The Digital Markets, Competition and Consumers Act (DMCCA) was enacted in May 2024 and came into force on 1 January 2025, with consumer law changes taking effect from April 2025. Key points relevant to AI include:

  • Enhanced CMA Consumer Enforcement powers: The DMCCA grants the CMA new powers to directly enforce consumer protection laws, with significant financial penalties for non-compliance. Some of the new consumer rights, such as the ban on fake reviews, will be very relevant as consumers could be exposed to false and misleading information, either due to foundation models and AI systems generating such information, or because AI-based technologies enable bad actors to create false or misleading information more easily.

The CMA has launched investigations into Google and Amazon over concerns that they may not have been doing enough to prevent fake reviews. The CMA recently accepted undertakings from Google which include a requirement to sanction businesses that boost their star ratings with fake reviews and individuals who write fake reviews for UK businesses.

  • Digital Markets Regime: This ex-ante regulatory regime is designed to proactively shape the behaviour of undertakings designated with strategic market status (SMS). Once an undertaking is designated with SMS, the CMA will have the power to set out how it is expected to behave in respect of the activities for which it is designated, including AI deployment, through the imposition of targeted conduct requirements such as fair dealing, choice, and transparency.
  • The CMA will also be able to make pro-competition interventions (PCIs) on undertakings designated with SMS, which will be aimed at addressing the root causes of an undertaking's entrenched market power. The CMA has launched investigations under the new regime into Google's search and search advertising services, and into Apple's and Google's respective mobile ecosystems.

The Office of Communications (Ofcom), the UK's communications regulator, published its strategic approach to AI in March 2024, aligning its approach to online safety with the five principles set out in the government's AI policy white paper. Ofcom is seeking to address three cross-cutting risks: synthetic media (deepfakes), personalisation, and security and resilience.

In December 2024, Ofcom published its Illegal Harms Codes of Practice following its consultation on Protecting people from illegal harms online, which included measures to mitigate synthetic media and personalisation risks. Specifically, the Codes proposed measures on accountability and governance to manage deepfakes, and measures recommending the collection of safety metrics when testing recommender systems that could cause personalisation harm, including AI-driven recommender systems. On security and resilience, Ofcom is monitoring the use of advanced AI systems to develop and deploy tools that create cyber risks for networks, and how security risks may arise from integrating generative AI into systems.

Under the Online Safety Act (OSA), generative AI chatbot tools and platforms are regulated with respect to online harm, as underscored in Ofcom's Open letter to UK online service providers regarding Generative AI and chatbots. The OSA places duties on providers of user-to-user services and search services to protect users from illegal content and certain "lawful but harmful" content online. It requires providers to consider how their services may expose users to illegal harmful content by carrying out risk assessments and taking steps to mitigate the risks identified. In addition, the sharing of AI-generated 'deepfake' intimate images is a priority offence under the OSA, and the government has re-introduced the proposal for an offence of making sexually explicit deepfakes as part of its new Crime and Policing Bill.

Read our guide to the Online Safety Act here.

The Advertising Standards Authority (ASA), the UK advertising regulator, has advised that its Committee of Advertising Practice Codes are generally "media-neutral": advertisements for AI products or using AI-generated content must not breach the existing Codes, particularly with respect to harm and offence and misleading advertising.

The ASA has advised against AI-washing, i.e. exaggerating the AI capabilities of a product or service, which is likely to amount to misleading advertising. To comply with the UK Code of Broadcast Advertising and the UK Code of Non-broadcast Advertising and Direct & Promotional Marketing, companies should take care not to exaggerate claims about a product's AI features and should make clear where the benefits of AI apply only to certain users or in certain circumstances. Companies must be able to substantiate any claims they make about the AI functionalities in their products.

The ASA has also advised on advertisements using AI-generated content, affirming that if used to make unsubstantiated efficacy claims, advertisements could mislead consumers in breach of the Codes. In October 2023, it upheld a complaint against Codeway for breaches of UK advertising rules. The complaint concerned a paid-for Instagram post for an app showing an extremely blurry image next to a sharp and clear image, with the caption 'Enhance your Photos with AI'. The ASA ruled that consumers would likely understand the ad to be an objective claim about the app's capabilities and that, in the absence of evidence to show that the results were achievable, the ad was misleading by way of exaggeration.

The National Security and Investment Act 2021 came into force on 4 January 2022. It introduced a standalone investment screening regime on national security grounds, which has resulted in a sea-change in the UK regulatory environment. It applies – at least in principle – equally to both non-UK and UK investors.

Mandatory notification obligations apply to certain transactions where the target company is engaged in specified activities in one or more of 17 sensitive sectors, including AI and advanced robotics (definitions of which are very detailed and broad). Where mandatory notification is required, the transaction cannot be completed prior to clearance. The Secretary of State also has broad powers to call in a wider range of transactions on national security grounds in any sector, provided that at least "material influence" is acquired (a low threshold that could in certain circumstances capture an acquisition of a shareholding below 15%, for example when combined with board representation).

The Investment Security Unit (ISU) has quickly established itself as one of the most active FDI authorities globally, receiving around 900 notifications each year. Most notified transactions (around 95%) are cleared unconditionally within an initial 30-working-day review period. No final orders prohibiting transactions or imposing conditions on clearance have been issued to date in the AI or advanced robotics sectors, but the most recent data indicates that 15% of call-in notices initiating an in-depth investigation involved the AI sector, and 15% involved the advanced robotics sector. The ISU proactively monitors market intelligence and regularly initiates investigations into non-notified transactions.

Read our insights on the latest trends in enforcement under the NSI regime here.

Explore the latest landmark rulings as AI-related disputes make their way through the courts.

In progress

Getty Images v Stability AI [2023] EWHC 3090 (Ch) is pending trial in summer 2025 before the High Court and concerns allegations of intellectual property rights infringement (copyright, database, and trade mark rights) arising from Stability AI's alleged unauthorised use of Getty Images' content, through "online scraping", to train the model underlying Stability AI's systems. Getty alleges that Stability AI infringed its rights both in the act of training and in the production of certain outputs from Stability AI's systems that are said to substantially reproduce copyright material exclusively licensed to Getty Images, as well as infringing its registered trade marks. The case has also raised the issue of whether a claim for this type of infringement could include a representative claim covering a common class of copyright owners who had licensed certain copyrights exclusively to Getty Images. As matters stand, the representative claim has been disallowed.

Concluded

Thaler v Comptroller-General of Patents [2024] 2 All E.R. 527. The Supreme Court rejected Dr Thaler's applications for patents naming an AI system (DABUS) as the inventor of two inventions, holding that the wording of the Patents Act 1977 requires a human inventor.

Reaux-Savonte v Comptroller-General of Patents, Designs & Trade Marks [2021] EWHC 78 (Ch). The applicant's patent application for an 'AI Genome' (a data structure mirroring the structure of the human genome) was rejected as merely data structured in a modular, hierarchical, and self-contained manner, and so excluded from patentability as a computer program. The High Court upheld the rejection.

Comptroller-General of Patents, Designs and Trade Marks v Emotional Perception AI Ltd (appeal) [2024] EWCA Civ 825. The Comptroller rejected a patent application for a neural network that provided media file recommendations on the basis that it was excluded as a "computer program…as such". The High Court reversed this decision. However, the Court of Appeal found in favour of the Comptroller: the neural network was a computer program and made no technical contribution that would justify the grant of a patent. Emotional Perception indicated an appeal to the Supreme Court, which has granted leave for that appeal to go ahead.


Key contacts

Nick Pantlin – Partner, Co-Head of Technology, Digital & Sourcing practice, London
Andrew Moir – Partner, Intellectual Property and Global Head of Cyber & Data Security, London
Heather Newton – Of Counsel, London
Sian McKinley – Of Counsel (Employed Barrister), London
Veronica Roberts – Partner, UK Regional Head of Practice, Competition, Regulation and Trade, London
James Balfour – Senior Associate, London
Duc Tran – Of Counsel, London
William Garton – Senior Associate, London
