
Recent developments in AI regulation make it clear that, for now, the burden and risk of regulating this exciting but difficult new area in the UK will fall squarely on existing sectoral regulators, including regulators who will not necessarily have significant technical expertise in this high-tech space.

We look at some of those developments and consider the difficulties they pose for regulators, as well as the implications for those subject to regulation, who will need to keep a close eye on their regulator(s) over the coming months.

Regulators' role in broader AI regulation

In stark contrast to the recent EU AI Act, the UK Government's response to its AI Regulation White Paper made it clear that, at least for now, a light-touch, sector-led, pro-innovation approach remains the favoured course (see our more detailed blog post here). It placed existing regulators front and centre of this approach. Multiple sectoral regulators, including the CMA, FCA, Ofcom, Ofgem, ONR, HSE and others, have been tasked with outlining their strategic approach to AI by the end of April. In addition to outlining steps they are already taking in line with the principles-based approach set out by the Government, they should also be analysing AI-related risks in their sectors, considering their own current capability to address AI and producing a forward-looking plan for the next 12 months. This is quite a task in a short timeframe for regulators that have not yet engaged fully with the impact AI may have in their area.

To assist, the Department for Science, Innovation & Technology ("DSIT") has produced its own initial guidance to regulators on Implementing the UK's AI Regulatory Principles. It sets out considerations to which regulators may wish to have regard when developing their own tools and guidance on AI, but is not intended to be prescriptive, as these issues are ultimately left to regulators' discretion. DSIT notes that it will perform some centralised functions, including supporting coherence across regulators and reviewing potential gaps. However, DSIT is clear that it will not be taking responsibility for regulation: it remains the responsibility of regulators to develop their own guidance, and they are trusted to know their remits best.

Regulators are told they can publish policy material on AI that is consistent with their respective regulatory objectives, setting out clearly and concisely the outcomes they expect, so that regulated firms can meet those expectations through their actions. They should consider the nature and severity of AI risk in their context and the audience(s) they are targeting, as well as relevant guidance published by other regulators.

Ofcom has already published its strategic approach to AI and is probably the regulator most familiar with this area given its technology focus. It supports the Government's high-level AI principles and explains that its focus is on the outcomes for consumers and markets rather than the underlying technology used. Ofcom sees potential AI issues in a number of areas it already regulates, giving the Online Safety Act as a recent high-profile example. However, even for Ofcom, which has in-house technical expertise (including AI experts) and which has clearly already spent time considering AI, the document is fairly generic. The only real direction to industry is an encouragement to adopt and embrace the AI principles where possible and to maintain open dialogue.

Other regulators, whose employees may be technical experts in their particular sector but who do not have a technology focus, are now expected to turn their hands to something completely new, with only generalised, high-level guidance from DSIT. It will be interesting to see what level of detail regulators are able to produce in their own tools and guidance over the coming months; the expectation is that their guidance will be similarly high-level and principles-based, so as not to stifle the innovation and agility championed by the Government.

For those subject to such regulation, and themselves getting to grips with AI, there is a risk that they will be expected to meet uncertain and untested standards. If and when any real detail emerges, it seems likely that different regulators will take varying approaches, resulting in discrepancies in regulation between sectors. For organisations that fall within the remit of more than one regulator, it may prove impossible to comply with competing regulatory approaches. Those subject to sectoral regulation should consider ongoing dialogue and engagement with their regulator as they develop their own use of AI.

As with all regulatory guidance, the risk of challenge by way of judicial review will hang over regulators, and offers a potential tool for the organisations they supervise to ensure that this new area of regulation develops fairly and in accordance with the protections and safeguards that public law principles provide. Whereas regulators can often feel confident that the courts will give due deference to their expert decision-making in their particular area, the same may not be true of their attempts to regulate the use of technology of which they may have only limited experience and knowledge.

Public sector use of AI

At the same time, regulators and the broader public sector have to practise what they preach in their own use of AI.

Earlier this year, the Government published the Generative AI Framework for HM Government. It seeks to guide public sector organisations in their use of generative AI, containing both abstract advice (its second principle is that civil servants should use generative AI lawfully, ethically and responsibly) and technical detail on how generative AI models work.

Members of this team have previously considered some of the legal risks of the use of generative AI by public bodies in an expert Q&A for Practical Law, available here.

The 2024 Framework replaces earlier, briefer guidance to civil servants on the use of generative AI, which was quite limited in scope and seemed primarily aimed at civil servants using generative AI on an ad hoc basis, to assist them with tasks such as drafting policy papers. The 2024 Framework is more expansive, covering not only ad hoc use but also the use of generative AI in larger projects to increase productivity, reflecting the anticipated growth in public sector use of AI. Indeed, recent reports from both the National Audit Office and the Turing Institute indicate that AI is already in notable use across the public sector.

The 2024 Framework sets out 10 broad principles for the use of generative AI as follows:

  • Principle 1: know what generative AI is and what its limitations are.
  • Principle 2: use generative AI lawfully, ethically and responsibly.
  • Principle 3: know how to keep generative AI tools secure.
  • Principle 4: have meaningful human control at the right stage.
  • Principle 5: understand how to manage the full generative AI lifecycle.
  • Principle 6: use the right tool for the job.
  • Principle 7: be open and collaborative.
  • Principle 8: work with commercial colleagues from the start.
  • Principle 9: have the skills and expertise that you need to build and use generative AI.
  • Principle 10: use these principles alongside your organisation's policies and have the right assurance in place.

Of these, the most relevant for holding public bodies to account under public law are Principles 2 and 4. Principle 2 touches on some of the significant risks in the use of generative AI in decision-making, including fairness, discrimination and accountability. Principle 4 states that an appropriately trained and qualified person needs to review AI tool outputs, and that all decision-making into which generative AI outputs have fed needs to be validated. These requirements tie into accountability and transparency: there needs to be human review to ensure human accountability for the decisions taken, and, at a minimum, the system needs to be transparent to those conducting that review.

On the crucial issue of transparency, the Government has announced that the Algorithmic Transparency Recording Standard (ATRS) will become mandatory for government departments, with a further roll-out to other public sector organisations in due course. This should significantly improve transparency, albeit from a poor starting point, as the ATRS has not been widely used during its voluntary phase. Greater transparency should allow greater scrutiny both from society as a whole and from those specifically affected by the use of AI in their interactions with public bodies. For example, the Advertising Standards Authority has publicised its increasing use of AI and plans to scale this use up significantly in its monitoring and compliance work. This level of visibility should be standard practice for regulators. On a similar theme, the Cabinet Office has also released a separate note on improving transparency of AI use in procurement, again aimed at public bodies.

Comment

The UK seems to be sticking to its approach of avoiding centralised specialist regulation of AI for now, but some structure and certainty will be needed as both technological developments and public interest and concern grow at pace. Existing sectoral regulators appear to be the identified answer to this conundrum. Often already under-resourced and with enough on their plates, they are now being asked to do a great deal more in a new area, with only rather generalised guidance to assist and direct them. The potential for inconsistency across sectors, or, more concerningly, for rushed and unclear guidance, is clearly significant and may pose real challenges to commercial organisations in those areas that want to take advantage of the opportunities presented by AI.

At the same time, expectations around responsible and transparent use of AI by regulators themselves, and by other public bodies, are becoming more prescriptive. Regulators may feel caught between a rock and a hard place: required to take an innovative and agile approach to regulating others' use of AI, while being held to high standards and needing to be cautious about using such developments to aid their own functions and ever-increasing workloads.

Those subject to regulation in these areas should keep a close eye on their regulator, not only for the strategic plans it produces to govern the use of AI in its sector, but also to ensure it is complying with its own obligations.

Andrew Lidbetter, Consultant, London
Nusrat Zar, Partner, London
Jasveer Randhawa, Professional Support Consultant, London
Daniel de Lisle, Associate, London
