We at HSF were delighted recently to host a panel discussion for JUSTICE on the regulation of AI in the UK, a topic of particular interest to us in the public law team, as regular readers of our blog will know.

Listening to the discussion, it is clear that AI really is different. There have been very few developments in our adult lives that have sparked the kind of first-principles debates that are now taking place:

  • Should we regulate at all or leave it to market forces?
  • What impact does regulation actually have? Is it pro- or anti-innovation?
  • What should regulation and Government intervention be trying to achieve?

The general expectation is that the Labour government will introduce legislation to meet its manifesto commitment of "binding regulation on the handful of companies developing the most powerful AI models". What is unclear is how broad the scope of that legislation or regulatory framework will be, or how detailed. Could it really be the case that there would be legislation aimed at private commercial organisations that develop and sell AI technology, but none governing public authorities who procure that technology and use it to make high-stakes, high-impact decisions on issues such as immigration, access to benefits, education and healthcare? Would that be justifiable given the imbalance of power between the state and citizens? This would seem particularly counter-intuitive if the argument for regulating just a handful of big players is the concentration of power in their hands.

At the moment, however, recent developments seem to be going the other way. Lord Clement-Jones has introduced a private members' bill in the House of Lords aimed exclusively at public body use of AI (the Public Authority Algorithmic and Automated Decision-Making Systems Bill).

Earlier this month the UK signed the first legally binding international treaty on AI – the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This is certainly a welcome development, particularly given some of the international players that have already signed up, notably the EU and the US. We could of course get into the limitations of unincorporated treaties in domestic law, something public lawyers know all too well, but instead we should focus on the fact that it at least demonstrates co-ordination and intent. It is necessarily a high-level, principles-based document, but it does cover aspects such as remedies and procedural safeguards in addition to describing the principles that should be applied within the lifecycle of AI systems. Interestingly, this treaty also puts the main focus on public sector use of AI: Article 3 distinguishes between the obligations on public authorities and private actors acting on their behalf (to apply the Convention) and those relating to other private actors (a watered-down version that requires States to address risks and impacts in a manner conforming with the object and purpose of the Convention).

In a more concrete domestic development, the Department for Science, Innovation and Technology has reportedly confirmed that the Algorithmic Transparency Recording Standard is now mandatory for all Government departments, imposing at least a minimum standard of transparency in relation to Government use of AI.

The idea of having duties that fall only on public authorities is well established and understandable in other contexts, such as the Human Rights Act and the Equality Act. It is less clear why it would be necessary or a good idea in relation to AI. Many argue that public sector use is capable of causing the most prejudice, and that it is the use of AI in ways that directly impact citizens' lives that should be regulated, rather than its design and development. The reality, though, is that most public authorities will be purchasing and using technology from private companies and will no doubt want to incorporate their own obligations into their procurement, thus effectively integrating those duties into the supply chain. The very nature of AI may suggest that a single regulatory framework, applicable regardless of public or private status, is the best way to achieve clarity and a level playing field. Public bodies will still have their additional existing legal obligations in public law or statute to provide the extra layer of protection that may be needed.

Clearly there is a lot to discuss and to think about as the UK plots its course through the minefield of how to deal with AI and its role in our future society. Opinions differ vastly depending on your background, perspective, career and how you think AI might end up impacting your life. We are still at the stage where there are more questions than answers, but that may be no bad thing.

Where we end up is ultimately a policy decision for our democratically accountable decision makers. What is vital is that the right questions are asked along the way, and that a broad range of views is sought and taken into account. Regulation is undoubtedly at its most effective when it is a partnership between the public sector and industry. Both sides will need to feed into the process and share expertise to achieve a regulatory framework that is clear, realistic, workable, proportionate and flexible enough to deal with an ever-changing landscape. Both sides also need to be able to hold each other to account.

We watch and wait with interest to see how the regulation of AI develops. For now, we leave you with a thought from Dr Susie Alegre at the JUSTICE event which perfectly encapsulates why public law and AI overlap: "It can't be fair and reasonable if you can't tell me why it happened".


Key contacts

Nusrat Zar, Partner, London
James Wood, Partner, London
Jasveer Randhawa, Professional Support Consultant, London