
In November the UK will be hosting the first global summit on the regulation of AI at Bletchley Park. The recent growth of programs such as ChatGPT has prompted much debate around standards for the creation and deployment of artificial intelligence, and even concerns about the future of humanity. But what about the deployment of AI tools by governments and regulators?

Algorithms have been deployed in certain areas of public law decision-making for significantly longer than ChatGPT has been available, but there is a question as to whether the potential benefits and pitfalls of this use – and in particular the use of advanced systems – have been sufficiently debated or whether this issue has simply flown under the radar. Research conducted by Herbert Smith Freehills showed that only 5% of UK consumers are unconcerned about the growing presence of AI in everyday life. Regulated entities may tend to agree if they turn their minds to the issue.

With advanced AI tools expected to become embedded in widely used software, the operation of AI and algorithms will surely contribute more and more to governmental and regulatory decisions. High-value decisions, such as those on key procurement projects, subsidies or development consent for major projects, are unlikely to be made wholly by AI tools for the moment at least, but AI tools are likely to be used as part of the process.

The risks of large language models such as ChatGPT have been debated in the past few months, with high-profile examples of hallucinations, where the model confidently describes events that do not exist. These risks and others demonstrate that the use of AI in public sector decision making is not without its issues. It is vital that those issues are acknowledged, understood and managed to ensure key public law principles for the protection of those dealing with public bodies continue to be upheld.

How AI is already being used, and how it might be used going forward

Algorithms are sets of rules that turn inputs into outputs: for instance, an algorithm might take a transaction as its input and, depending on various data points about that transaction, assign it a risk of being fraudulent as its output. Advanced AI programs use algorithms designed to become more effective at performing the task at hand. When used in the regulatory context, the algorithm or AI program is generally not the sole decision-maker – it is used at a stage in the process, or to guide decision-makers towards areas where their impact may be greater.
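
By way of illustration only, the short Python sketch below shows how even a simple, rule-based algorithm can turn data points about a transaction into a fraud-risk score. The field names, thresholds and weightings are invented for the example and do not reflect any regulator's or firm's actual rules.

    # Illustrative only: a toy rule-based scoring function that turns data points
    # about a transaction (the inputs) into a fraud-risk score (the output).
    # All field names and thresholds are invented for the example.

    def fraud_risk_score(transaction):
        """Return a score between 0 and 1; higher means a higher assessed risk."""
        score = 0.0
        if transaction.get("amount", 0) > 10_000:           # unusually large value
            score += 0.4
        if transaction.get("country") not in {"GB", "IE"}:  # outside usual markets
            score += 0.3
        if transaction.get("hour", 12) < 6:                 # made in the early hours
            score += 0.2
        if transaction.get("new_counterparty", False):      # first-time recipient
            score += 0.1
        return min(score, 1.0)

    # Example: refer transactions scoring above a threshold for human review.
    tx = {"amount": 25_000, "country": "PA", "hour": 3, "new_counterparty": True}
    if fraud_risk_score(tx) > 0.7:
        print("Refer to a human investigator")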

AI has clear advantages for often under-resourced regulators. Algorithms give regulators an opportunity to marshal the large volumes of data they amass. The FCA processes over 1 billion transactions every month through its Market Oversight function. Its web-scraping techniques, aimed at spotting potential harm, scan approximately 100,000 websites every day. Both the FCA and the CMA have data lakes, parts of which feed their respective algorithms. The CMA's DaTa unit provides AI tools for its merger review teams to aid document review, including a tool which indicates how likely it is that a document is about competition-related topics, and one which indicates whether a document discusses activity within the UK. AI is therefore already embedded as a regulatory tool, important in enabling regulators to fulfil their public functions efficiently.

Regulators around the world are increasingly using AI to address the particular challenges they face. In Denmark, public procurement represents 14% of GDP and 24% of the Danish government's total expenditure. The Danish Competition and Consumer Authority has created a tool called Bid Viewer to assist it in screening public procurement bids for potential cartels. The tool appears to be able, for instance, to flag where companies may have agreed geographical areas in which they will not bid against each other, where firms take turns to submit the lowest bid, and where two organisations simply never enter procurement exercises against each other. Such analyses could, on their face, have been completed by a well-resourced team without such tools, but the tools are likely to have made the analyses quicker and to guide analysts towards the situations most likely to be of interest to the regulator.
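
To make this concrete, the Python sketch below – which is purely illustrative and not the DCCA's actual implementation – shows the kinds of checks such a screening tool might run over a simple set of bid records: one for geographic market sharing and one for bid rotation. The record format and the logic are assumptions made for the purposes of the example.

    # Illustrative sketch only (not the DCCA's Bid Viewer code). Bid records are
    # assumed to look like:
    #   {"tender": "T1", "region": "North", "bidder": "A", "price": 100.0}

    from collections import defaultdict
    from itertools import combinations

    def flag_market_sharing(bids):
        """Flag pairs of bidders whose regions never overlap - a possible sign
        that they have divided geographic markets between them."""
        regions = defaultdict(set)
        for b in bids:
            regions[b["bidder"]].add(b["region"])
        return [(x, y) for x, y in combinations(sorted(regions), 2)
                if not regions[x] & regions[y]]

    def flag_bid_rotation(bids):
        """Flag where the lowest bid alternates between exactly two firms
        across successive tenders (e.g. A, B, A, B) - a classic rotation pattern."""
        by_tender = defaultdict(list)
        for b in bids:
            by_tender[b["tender"]].append(b)
        winners = [min(by_tender[t], key=lambda x: x["price"])["bidder"]
                   for t in sorted(by_tender)]
        alternating = all(a != b for a, b in zip(winners, winners[1:]))
        return sorted(set(winners)) if len(set(winners)) == 2 and alternating else []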

Another example of the use of AI comes from food hygiene inspections. In 2022, the Food Standards Agency piloted the use of an algorithm to predict the food hygiene rating of an establishment that was awaiting its first rating. The aim was to allow inspectors to prioritise for first inspection those newly opened establishments that were predicted to have low ratings.

It is easy to see how this type of tool could be adapted elsewhere as regulators deal with increased demands and constrained resources. For example, an algorithm could prioritise certain businesses for increased regulatory scrutiny where they are predicted to be more likely to have compliance issues. But it is also easy to see the potential pitfalls of such an algorithm. If trained on a database of past enforcement actions, it could become infected with biases based on the regulator's past decision-making. Companies could find themselves in the position of a restaurant opening on a site where previous restaurants did not clean their kitchens properly – not doomed by association, but subject to heightened scrutiny by association. The management time and the reputational impact of dealing with a regulatory investigation can be considerable, even where the investigation closes at an early stage with no adverse findings. Issues could also become self-perpetuating: an algorithm with a tendency to recommend businesses from a particular country for further scrutiny leads to more enforcement cases being opened, which in turn leads the algorithm to recommend yet more firms from that country for scrutiny.
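
The following toy simulation, written in Python, illustrates that feedback loop. It is not a model of any real regulator's system: it simply assumes that scrutiny is directed at the country with the most past enforcement cases and that every new investigation is added to the history, so an initial imbalance grows on its own.

    # Toy simulation of the self-perpetuating loop described above.
    # Assumptions (for illustration only): the algorithm recommends scrutiny of the
    # country with the most past cases, and every case opened is added to the history.

    def run_rounds(history, capacity, rounds):
        history = dict(history)
        for _ in range(rounds):
            target = max(history, key=history.get)  # most past cases wins the scrutiny
            history[target] += capacity             # ...and accumulates yet more cases
        return history

    print(run_rounds({"Country A": 6, "Country B": 4}, capacity=5, rounds=5))
    # -> {'Country A': 31, 'Country B': 4}: the gap widens every round, even though
    #    nothing about the underlying businesses has changed.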

Considerations for decision-making bodies

There are numerous practical considerations for decision-making bodies to ensure that AI is used in a way that is compliant with public law duties. One is ensuring that how the algorithm generates each result is documented in a transparent way. In its recent interim report on the governance of artificial intelligence, the House of Commons Science, Innovation and Technology Committee counted this among its twelve challenges of AI governance, referring to it as the Black Box challenge. It is also a concern of the Key principles for an alternative AI white paper, signed by a range of NGOs, which states as its first principle that transparency must be mandatory. The transparency that the NGOs advocate is transparency to those subject to or affected by decisions: at a bare minimum, people should know that AI is being used in their situation and how decisions are being made. Without transparency there can be no accountability.

Transparency will also enable decision-makers to spot issues with an algorithm at the alpha or beta stage, and allow for routine quality control once the algorithm is live. If the algorithm's rationales for its output make little sense, or show evidence of bias (another of the twelve challenges highlighted by the Committee), changes can be made. It should also enable decision-makers to show a clear, documented process in the event of a challenge.

Decision-makers will be well used to documenting the reasons for decisions, but they will need to consider all steps of the process and whether algorithms were used at any stage. For instance, AI may be able to summarise large volumes of information, and such tools could be used as part of document review exercises. If the program inadvertently summarises only the first 50 pages of a 100-page document, missing key sections, this is likely to feature heavily in any subsequent challenge by way of regulatory appeal or judicial review. A transparent, documented process would make it easier for decision-makers to spot the error before any final decision is made.
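
One way of guarding against this kind of error, sketched below in Python purely by way of illustration, is to record alongside any AI-generated summary how much of the source material was actually supplied to the tool. The function names and the 50-page limit are hypothetical; the point is simply that an audit record makes truncation visible before a decision is taken.

    # Illustrative sketch only: wrap the summarisation step so that the audit record
    # shows how much of the document was actually summarised. "summarise" stands in
    # for whichever summarisation tool is used; the page limit is hypothetical.

    def summarise_with_audit(pages, summarise, page_limit=50):
        used = pages[:page_limit]
        return {
            "summary": summarise("\n".join(used)),
            "pages_supplied": len(pages),
            "pages_summarised": len(used),
            "truncated": len(used) < len(pages),  # visible to reviewers at a glance
        }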

Ensuring that algorithms document their process should help to minimise the risk of a disconnect between how decision-makers think an algorithm is generating results and how it is actually doing so. A witness statement explaining that the decision-maker had thought the program was operating in one way when it was in fact operating in quite another may fail to convince a judge, some way down the line, that the decision is sound. The same problem can of course arise with human decision-makers – one colleague believes that another has reached their view using a certain process when in fact they have taken a different approach. But the problem is more acute when algorithms are used, as it may be difficult to piece together later the factors behind a decision that was not documented at the time: you cannot, generally, conduct a witness interview with an algorithm.

If AI programs are eventually used to make substantive decisions, a whole host of other issues will need to be addressed. Would that constitute improper delegation or a fettering of discretion, for example? Where a piece of legislation mandates that certain factors be taken into account, how can that be guaranteed and evidenced? Does procedural fairness include a right to some form of human involvement in the process? How can a regulator demonstrate that a decision was rational if it does not itself understand exactly how the decision was made? These and other considerations – which could also arise in relation to steps in regulatory processes before substantive decisions are made – should be balanced against the practical benefits and properly debated before widespread substantive use of AI takes place in the regulatory field.

Considerations for those subject to decisions

There are equally practical considerations for those on the receiving end of a decision from a public body, and the current lack of transparency highlighted by NGOs causes difficulties for organisations considering a challenge to regulatory action against them. The Algorithmic Transparency Recording Standard, launched earlier this year, gives government departments the option to disclose how they are using algorithms. This is a relatively low bar in terms of transparency. In the absence of the kind of transparency which NGOs seek in the use of AI in government, those preparing for or considering a challenge to a regulatory decision may need to consider carefully which questions to ask the relevant public authority in order to extract the right information, and may need to remind public bodies of their duty of candour.

In engaging with decision-makers, organisations should think through how different types of AI, and different datasets, may affect decisions. Inconsistencies in a dataset can have magnified effects. Take, for example, a dataset containing a separate entry for each year of a past three-year investigation into a particular business, where similar investigations into other businesses are entered only once. The algorithm treats these as three separate investigations into the business and flags it as having persistent compliance issues. Those using the data are not aware of the inconsistency, and decisions are flawed as a result. It will be increasingly important for legal teams to know what datasets might be relevant and how they were put together, and to interrogate how those datasets are being used by public authorities in decision-making.
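
The effect can be shown with a deliberately simple Python example. Here, one three-year investigation into Business A has been entered once per year, while a comparable investigation into Business B appears as a single row: counting rows makes A look like a repeat offender, whereas counting distinct investigation references does not. The data and field names are invented for illustration.

    # Illustrative only: how an inconsistently structured dataset can inflate an
    # "investigation count" unless entries are deduplicated by investigation reference.

    records = [
        {"business": "A", "investigation_ref": "INV-001", "year": 2019},
        {"business": "A", "investigation_ref": "INV-001", "year": 2020},
        {"business": "A", "investigation_ref": "INV-001", "year": 2021},
        {"business": "B", "investigation_ref": "INV-002", "year": 2020},
    ]

    row_counts = {}
    distinct_counts = {}
    for r in records:
        row_counts[r["business"]] = row_counts.get(r["business"], 0) + 1
        distinct_counts.setdefault(r["business"], set()).add(r["investigation_ref"])

    print(row_counts)                                             # {'A': 3, 'B': 1} - A looks like a repeat offender
    print({b: len(refs) for b, refs in distinct_counts.items()})  # {'A': 1, 'B': 1} - in fact one investigation each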

One practical point is that if organisations can get a sense of what algorithms will be used in the process, this can inform their preparation. If programs will be used to summarise documents, and those programs are available as part of generic office software, consider running documents through those programs prior to submission to check that key points are reflected in the summaries.

All these issues have heightened relevance in the regulatory context, where courts traditionally show significant deference to the decision-making of an expert and experienced regulator. If in fact key aspects of that decision-making are conducted by an AI program, businesses may argue that the same margin of discretion and hands-off approach cannot be justified.

Conclusion

Amongst the wave of negative press that AI has recently faced, it is important to recognise the considerable opportunities for regulators if algorithms are used well. They can make decision-making teams more productive and focus limited resources on the areas of highest impact. Certainly many of those facing regulatory action would welcome quicker and more efficient regulatory investigations if AI can assist with aspects such as document review. But the particular risks for public bodies using algorithms must be addressed. Transparency and the possibility of bias are chief among them, given the importance in public law of independent, impartial and reasoned decision-making which allows organisations to understand the decision against them in enough detail, at least, to know whether or not there is something to challenge. Those subject to decisions should be alert to the kinds of issues that algorithms can cause, and willing to pursue tailored and specific lines of inquiry. When on the receiving end of an unwelcome letter from a regulator, a company's first thought is unlikely to be "maybe an algorithm has been trained on flawed data" or "maybe a key part of our material was never read by decision-makers", but perhaps its second or third thought should be.

 

Andrew Lidbetter (Consultant, London), Nusrat Zar (Partner, London), Jasveer Randhawa (Professional Support Consultant, London), James Wood (Partner, London) and Daniel de Lisle (Associate, London)