Introduction
The use of artificial intelligence ("AI") and the question of how it should be regulated are attracting a great deal of commentary. One particularly interesting use of AI is in the context of decisions by public authorities that are governed by public law. The use of AI by the UK Government and other public bodies is increasing. Potential benefits from the use of AI include increased efficiency, improved productivity, reduced costs and the reduction of human error in some basic tasks. AI can also remove bias (although whether it does so without introducing different biases will depend on the design and operation of the AI process).
However, public bodies' decisions must also be lawful, fair, reasonable and proportionate in a public law sense. Even where AI does not solely make the decision, issues can arise. The in-depth paper linked here, produced by Andrew Lidbetter, looks at these issues and sets out some objections that those who receive decisions from public authorities (individual citizens, commercial entities or other organisations) might have to the authorities' use of AI, and how those objections could be put forward based on legal principles (including if they reach court).
Possible objections to a decision which has involved the use of AI
Those affected might have various objections to decisions which have involved the use of AI, such as where it is perceived that a computer has taken the decision rather than a human. This raises the legal principle that, save where the law allows delegation, decisions must be made by the person or body on whom the power is conferred. Delegation to AI is also restricted under the UK General Data Protection Regulation.
They may also argue that the use of AI lacks transparency, making it difficult for them to understand how decisions were made. This focuses on the potential deficiency in understanding and clarity surrounding the use of AI algorithms in decision-making processes and could also raise issues of procedural fairness. Without understanding how AI usage and decision-making align, stakeholders may find it challenging to assess whether a policy has been applied appropriately.
Discrimination is a further key potential objection, particularly if AI algorithms perpetuate existing biases in data. The paper discusses this including in the context of the Equality Act 2010.
Another significant factor is data usage. Individuals may be concerned about whether data has been used appropriately by the public authority. Here, the paper brings attention to the principles under the UK GDPR and the Data Protection Act 2018, which mandate lawful, fair, and transparent processing of personal data.
Concerns about AI decision-making aligning with published policy, and the use of AI undermining the application of the public authority's judgement and discretion, are additional potential objections. The paper discusses non-fettering of discretion, whereby public authorities should not apply a policy inflexibly and must consider individual circumstances.
Those subject to decisions might find AI decisions seemingly odd or irrational. The paper discusses the potential for challenges based on irrationality, unreasonableness, and procedural fairness where a decision is unsupported by suitable evidence or affected by a material mistake of fact. An AI-driven decision that does not provide equal treatment for those in similar circumstances could also be challenged as irrational.
The paper also discusses the specific issues that arise in obtaining information and evidence in judicial review challenges, which may exacerbate the difficulty of holding public bodies' use of AI to account - particularly the burden of proof, the use of expert evidence, the duty of candour, and the effect of short court time limits.
Avoiding or mitigating the risks of objections to the use of AI in public body decision making
Addressing the potential risks of objections is crucial in order to foster confidence in the use of AI and to reduce the risk of successful legal challenges. The final main section of the paper briefly explores strategies to eliminate or reduce these risks effectively.
First, Parliament could enact legislation that provides a framework for use of AI in decision making by public authorities. A further way of imposing standards is to adopt international instruments (although there can be differences of approach between different regimes, as has become apparent recently). With AI being a global issue, lessons might be learned from legislation elsewhere.
Secondly, there might be good practices which, if adopted, would avoid or mitigate the risk of the above objections. The Government has recently launched an AI Playbook, which sets out ten principles to be followed when using AI. These principles cover ethical usage, meaningful human control, and understanding the limitations and appropriateness of AI tools for decision-making, among other themes.
Finally, the novel challenges facing society as public authorities increasingly use AI in decision making have also led non-governmental organisations such as the Public Law Project and JUSTICE to put forward ideas which the Government might wish to consider. It is to be hoped that such reports will be studied carefully, to help secure high standards and legality in decision making that involves the use of AI by public authorities.
Disclaimer
The articles published on this website, current at the dates of publication set out above, are for reference purposes only. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action.