
The Committee on Standards in Public Life (the “Committee”) has published a report on artificial intelligence (“AI”) and its impact on public standards following the Committee’s review into this fast-developing field (the “Report”).  The Report sets out the Committee’s recommendations for the governance and regulation of AI in the public sector, aimed at ensuring high standards of conduct across all areas of public sector practice.

The Report comes at a time when public bodies increasingly seek to adopt technologically assisted, data-driven decision-making across a variety of sectors. This trend looks set to continue, albeit with many potential uses of AI in the public sector still at the development or ‘proof-of-concept’ stage. Although work on the ethical use of data is already being carried out by organisations such as The Alan Turing Institute, the Centre for Data Ethics and Innovation (CDEI) and the Information Commissioner’s Office, the Report identifies “significant deficiencies” in the UK’s existing regulatory and governance framework for AI in the public sector and a pressing need for practical guidance and enforceable regulatory standards.

The recommendations in the Report, addressed in more detail below, suggest broad, overarching changes to the regulatory and governance framework that will need buy-in from government if they are to have a tangible impact on the approach to AI across the public sector. In addition to these systemic proposals, several of the Committee’s recommendations are directed at providers of public services, both public and private. These relate both to the planning stages of projects involving AI and to their implementation, including monitoring and evaluation and the appeal and redress routes available to individuals affected by automated and AI-assisted decisions.

What exactly counts as AI?

As the Report recognises, there is no single, universally accepted definition of AI. The term can describe a broad range of processes, from simple automated data analysis to complex deep neural networks. Machine learning is an important subset of AI: machine learning systems are trained on existing datasets and identify patterns in the data. They use inference to make predictions, and can automatically hone how they function and learn from experience, without explicit programming instructions.
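
By way of illustration only, the following minimal Python sketch (using the scikit-learn library, with entirely hypothetical data and field names) shows the pattern described above: a model is trained on an existing dataset, identifies patterns in it, and then infers a prediction for a new case without any explicit decision rules being programmed.

    # Minimal sketch of supervised machine learning: the model is trained
    # on existing data and then makes a prediction by inference, without
    # explicit programming of decision rules. All data is illustrative.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical records: [income (in thousands), years at address]
    X_train = [[20, 1], [35, 4], [50, 10], [65, 12], [25, 2], [55, 8]]
    y_train = [0, 0, 1, 1, 0, 1]  # past outcomes the system learns from

    model = LogisticRegression()
    model.fit(X_train, y_train)      # the model identifies patterns in the data

    print(model.predict([[40, 5]]))  # inference: predicted outcome for a new case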

Why is regulation needed?

The increased use of AI by public bodies has the potential to provide quick, efficient and accurate solutions to challenges faced in the delivery of public services. However, these benefits come with challenges: in particular, the Committee identifies that key principles of public life such as openness, accountability and objectivity could be threatened by the use of AI if it is not correctly implemented. Transparency and data bias are key issues in the use of AI in both private and public settings, and these issues, amongst others, can have significant implications in the context of public decision-making.

If AI is to be successfully integrated into public sector life, the Report suggests it is essential that the regulatory framework inspires public confidence in the deployment of new technologies. In addition to reassuring the public, the Report notes that implementing clear standards around AI may actually increase the rate of adoption, as such standards will give public bodies greater confidence in using AI. The Government Digital Service and the Office for Artificial Intelligence have published ‘A guide to using artificial intelligence in the public sector’, intended to serve as comprehensive guidance for public bodies. Nonetheless, the Committee considers that guidance alone is insufficient and that well-established regulation is needed.

Are existing legal tools relevant?

In part, yes. The use of AI in the public sector has the potential to engage several existing legal frameworks. These include human rights law under the European Convention on Human Rights, incorporated into domestic law by the Human Rights Act 1998; equality and non-discrimination law, for example under the Equality Act 2010; data protection regimes, including the General Data Protection Regulation; the Freedom of Information Act 2000; and common law grounds of judicial review such as illegality and irrationality.

The relevance of existing frameworks is evidenced by the fact that the Administrative Court has already seen challenges to the deployment of algorithmic technologies, such as R (on the application of Edward Bridges) v The Chief Constable of South Wales Police [2019] EWHC 2341 (Admin), which examined the use of automated facial recognition technology by South Wales Police. In other jurisdictions there is also an emerging body of case law considering the use of AI by public bodies. For example, there are judgments on the high-profile ‘Robodebt’ programme in Australia; on COMPAS, a technology used in criminal sentencing in the United States; and the recent NJCM c.s. v De Staat der Nederlanden decision in the Netherlands concerning the unlawful use of the SyRI programme, which used data compiled from various public bodies to identify individuals with a higher fraud risk in relation to welfare benefits.

If AI is deployed in the UK without a clear legal framework, and in a manner which fails to uphold existing principles of good public sector practice, the likelihood of further challenges by way of judicial review is high. Whilst such challenges have the potential to serve as an important safeguard against inappropriate use of AI by public bodies, the recommendations proposed by the Committee clearly envisage that a more robust regulatory framework is needed, in addition to existing tools, to ensure that high standards of public conduct are upheld. Even with such a framework in place, it is not difficult to envisage that challenges will nonetheless arise while public bodies grapple with new technologies.

A role for a new AI regulator?

Although the Report identifies the need for a regulatory body to have responsibility for identifying gaps in the regulatory landscape for the use of AI in the public sector, the Committee concluded that a new, separate AI regulator is not necessary. Instead, it suggests that the Centre for Data Ethics and Innovation (CDEI) be given an independent statutory footing to act as a central regulatory assurance body, providing advice to existing regulators and government on how to deal with emerging AI-related issues in their respective fields (Recommendation 4). This proposal would allow existing regulators to continue to utilise their sector-specific experience whilst also having an expert regulatory body focused exclusively on AI. For this arrangement to function effectively, however, it will be important for the regulatory assurance body to have a sufficiently broad remit and powers.

Combatting data bias

As mentioned above, a key issue in the field of AI, in both the private and public sectors, is data bias. The Committee expresses concern that the prevalence of data bias poses a threat to a key principle of public life: objectivity. In order to avoid the embedding and amplification of discrimination in public sector practice, the Report calls for the application of anti-discrimination law to AI to be clarified. It suggests that the Equality and Human Rights Commission should develop guidance in partnership with The Alan Turing Institute and the CDEI on how public bodies should best comply with the Equality Act 2010 (Recommendation 3). Another important method of managing data bias is ensuring diversity within the AI teams designing and developing products. A further recommendation, aimed at both public and private providers of public services, is that they must consciously tackle issues of bias and discrimination in order to provide a fair service (Recommendation 10).
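
To make the data-bias concern concrete, a minimal Python sketch of one common fairness check follows. It compares a system’s positive-outcome rates across two groups (a ‘demographic parity’ comparison). The group labels and decisions are entirely hypothetical, and real bias audits are considerably more involved than this single metric.

    # Minimal sketch of a demographic-parity check: compare the rate of
    # positive decisions across groups. All data is hypothetical.
    from collections import defaultdict

    decisions = [                      # (protected group, decision)
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]

    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome

    rates = {g: positives[g] / totals[g] for g in totals}
    print(rates)  # positive-outcome rate per group, e.g. 0.75 vs 0.25
    print("parity gap:", abs(rates["group_a"] - rates["group_b"]))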

Making private sector waves…

The evidence gathered in the course of the Committee’s review suggests that the majority of public bodies utilising AI will rely on external private sector providers to design, produce and even manage their systems. To mitigate the potential issues arising from these arrangements, the Report suggests that public procurement requirements should ensure that private companies appropriately address public standards when developing AI solutions for the public sector, and that tenders and contractual arrangements deal with these issues explicitly (Recommendation 5).

The supply of AI systems by external private entities also poses a challenge for upholding transparency and accountability standards in the public sector. Private companies may seek to protect ‘trade secrets’, which may in turn prevent public bodies from disclosing the inner workings of the AI they use, irrespective of its impact on the public. The Report recommends that clear guidelines be established as to what information public bodies should disclose about the AI systems they use (Recommendation 8). This is particularly important given that the current proactive disclosure requirements under the Freedom of Information Act 2000 are of limited use in this novel context. The introduction of mandatory, published AI impact assessments could also enhance transparency (Recommendation 7).

What next?

The discussion on the adoption of AI in the public sector will inevitably continue to develop following the publication of this Report. Government, public bodies, regulators, private companies, technology specialists, lawyers, academics and civil society groups are just some of the actors who have a role to play in the ongoing debate. Whilst the AI regulatory and governance framework may face widespread changes in the future, for the moment at least our existing legal frameworks will need to be applied in a manner which takes account of the specific and novel issues that arise as a result of the use of AI in the public sector.

Andrew Lidbetter, Consultant, London
Nusrat Zar, Partner, London
Jasveer Randhawa, Professional Support Consultant, London
