As the ICO recently published its response to the Department for Science, Innovation and Technology’s (DSIT) long-awaited White Paper ‘AI regulation: a pro-innovation approach’, we take a closer look at the 'adaptable approach' set out in the White Paper, which aims to future-proof AI regulation through a context-specific, sector-led approach built around five common principles.
Spotlight on AI
In an effort to make the UK a leader in technology and take advantage of Brexit freedoms, the Government has announced plans to increase investment, improve regulation and stimulate growth in technology and science, with a particular focus on artificial intelligence (AI). Amongst other things, these plans include regulatory reform and a commitment of up to £3.5 billion of investment in technology and science, and have been set out in the newly released:
- 2023 Spring Budget;
- Vallance Report;
- Updated ICO guidance on AI & data protection; and
- White Paper ‘AI regulation: a pro-innovation approach’.
In recognition of the significant advancements and contributions made possible through the use of AI, the Government has credited AI as one of the five technologies of tomorrow in the UK Science and Technology Framework policy paper published on 6 March 2023.
In light of this, the White Paper was published by DSIT on 29 March 2023 with a particular focus on AI regulation, in line with the Government’s National AI Strategy and its ambition to secure the UK’s place as an AI superpower by 2030.
For further information on the 2023 Spring Budget, the Vallance Report and ICO guidance on AI & data protection please refer to our blog post here.
Current regulatory landscape
Adopting some of the recommendations in the Vallance Report, the White Paper seeks to clarify and update the current legislative regime, identifying the top six potential risks triggered by AI as risks to human rights, safety, fairness, privacy, societal wellbeing and security. While some of these risks are already regulated by existing frameworks, such as data privacy legislation, there are still gaps in the current uncoordinated approach to regulating AI. Misaligned regulatory regimes across industries make compliance more costly and time-consuming and could reduce investors’ confidence in AI businesses, pushing start-ups out of the market.
A new innovative approach
The White Paper characterises its proposed regulatory framework (the Framework) as having the following qualities:
- Pro-innovation: enabling rather than stifling responsible innovation.
- Proportionate: avoiding unnecessary or disproportionate burdens for businesses and regulators.
- Trustworthy: addressing real risks and fostering public trust in AI in order to promote and encourage its uptake.
- Adaptable: enabling us to adapt quickly and effectively to keep pace with emergent opportunities and risks as AI technologies evolve.
- Clear: making it easy for actors in the AI life cycle, including businesses using AI, to know what the rules are, who they apply to, who enforces them, and how to comply with them.
- Collaborative: encouraging government, regulators, and industry to work together to facilitate AI innovation, build trust and ensure that the voice of the public is heard and considered.
Intended to drive prosperity, increase public trust in AI and strengthen the UK’s global position in AI regulation, the Framework is designed around four key elements:
Defining AI
The Framework aims to create a common understanding of ‘artificial intelligence’, defined by its adaptivity and autonomy. These characteristics make it difficult to explain the logic behind AI outputs and assign responsibility for any outcomes. By defining AI with these characteristics, the Government hopes to leave the Framework’s remit relatively open in anticipation of new technologies.
Context-specific approach
Instead of assigning rules or risk levels to sectors or specific technologies, the Framework will apply to likely outcomes generated by AI. As part of this context-specific approach, regulators may also be required to consider the comparative risks of using AI against the cost of missing opportunities to implement the technology in their respective sectors.
Cross-sectoral principles
The Framework will be based on the following five cross-sectoral principles which were identified in the earlier policy paper ‘Establishing a pro-innovation approach to regulating AI’ published on 18 July 2022:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
The principles will initially be introduced on a non-statutory basis to allow regulators the freedom to find the most effective pro-innovation approach to implementation. Regulators will be expected to issue guidance relevant to their sectors on how the principles interact with existing legislation and how best to achieve compliance.
New regulator functions
To aid regulators’ adoption of new functions, the Government proposes to offer the following support, principally through a central monitoring and evaluation ecosystem:
- Monitoring, assessment and feedback functions to measure the regime’s overall effectiveness and survey emerging trends;
- Supporting coherent implementation of the principles through regulatory guidance for both regulators and businesses;
- Developing cross-sectoral risk assessments, testbeds and sandboxes; and
- Ensuring interoperability with international regulatory frameworks.
Technical standards
Working to technical standards specific to AI and relevant sectors will be key to ensuring smooth and coherent adoption of the Framework. AI-specific technical standards already identified in international and regional regimes address topics including risk management, transparency, bias, safety and robustness. The White Paper states that regulators will also be expected to create a flexible list of technical standards which apply to their sector.
Global interoperability and next steps
The 'adaptable approach' adopted by the White Paper is intended to future-proof regulation by empowering existing regulators to tailor context-specific, sector-led approaches in line with the principles set out above. The Government intends to avoid introducing either a new single regulator for AI governance or heavy-handed legislation, an approach that is distinct from the European Commission’s comprehensive, centralised legislative framework in the draft EU AI Act, which is set to undergo a vote by the European Parliament’s IMCO (Internal Market and Consumer Protection) and LIBE (Civil Liberties, Justice and Home Affairs) committees towards the end of April.
The White Paper makes it clear that the UK Government aims to make the UK a world leader in AI regulation. Ranked third in the world for AI publications and with an AI market generating £3.7 billion, the UK is already an active member of the OECD AI Governance Working Party and the Council of Europe Committee on AI. The Government plans to continue actively shaping the global AI regulatory framework and will consider the interoperability of the Framework as it develops.
Over the next year, the Government plans to publish an AI Regulation Roadmap, engage in partnerships with leading organisations and design a central monitoring and evaluation framework to oversee the Framework’s development. In the meantime, individuals and organisations are invited to respond to consultation questions listed in Annex C of the White Paper in relation to the Framework. The consultation closes on 21 June 2023.
For further information on the ICO’s response to the White Paper, please refer to our 'April Data Wrap' later this month; our 'March Data Wrap' can be found here.