Yesterday, European Commission President von der Leyen presented the 2023 State of the Union address which, as anticipated, included a focus on prioritising the responsible use of artificial intelligence. This is set against the global policy discussions around AI at the G7 and G20 last week, the impending UK Artificial Intelligence Safety Summit, and the publication on 31 August 2023 of the House of Commons Science, Innovation and Technology Select Committee's (the Committee) The Governance of Artificial Intelligence: Interim Report (the Report) (pdf here).

The Report follows a recently conducted inquiry into the impact of AI on several sectors. In particular, it identifies 12 challenges with the use of AI and recommends that legislation be introduced to address the regulation of AI during this parliamentary session (i.e. before the UK general election due in 2024). The Committee expresses concern that the UK will fall behind if there are delays, given the efforts the EU and US have already made to regulate AI.

Whilst the Report usefully identifies in one place the key challenges with the use of AI, these are not new concepts and the Report does not at this stage put forward solutions to address them. It will be interesting to see the extent to which progress is made in grappling with these issues through the various international cooperation efforts. We will be providing you with the key takeaways from the UK Artificial Intelligence Safety Summit in due course.

2023 State of the Union Address: The three pillars of the new global framework for AI

As part of her address, President von der Leyen acknowledged that "Europe has become the global pioneer of citizens' rights in the digital world", including through the Digital Services Act and Digital Markets Act "ensuring fairness with clear responsibilities for big tech".

The President stated "the same should be true for artificial intelligence." In particular, she referenced a recent warning from leading AI developers, academics and experts that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". In doing so, the President described a "narrowing window of opportunity to guide this technology responsibly" and a belief that "together with partners, [Europe] should lead the way on a new global framework for AI built on three pillars: (i) guardrails; (ii) governance; and (iii) guiding innovation."

The EU AI Act was mentioned as a "blueprint for the whole world" and as the "world's first comprehensive pro-innovation AI law" in the context of the guardrails pillar and ensuring AI develops in a human-centric, transparent and responsible way. In respect of the governance pillar, the President considered laying the foundations for a single AI governance system in Europe (alongside ensuring a "global approach to understanding the impact of AI in our societies"), as well as setting up a body of experts in AI to consider the risks and benefits for humanity, not dissimilar to the invaluable contribution of the IPCC for climate (a global panel that provides the latest science to policymakers), and building on the Hiroshima Process.

In respect of the final pillar, guiding innovation in a responsible way, the President announced: (i) a new initiative to open up European high-performance computers to AI start-ups to train their models; (ii) an open dialogue with those developing and deploying AI; and (iii) initiatives to establish voluntary commitments to the principles of the AI Act before it comes into force (akin to the voluntary AI rules around safety, security and trust agreed to by seven major technology companies).

The UK Governance of Artificial Intelligence: Interim Report

(A) The need for regulation now and the establishment of an international forum on AI:

The Report encourages the UK Government to move directly to legislate on AI, rather than to apply the approach set out in its White Paper of March 2023. That approach envisaged five common principles to frame regulatory activity and guide the future development of AI models and tools, and their use. These principles were not initially to be put on a statutory footing but were to be "interpreted and translated into action by individual sectoral regulators, with assistance from central support functions". The White Paper goes on, however, to anticipate "introducing a statutory duty on regulators requiring them to have due regard to the principles" when parliamentary time allows.

The Report recognises that although the UK has a long history of technological innovation and regulatory expertise, which "can help it forge a distinctive regulatory path on AI", the AI White Paper is only an initial effort to engage with AI regulation, and its approach risks the UK falling behind, given the pace of development of AI and especially in light of the efforts of other jurisdictions, principally the European Union and United States, to set international standards.

The Report suggests that "a tightly-focussed AI Bill in the next King's Speech would help, not hinder, the Prime Minister's ambition to position the UK as an AI governance leader. Without a serious, rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives—other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer."

(B) 12 essential challenges of AI identified: Of particular note, the Report identifies the challenges associated with the use of AI in general, and twelve essential challenges that AI governance must address if public safety and confidence in AI are to be secured:

  1. The Bias challenge. AI can introduce or perpetuate biases that society finds unacceptable. The Report warns that inherent human biases encoded in the datasets used to inform AI models and tools could replicate bias and discrimination against minority and underrepresented communities in society.
  2. The Privacy challenge. AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants. Particular emphasis is placed on live facial recognition technology, with the warning that systems may not adequately respect individuals' rights, currently set out in legislation such as the Data Protection Act 2018, in the absence of specific, comprehensive regulation.
  3. The Misrepresentation challenge. AI can allow the generation of material that deliberately misrepresents someone's behaviour, opinions, or character. The Report attributes the increasingly convincing dissemination of 'fake news' to the combination of data availability and new AI models. Examples given included voice and image recordings purporting to show individuals 'passing off' information, which could be particularly damaging if used to influence election campaigns, enable fraudulent transactions in financial services, or damage individuals' reputations. The Report goes on to warn of the dangers when this is coupled with algorithmic recommendations on social media platforms targeting relevant groups.
  4. The Access to Data challenge. The most powerful AI needs very large datasets, which are held by few organisations. The Report raises competition concerns caused by the lack of access to sufficient volumes of high-quality training data for AI developers outside of the largest players, and notes proposals for legislation to mandate research access to Big Tech data stores "to encourage a more diverse AI development ecosystem".
  5. The Access to Compute challenge. The development of powerful AI requires significant compute power, access to which is limited to a few organisations. Academic research is deemed to be particularly disadvantaged by this challenge compared to private developers. The Report notes that efforts are already underway to establish an Exascale supercomputer facility and AI-dedicated compute resources, with AI labs giving priority access to models for research and safety purposes.
  6. The Black Box challenge. Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements. The Report calls for regulation to ensure more transparent and more explicable AI models, suggesting that explainability would increase public confidence and trust in AI.
  7. The Open-Source challenge. Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms. This is a further example of how the Committee views the need to increase the capacity for development and use of AI amongst more widely distributed players. The Report acknowledges the need to protect against misuse, citing opinions that open-source code would allow malign actors to cause harm, for example through the dissemination of misleading content. The Committee does not conclude which approach is preferable.
  8. The Intellectual Property and Copyright challenge. Some AI models and tools make use of other people's content: policy must establish the rights of the originators of this content, and these rights must be enforced. The Report comments that whilst the use of AI models and tools has helped create revenue for the entertainment industry in areas such as video games and audience analytics, concerns have been raised about the 'scraping' of copyrighted content from online sources without permission. The Report refers to "ongoing legal cases" (unnamed, but likely a reference to Getty v Stability AI) which are likely to set precedents in this area, but also notes that the UK IPO has begun to develop a voluntary code of practice on copyright and AI, in consultation with the technology, creative and research sectors, guidance which should "… support AI firms to access copyrighted work as an input to their models, whilst ensuring there are protections (e.g. labelling) on generated output to support right holders of copyrighted work". The Report notes that the Government has said that if agreement is not reached or the code is not adopted, it may legislate. For further information around the IP-related challenges, please refer to our full blog here.
  9. The Liability challenge. If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
  10. The Employment challenge. AI will disrupt the jobs that people do and that are available to be done, and policymakers must anticipate and manage the disruption. The Report notes that automation has the potential to impact the economy and society through the displacement of jobs, and highlights the importance of planning ahead through an assessment of the jobs and sectors most likely to be affected. It also highlights the Prime Minister's view that the UK should be cognisant of these "large-scale shifts" and should provide people with the necessary skills to thrive in the technological age.
  11. The International Coordination challenge. AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking. The Report compares the UK's pro-innovation strategy, the risk-based approach of the EU, and the US's priority of ensuring responsible innovation with appropriate safeguards to protect people's rights and safety. These divergent approaches contrast with the shared global implications of "ubiquitous, general-purpose" AI technology, as heard by the Committee's inquiry, and the Report therefore calls for a coordinated international response.
  12. The Existential challenge. Some people think that AI is a major threat to human life: if that is a possibility, governance needs to provide protections for national security. The 2023 AI White Paper deemed such existential risks "high impact but low probability", but debate continues as to whether such a prospect is realistic. The Report suggests using the international security framework governing nuclear weapons as a template for mitigating AI risks. It calls for the Government to address each of the twelve challenges outlined and makes clear the growing imperative to accelerate the development of public policy thinking on AI "to ensure governance and regulatory frameworks are not left irretrievably behind the pace of technological innovation".

UK AI Safety Summit

The Report welcomes the global AI Safety Summit, due to be hosted in the UK on 1 and 2 November this year, calling for it to address the challenges identified in the Report and to advance a shared international understanding of the challenges and opportunities of AI. The UK Government has since set out the focus of the Summit, centring on the risks created or significantly exacerbated by AI and on how safe AI can be used for public good. The aim is to make frontier AI safe, ensuring nations and citizens globally can realise the benefits of AI.

The Summit will be framed by the following five objectives:

  1. a shared understanding of the risks posed by frontier AI and the need for action
  2. a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
  3. appropriate measures which individual organisations should take to increase frontier AI safety
  4. areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
  5. a showcase of how ensuring the safe development of AI will enable AI to be used for good globally

UK Frontier AI Taskforce

Ahead of the Summit, the UK Government also launched the Frontier AI Taskforce (previously named the Foundation Model Taskforce), to drive forward cutting-edge research, build UK capabilities, and lead the international effort on AI safety, research, and development.

The Taskforce, chaired by Ian Hogarth, has released its first progress report. This sets out the AI researchers and key UK national security figures that form its expert advisory board. The progress report states that the Taskforce is building on and supporting the work of leading technical organisations rather than "starting from scratch". The initial set of partnerships includes ARC Evals, Trail of Bits, The Collective Intelligence Project, and the Center for AI Safety.

Through effective collaboration, the Taskforce can help deliver on Challenges 4 and 5 of the Committee's Report outlined above. The progress report confirms that leading companies such as Anthropic, DeepMind and OpenAI are giving government AI researchers deep model access, and that through No10 Data Science ('10DS') its engineers and researchers will have the compute infrastructure necessary for AI research inside government to excel.

The progress report contains continued praise for the current team members, while urging more technical experts and organisations to apply to join the Taskforce. This reflects the Taskforce's aim of growing the team "by another order of magnitude" because "moving fast matters", particularly with the upcoming AI Safety Summit.

Claire Wiseman, Professional Support Lawyer, London
Rachel Montagnon, Professional Support Consultant, London