
Update: Following the prorogation of Parliament on 24 May 2024, the Criminal Justice Bill (including the proposal to create a new offence of making sexually explicit deepfakes) will not progress further through Parliament. However, the existing offence of sharing such deepfakes, introduced by the Online Safety Act 2023, remains in force, having taken effect on 31 January 2024. We will be monitoring steps taken by the new government in relation to the regulation of deepfakes – watch this space.

On 16 April 2024, the UK Government announced its proposal for a new law criminalising the creation of sexually explicit deepfakes (see press release here). The new offence will be introduced as an amendment to the upcoming Criminal Justice Bill. This builds on the existing offences for sharing 'deepfake' intimate images, which were first introduced as a priority offence under the Online Safety Act ("OSA").

While the new offence focuses on the criminalisation of making sexually explicit deepfakes and is a landmark development for the protection of women and girls, it also reaffirms broader trends in the UK's approach to regulating AI and online safety and adds to the growing list of legislation addressing the potential harms of AI.

With a general election date now set for 4 July 2024, it remains to be seen whether any new Government would take a more pro-regulatory approach to regulating this emerging technology.

Crackdown on intimate deepfakes

The use of deepfakes is becoming increasingly widespread online. While this can often be for legitimate and commercially beneficial purposes (see for example our blog post on the use of deepfakes in advertising), reports of harmful deepfake images, including intimate and/or sexually explicit deepfakes, are also on the rise. The discussion on this type of image-based abuse came to a head at the end of January this year with the release of deepfake pornographic images of singer Taylor Swift.

In response to public sentiment on the issue, the UK Government has strengthened the legal framework on deepfakes by announcing an amendment to the Criminal Justice Bill criminalising the creation of deepfakes under a new offence of "faking intimate photographs or films using digital technology" (as set out in the latest Amendment Paper dated 13 May 2024). This marks a change from the Government's position in February 2024, when it initially chose to include in the Criminal Justice Bill an offence for sharing deepfakes, but not an offence for making them. That earlier position rested on the Law Commission's 2022 report on "Intimate image abuse", which found "insufficient evidence of harm to justify the criminalisation of making an intimate image which is not subsequently shared or threatened to be shared".

The new offence builds on existing offences relating to such intimate image abuse including:

  • the offence of sharing intimate images (including deepfakes), which came into force on 31 January 2024 through section 188 of the OSA, inserting section 66B into the Sexual Offences Act 2003; and
  • the 'upskirting' offences introduced in 2019, which the Criminal Justice Bill is also updating to include two more serious offences: taking or recording an intimate image or film with intent to cause alarm, distress or humiliation, or for the purpose of sexual gratification.

Under the new offence, individuals who create or design an intimate image of another person "using computer graphics or any other digital technology" for the purpose of causing that person alarm, distress or humiliation may face a criminal record and an unlimited fine. The new offence does not require perpetrators to intend to share the image, but if they do so, they may also be charged under the offence of sharing an intimate image and may face up to two years' imprisonment. Notably, the subject's consent is a valid defence to both the 'making' and 'sharing' offences in relation to sexually explicit deepfakes. The new offence will come into force alongside the other new offences in the Criminal Justice Bill, which is currently making its way through the House of Commons.

UK's approach to regulating deepfakes and AI

The new law also reaffirms the UK's technology-neutral approach to AI regulation (as set out in the UK Government's AI White Paper): notably, the text of the legislation does not refer to deepfakes but to the use of "computer graphics or any other digital technology". As seen in other sectors, most recently the financial sector (see the FCA's AI Update here, published last month), the UK's approach has been to rely on existing legal frameworks to regulate AI, in an effort to avoid legislation which may quickly become outdated as AI technologies advance. In further support of this sector-led approach, existing regulators are envisaged to retain a key role in implementing the UK's agile approach to AI, with the Government empowering them to create targeted measures in line with five common principles and tailored to the risks posed by different sectors (see our blog post here on the UK's AI White Paper Response and the LLM and Generative AI report released in February).

That said, it remains to be seen whether this light-touch approach will survive in full under a potential new Government later in the year, or whether any new Government will want to align itself more closely with the EU.

In contrast, in the EU, the recent EU AI Act, adopted by the European Parliament on 13 March 2024, specifically addresses AI regulation and the regulation of deepfakes (see our blog post on the EU AI Act here). Article 52(3) of the final text of the Act provides that "users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake'), shall disclose that the content has been artificially generated or manipulated." Though the AI Act does not go so far as to ban deepfakes, the AI Office is expected to prepare codes of practice providing further guidance on the labelling of deepfakes. It remains to be seen whether the UK will follow suit in due course; however, stakeholder responses to the UK Government's AI White Paper "stressed the importance of transparency…[with some suggesting] that labelling AI use would be beneficial to users, particularly in regard to building literacy around potentially malicious AI generated content, such as deepfakes and disinformation." The UK Government is also expected to launch a call for evidence later in the year on "AI-related risks to trust in information and related issues such as deepfakes". This is likely to shed further light on the UK's direction of travel.

Other jurisdictions have also shown signs of a crackdown on deepfakes. China, for example, has adopted regulations specifically targeting deepfakes following major scandals in the past year, including the removal of the viral deepfake app ZAO from app stores (see our blog post here). In the US, which has taken a much more pro-innovation approach to AI regulation, there are currently no federal laws against the sharing or creation of deepfake images. However, bills are being proposed, including the US House's Preventing Deepfakes of Intimate Images Act, the US Senate's NO FAKES Act and the US House's No AI FRAUD Act (though it is yet to be seen how these will be received).

With regard to criminalising the creation of deepfakes, we are seeing targeted legislation around harmful and sexually explicit deepfakes. For example, similarly to the UK, the European Commission announced on 6 February 2024 its proposal for a Regulation laying down rules to prevent and combat child sexual abuse, which includes the criminalisation of deepfakes depicting child sexual abuse.

The road after the Online Safety Act 2023

Alongside continued efforts to regulate AI in the UK, the new offence also highlights the UK Government's commitment to combatting online harms following the introduction of the OSA. In addition to the offence of sharing intimate deepfakes, a number of Communication Offences came into force under the OSA on 31 January 2024, criminalising:

  • 'cyberflashing' (section 66A Sexual Offences Act 2003, as inserted by section 187 OSA);
  • 'revenge porn' (also under section 66B Sexual Offences Act 2003, as inserted by section 188 OSA);
  • sending fake news and other false communications (section 179 OSA);
  • sending death threats and other threatening communications (section 181 OSA);
  • 'epilepsy trolling' i.e. sending or displaying flashing images electronically (section 183 OSA); and
  • encouraging or assisting serious self-harm (section 184 OSA).

Some of these offences may result in up to five years' imprisonment. On 19 March 2024, the UK also saw its first cyberflashing conviction under the OSA, with the perpetrator being sentenced to 66 weeks in prison. The new offence complements the suite of Communication Offences and shows how the UK is continuing to think about online harms and ways to keep legislation up to date with rapid developments in technology.

For further information on this and more around deepfakes, please join us as we host the SCL session "Generative AI and Deepfakes: Understanding the Illusion" on 2 July 2024. More information is available here.


Hayley Brady
Partner, Head of Media and Digital, UK, London

Claire Wiseman
Professional Support Lawyer, London

Rachel Kane
Senior Associate, London
