
UK – PLATFORM REGULATION (AND OTHERS)

In a bid to reduce the volume of misinformation that emerged online during the Covid-19 pandemic, many platforms introduced new policies that rely on machine-based moderation. Against the backdrop of a changing policy landscape, the CDEI sets out the outcomes of an expert forum held in 2020, including recommendations and next steps to help mitigate misinformation.

Key date(s)

  • July 2020 – The Centre for Data Ethics and Innovation ("CDEI") hosts an expert forum with a range of stakeholders, including platforms, fact-checking organisations, media groups, and academics.
  • 5 August 2021 – The CDEI publishes a report (the "CDEI Report") on the role of AI in addressing misinformation on social media platforms, which details the findings from the expert forum.

Status

  • Issues relating to misinformation became prevalent during the Covid-19 pandemic, as false information uploaded to social media platforms put the general public at risk. Owing to a decline in the availability of human moderators, many social media platforms turned to algorithmic vetting of content to identify misinformation; however, AI is not able to detect the subtleties and nuances of all misinformation. Conversely, AI that wrongly flags harmless content as harmful can lead to unintended censorship.
  • To address these issues, the CDEI hosted an expert forum which sought to understand:
    • the role of algorithms in addressing misinformation on platforms, including what changed during the pandemic and the limits of what algorithms can do;
    • how much platforms tell us about the role of algorithms within the content moderation process, and the extent to which there should be greater transparency in this regard; and
    • views on the effectiveness of platform approaches to addressing misinformation, including where there may be room for improvement in the immediate future.
  • The CDEI Report sets out the key findings emerging from the debates surrounding moderation and misinformation with a view to ensuring the technical efficacy of content moderation tools and policies.

What it hopes to achieve

  • The CDEI aims to improve the efficacy of moderation tools by working with the Department for Digital, Culture, Media & Sport ("DCMS") on the Online Safety Data Initiative, which will enable better and safer access to high-quality datasets that can be used to train AI moderation tools.
  • Misinformation is generally legal (though potentially harmful), meaning that platforms can choose whether and how to address it. This results in a range of policies and approaches to addressing misinformation (e.g. removing content, labelling content, promoting authoritative sources of information, etc.).
  • The key considerations discussed to address misinformation on social media platforms include:
    • pre-moderation, which would result in greater platform liability for publishing content;
    • downranking content by adjusting platform recommendations (illustrated in the sketch after this list);
    • promoting authoritative data by prioritising trusted and credible sources, adding labels and fact-checking; and
    • increasing transparency about the policies and processes which platforms put in place.
  • The overall recommendation was that platforms should be more transparent about how algorithms are used, and that further research is merited to improve users' collective understanding.
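
To make the downranking option concrete, the following is a minimal sketch in Python. The Post fields, the misinfo_score (assumed to be the output of a misinformation classifier) and the penalty_weight are hypothetical illustrations, not a description of any platform's actual ranking system.

```python
# A minimal sketch of "downranking": reducing the reach of suspected
# misinformation by penalising its recommendation score rather than
# removing it. All names and weights here are hypothetical; the CDEI
# Report describes the approach only at a policy level.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    engagement_score: float  # the platform's usual relevance/engagement signal
    misinfo_score: float     # classifier output in [0, 1]; 1 = likely misinformation


def ranked_feed(posts: list[Post], penalty_weight: float = 0.8) -> list[Post]:
    """Order posts for a user's feed, demoting likely misinformation.

    The adjusted score shrinks towards zero as the classifier's
    misinformation score rises, so dubious content surfaces less
    often without being deleted outright.
    """
    def adjusted(post: Post) -> float:
        return post.engagement_score * (1.0 - penalty_weight * post.misinfo_score)

    return sorted(posts, key=adjusted, reverse=True)


if __name__ == "__main__":
    feed = ranked_feed([
        Post("a", engagement_score=0.9, misinfo_score=0.7),  # popular but dubious
        Post("b", engagement_score=0.6, misinfo_score=0.1),  # credible
    ])
    print([p.post_id for p in feed])  # the credible post "b" now ranks first
```

Downranking of this kind leaves content accessible, which sidesteps some of the censorship concerns noted above, while limiting the algorithmic amplification that helps misinformation spread.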

Who does it impact? 

  • The CDEI's recommendations are primarily addressed to social media platforms, including websites and applications hosting user-generated content. This includes key players such as Facebook, Twitter, TikTok and YouTube, which focus on providing a personalised and relevant experience for their users.
  • However, the CDEI Report also provides practical information for users of platforms and for other stakeholders such as media groups and academics.

Key points 

  1. The role of algorithms
    • Algorithms are essential to the content moderation process: they filter out banned content, detect signs of misinformation and make content decisions automatically at a speed and scale that would not be possible for human operators alone. Despite this useful role, algorithms are not a cure-all, as they often wrongly flag compliant content as rule-breaking or fail to catch content that does break the rules. AI is inherently poor at contextual interpretation and is not yet able to identify misinformation with complete certainty.
  2. Rapid spread of misinformation due to over-reliance on algorithms
    • Due to the reduction in the moderation workforce during the Covid-19 pandemic, platforms' increased reliance on algorithms led to substantially more content being incorrectly identified as misinformation. This is because misinformation is often subtle and context-dependent, making it difficult for automated systems to analyse.
  3. Social media platforms as catalysts for the spread of misinformation
    • Platforms have faced criticism for acting as catalysts for the spread of misinformation while doing too little to curtail its negative impacts. It is up to each platform to decide how to address misinformation, commonly by removing rule-breaking content or labelling content to indicate that it may be false.
  4. Greater transparency under the new Online Safety Bill
    • Transparency measures will allow for a better understanding of the policies and processes put in place to address misinformation. The Online Safety Bill would establish a duty of care requiring online services to improve safety for their users, with Ofcom overseeing compliance. The Bill would also require Ofcom to establish an advisory committee on disinformation and misinformation, and transparency reports would be used in this context. The CDEI Report recommends that platforms also disclose the precision of their detection algorithms, how those algorithms are trained, the metrics used to measure outcomes, and how borderline rule-breaking content is defined (see the sketch below for what disclosing precision might involve).
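
As an illustration of the kind of performance disclosure the CDEI Report envisages, the sketch below computes precision and recall for a hypothetical misinformation detector from labelled moderation decisions. The sample data is invented for the example; a real disclosure would rest on an audited sample of moderation outcomes.

```python
# A minimal sketch of the performance metrics the CDEI Report suggests
# platforms disclose for detection algorithms. The labelled decisions
# below are invented for illustration.

def precision_recall(decisions: list[tuple[bool, bool]]) -> tuple[float, float]:
    """Compute (precision, recall) from (flagged, actually_misinfo) pairs.

    Precision: of the content the algorithm flagged, how much really was
    misinformation (low precision = unintended censorship).
    Recall: of the actual misinformation, how much was caught
    (low recall = harmful content slipping through).
    """
    true_pos = sum(1 for flagged, actual in decisions if flagged and actual)
    flagged_total = sum(1 for flagged, _ in decisions if flagged)
    actual_total = sum(1 for _, actual in decisions if actual)
    precision = true_pos / flagged_total if flagged_total else 0.0
    recall = true_pos / actual_total if actual_total else 0.0
    return precision, recall


sample = [
    (True, True), (True, False),    # one correct flag, one over-removal
    (False, True), (False, False),  # one missed item, one correct pass
]
print(precision_recall(sample))  # -> (0.5, 0.5)
```

Low precision corresponds to the unintended censorship problem (compliant content wrongly flagged), while low recall corresponds to misinformation slipping through; disclosing both would let regulators and researchers see the trade-off a platform has chosen.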

Links

CDEI convenes expert group to advise on Online Safety

CDEI Policy Paper on the Role of AI on social media

Review of online targeting

Draft Online Safety Bill

Online Safety Data Initiative


Related developments

Online Safety Bill Published

UK National AI Strategy

CMA research paper on algorithms

This blog post provides an overview of a key recent or upcoming development in digital regulation in the UK or EU as part of our horizon scanning timeline, which can be found below.

VIEW DIGITAL AND REGULATION TIMELINE

Key contacts

Hayley Brady

Partner, Head of Media and Digital, UK, London

James Balfour

Senior Associate, London