As the Online Safety Bill begins its journey in the House of Lords, in this blog we recap the key changes made to the Bill since it was first introduced in the House of Commons and the likely impact of those changes.
In our further blog "The Online Safety Bill: Crystal ball gazing on the road ahead – what can we expect?" we provide insight into what we might expect for the Bill as it moves through the House of Lords, based on the discussions during the Second Reading in the House of Lords on 1 February 2023.
Key takeaways
The Bill was refined in the House of Commons in response to criticism from stakeholders. Key changes since it was first published include:
- Introduction of the duty for providers to minimise fraudulent adverts: Category 1 and 2A services now have duties to reduce users' exposure to fraudulent advertisements on their platforms, including through the use of preventative measures and content removal, in order to combat the rise of scam adverts online.
- Removal of "legal but harmful" concept for adults: In light of concerns about restricting free speech, duties in respect of "legal but harmful" content for adults have been removed from the Bill and replaced with a so-called "triple shield" approach. This has given rise to a greater focus on: (i) the content and enforcement of content moderation in provider terms of service; as well as (ii) duties to provide adults with user empowerment tools to filter out harmful content.
- Inclusion of priority offences in primary legislation: Illegal content includes certain specific priority offences. The Bill now sets out types of priority offences in primary legislation (including child sexual abuse content, and fraudulent and violent content) to ensure that Ofcom can take enforcement action as soon as possible, rather than waiting for the offences to be included in secondary legislation.
- Duty to protect news publisher content: The Bill also introduces obligations to comply with new duties protecting "news publisher content". These include notifying news publishers before removing their content and, for Category 1 services, assessing the impact the regime has on the availability and treatment of news publisher content and journalistic content.
The DCMS's January 2023 policy paper outlined the expected impact of these changes to the Bill, in lieu of the more detailed enactment impact assessment which will be published following Royal Assent.
Whilst the DCMS expects that the focus on transparency and accountability in terms of service for Category 1 services may result in additional costs to those platforms, these costs are likely to be lower than if the more onerous duties in respect of content that is harmful to adults were still in place. The broader transparency, accountability and freedom of expression duties are likely to mean that affected online platforms need to deploy enhanced content moderation measures to ensure compliance. The additional obligations to protect news publisher content may also create an enhanced need to conduct impact assessments on freedom of expression and privacy.
As the Bill progresses through the House of Lords, we will continue to monitor the impact of subsequent changes for affected platforms.
Background and market context
The Online Safety Bill (the "Bill") is a comprehensive package which seeks to combat illegal and harmful practices online alongside introducing greater accountability for online platforms and search engines and protections of freedom of speech. It was drafted against the background of:
- the UK Government's 2019 Online Harms White Paper which outlined plans for a new statutory duty of care owed by online platforms to their users; and
- the European Digital Services Act ("DSA") which introduced new rules to increase the responsibilities of digital services providers in the EU, including tougher regulation on the management of illegal content. For further information on the DSA, please see our blog post here.
Initial Draft Bill
On 13 May 2021, the UK Government published its proposed draft of the Bill (the "Initial Draft Bill"), which was broad in scope and had extra-territorial reach, applying both to (i) internet services that allow users to share user-generated content and (ii) providers of search engines.
The Initial Draft Bill introduced:
- Provisions on "legal but harmful" content
- The Initial Draft Bill distinguished between duties and obligations on service providers in respect of content that is "harmful to children" and content that is "harmful to adults", and defined harmful content in both contexts.
- Legal "duties of care"
- Duties of care were imposed on providers of internet services that allow users to share user-generated content (user-to-user services) and providers of search engines (search services).
- These duties included undertaking illegal content risk assessments and protecting the online safety of children and vulnerable users, as well as duties regarding freedom of expression.
- A tiered, risk-based approach to obligations
- Larger / higher-risk online platforms and services were subject to greater regulation, with three key categories established, ranging from Category 1 "high reach" user-to-user services (such as Facebook, Instagram, TikTok and YouTube) to Category 2A and 2B services covering smaller businesses meeting certain threshold conditions. These thresholds will be set out in regulations made by the Secretary of State.
- Extra-territorial reach
- The Initial Draft Bill applied to regulated services that had "links with the UK" – i.e. those services that have a significant number of users in the UK, target the UK market or can be used in the UK by individuals in circumstances where there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK. This applied regardless of whether the regulated service was based in the UK itself.
- Steep fines
- These statutory duties were backed by fines for non-compliance of up to £18 million or 10% of annual worldwide revenue (whichever is greater), alongside a range of other enforcement powers granted to Ofcom, including powers to block access to sites.
For further information on the 13 May 2021 draft of the Bill, please see our earlier blog here. All our previous coverage of the Bill can also be found here.
What has changed since the Bill was first published?
In response to criticism from stakeholders (considered in further depth below), the Government made changes to strengthen and refine the Initial Draft Bill, resulting in the current version of the Bill (the "Current Draft Bill"). In a policy paper dated 18 January 2023, the Government listed eight key changes that had been made, which include:
1. Introduction of duty requiring services to minimise fraudulent ads
An explicit new duty was introduced requiring Category 1 and Category 2A services to take action to minimise the likelihood of fraudulent adverts being published on their service. The intention is to make it more challenging for fraudsters to share scams online, and Category 1 and Category 2A service providers could face enforcement action should they fail to take appropriate action.
Sections 33 and 34 of the Current Draft Bill require that service providers:
- prevent individuals from encountering fraudulent adverts;
- minimise the length of time fraudulent adverts are on the service; and
- when made aware of such content, remove it quickly.
2. Removal of the "legal but harmful" concept from primary legislation
Previous drafts of the Bill included a definition of "legal but harmful" content and introduced duties on providers of user-to-user services to remove such content. This proved controversial: Members of Parliament, free-speech groups, and the public objected to the concept of Government-defined "legal but harmful" speech. There was concern that companies would be incentivised to "over-remove" legal but potentially harmful content, in turn further jeopardising freedom of speech online.
In response to this criticism, the Government announced in November 2022 that:
"Any incentives for social media firms to over-remove people’s legal online content will be taken out of the Online Safety Bill. Firms will still need to protect children and remove content that is illegal or prohibited in their terms of service, however the Bill will no longer define specific types of legal content that companies must address."
The Current Draft Bill therefore removes the duties in relation to content which is lawful but potentially harmful to adults. These duties have been replaced with an increased focus on transparency, and a requirement for Category 1 service providers to give adult users greater control over the content they see. The Government has introduced what it refers to as the "triple shield", which requires online platforms to:
- remove illegal content;
- remove content prohibited by the platform's own terms and conditions; and
- provide users with tools to tailor the content they see and filter out potentially harmful content (children will automatically have this filter applied to their accounts).
More specifically, the safety duties in respect of illegal content build on the principle of safety by design. Service providers' duties include taking or using proportionate measures relating to the design or operation of the service to:
- prevent individuals from encountering "priority" illegal content by means of the service;
- mitigate the risk of the service being used to commit or facilitate a priority offence (see below), as identified by the most recent illegal content risk assessment; and
- mitigate the risk of harm to individuals, as identified by the most recent illegal content risk assessment.
Other duties include using proportionate systems and processes designed to minimise the length of time for which any priority illegal content is present and, where the provider becomes aware of the presence of illegal content, to take it down swiftly.
Crucially, the third duty requires that legal but harmful content be filtered out of children's online feeds. Priority categories of legal but harmful content for children will be set out in secondary legislation, alongside related Ofcom guidance.
Content which adults should be empowered to filter out is categorised in the Bill and includes material:
- relating to suicide and self-harm;
- relating to eating disorders; or
- which abuses people on the basis of protected characteristics (such as race, gender, and religion).
Ofcom will in due course produce guidance on what is and is not likely to fall within these categories.
The Government's view is that the "triple shield" addresses the concerns raised regarding the Initial Draft Bill whilst preserving freedom of speech, given that:
- companies will not be able to remove content or ban users unless the content violates the platform's terms of service or is illegal; and
- users must have an effective right of appeal when banned or restricted.
3. Inclusion of priority offences in primary legislation
Social media companies must actively remove all types of illegal content identified in the Bill and stop children and adults from seeing it; "illegal content" for these purposes includes content amounting to "priority offences". The inclusion of priority offences in primary legislation will mean, according to the Government, that "Ofcom can take faster enforcement action against tech firms which fail to remove the named illegal content, rather than waiting for the offences to be made a priority in secondary legislation".
The Current Draft Bill includes an extensive list of illegal content that platforms must remove, ranging from sexual abuse material, to fraud and violence, to people smuggling and suicide. This incorporates recommendations from the Law Commission, including criminalising the act of intentionally sending flashing images to a person with epilepsy.
4. Duty to protect news publisher content
While seeking to reduce the prevalence of harmful content, the Current Draft Bill also introduces obligations to comply with new duties protecting "news publisher content", defined as:
- content generated by a "recognised news publisher"; or
- content uploaded by a recognised news publisher which is shared or reproduced in full.
Obligations for platforms with respect to news publisher content include obligations to:
- contact news publishers prior to taking any removal action or applying warning labels;
- notify a recognised news publisher and offer a right of appeal before taking action against its account or content, unless the content constitutes a relevant offence under the Bill or the platform would incur criminal or civil liability by hosting it; and
- assess the impact the regime has on the availability and treatment of news publisher content and journalistic content (this applies to Category 1 services only).
5. Further changes
Further changes to the Bill include:
- a requirement on all service providers that publish or display pornographic content on their services to prevent children from accessing this content;
- the removal of the requirement to defer the power to bring in criminal sanctions for failure to comply with information notices (these sanctions will instead be introduced as soon as possible after Royal Assent); and
- ensuring that all categories of harmful content accessed by adults will be voted on by Parliament (these categories will subsequently be set out in secondary legislation).
What is the likely impact of these changes?
On 18 January 2023, the Department for Digital, Culture, Media & Sport ("DCMS") published a policy paper outlining the expected impact of the changes made to the Bill, which include:
- Updates to Terms of Service: The Current Draft Bill requires Category 1 services to set terms of service in relation to the restriction or removal of user-generated content which contain sufficient detail for users to understand what type of content is permitted on the platform. The DCMS expects that this may result in additional costs to these platforms, as they are likely to need to amend their terms of service to be clearer and more detailed. However, the DCMS notes that these costs are likely to be lower than if the more onerous legal but harmful duties were still in place.
- More stringent content moderation measures: The updated transparency, accountability and freedom of expression duties in the Current Draft Bill are broader in scope than those in the Initial Draft Bill, as they apply to all content which services remove or restrict access to, instead of a defined list of priority harmful content to adults. This is likely to result in affected online platforms needing to deploy enhanced content moderation measures for compliance.
- Enhanced impact assessments on freedom of expression and privacy: The DCMS notes that the requirement for Category 1 services to notify news publishers and offer an appeal route before taking action against their accounts or content may result in an enhanced need to conduct impact assessments on freedom of expression and privacy.
- Impacts on the criminal justice system: The introduction of new offences relating to sending flashing images to people with epilepsy is expected by the DCMS to result in additional costs to law enforcement and the criminal justice system. However, this is balanced with the expected impact of providing greater protection to victims and the community.
In the policy paper, the DCMS acknowledged that some of these changes are likely to result in additional costs to online platforms (compared with the final impact assessment published when the Bill was first introduced into Parliament) as providers seek to navigate the updated requirements. The Government also intends to improve its estimate of the likely costs of the Bill through further engagement with businesses as part of the pre-legislative scrutiny process, and will set out these estimates in a detailed enactment impact assessment to be published following Royal Assent.
Next steps
Following a series of delays, the Bill was introduced into the House of Lords and received its Second Reading on 1 February 2023. Please refer to our blog "The Online Safety Bill: Crystal ball gazing on the road ahead – what can we expect?", in which we provide insight into what we might expect for the Bill as it moves through the House of Lords.
For further background, please refer to our previous blogs here, here and here.