The last few years have seen proliferating efforts, ranging from ‘hands off’ to much more interventionist, by governments, regulators and courts across the globe to regulate the moderation of online content.
In Australia, the government has convened Parliamentary hearings, regulators have updated their strategic priorities and taken enforcement action, and new laws have been proposed or enacted.
The intensity of this regulatory activity is increasing each year, and we anticipate 2022 will be no different, especially given next year’s wide-ranging Parliamentary inquiry into the impacts of online harms on Australians and the introduction of a new law to unmask anonymous online trolls making harmful defamatory comments.
Though there has been an effort to harmonise some of the existing legislation in this space, particularly through the Online Safety Act, the legal and regulatory framework remains fragmented. This fragmentation, along with the potentially conflicting values driving policymaking (such as safety, speech and privacy), makes it challenging for companies to adopt content moderation practices and procedures that will withstand government, regulatory and public scrutiny.
Key questions companies should be asking themselves to prepare for such scrutiny centre on how they moderate three broad categories of content:
Misinformation and disinformation: As set out in the Australian Code of Practice on Disinformation and Misinformation, this includes verifiably false, misleading or deceptive content that is propagated on digital platforms and is reasonably likely to cause harm to democratic political processes or to public goods, such as public health.
Online harms: Principally regulated by the Online Safety Act 2021, which comes into effect on 23 January 2022. Under the Act, harmful online content includes cyberbullying and abuse material, non-consensual intimate images, restricted material and material depicting abhorrent violent conduct.
Misleading online advertising: Representations made online, such as through search or display advertising, that mislead or deceive, or are likely to mislead or deceive (whether intentionally or not), reasonable members of a certain class of the public.
When it comes to regulating misinformation and disinformation, the Australian government has been relatively hands off in its approach. In part, this may be because of the vexed issue of responsibility: does the government step in, define what content is considered misinformation and disinformation, prescribe its removal, and thereby face inevitable criticisms of censorship as well as legal challenges? Or should it leave moderation to platforms, and so leave the regulation of issues of democratic importance, such as freedom of speech, to private companies with broad reach?
To date, the government’s approach has been to call on digital technology companies to self-regulate under the industry code, the Australian Code of Practice on Disinformation and Misinformation. The Code takes a harms-based, flexible and proportionate approach to content moderation. It focuses upon ensuring signatories are transparent in how they achieve the Code’s core objective of safeguarding Australian users against harms caused by misinformation and disinformation.
In doing so, the Code supports the range of actions signatories take to address these harms.
The Code, and the government’s approach, have not been without criticism. This includes criticism from members of the government itself, who have questioned whether the Code goes far enough. Senior members, such as the Minister for Communications, have asserted that the government may regulate directly if it considers the Code to be ineffective, potentially following the European Union, which has moved from a voluntary to a more mandatory co-regulatory model for its Code of Practice on Disinformation.
In contrast to its approach to misinformation and disinformation, the Australian government has taken the legislative pathway for other harmful online conduct and content, such as cyberbullying and abuse material, the sharing of non-consensual intimate images, material that is refused classification or restricted, and material depicting abhorrent violent conduct.
This year, it passed the Online Safety Act, which updates Australia’s online safety framework by amending, or repealing and replacing, previous laws, such as the Enhancing Online Safety Act. The Act empowers the eSafety Commissioner to take a range of actions to address online harms, and to do so against a range of internet-related companies, including social media platforms, messaging companies, internet service providers and providers of app stores, web browsers and web hosting services.
Among other provisions, the Act establishes a takedown regime, requiring companies to remove content that has been the subject of a user complaint. If they do not comply within 48 hours of receiving the complaint, the Commissioner can issue a notice requiring its removal within 24 hours.
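To make those timeframes concrete, the following is a minimal sketch in Python of how a provider might track the two deadlines described above. It assumes a simplified reading of the regime (48 hours from the user complaint, then 24 hours from any removal notice); the class and method names are illustrative only and are not drawn from the Act.

```python
# Minimal sketch: tracking the two takedown deadlines described above.
# Assumes 48 hours from a user complaint to act, and 24 hours from any
# subsequent removal notice. Names are illustrative, not from the Act.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class TakedownClock:
    complaint_received: datetime                      # when the user complaint reached the provider
    removal_notice_issued: Optional[datetime] = None  # set if the Commissioner later issues a notice

    def complaint_response_deadline(self) -> datetime:
        # 48 hours from the complaint before a removal notice may follow
        return self.complaint_received + timedelta(hours=48)

    def notice_compliance_deadline(self) -> Optional[datetime]:
        # 24 hours from the Commissioner's removal notice, if one has been issued
        if self.removal_notice_issued is None:
            return None
        return self.removal_notice_issued + timedelta(hours=24)

clock = TakedownClock(complaint_received=datetime(2022, 1, 24, 9, 0))
print(clock.complaint_response_deadline())  # 2022-01-26 09:00:00
```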
Whilst social media platforms may be accustomed to such notices, other companies, such as hosting providers or app store operators, may not be. Furthermore, even social media platforms may not be accustomed to other powers given to the Commissioner, including strengthened information-gathering and investigatory powers.
The government is also currently consulting on whether the Act should establish a more proactive requirement for service providers to take reasonable steps to ensure safe use and minimise unlawful or harmful content or conduct. Some of these steps are already taken by many providers, including having processes to detect, moderate, report and remove such content or conduct, expecting employees to promote online safety, and assessing the safety risk of products and services from design through to post-deployment. Other steps, however, are more novel and potentially technically difficult, such as detecting harmful content or conduct on encrypted services.
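By way of illustration only, the sketch below shows the kind of detect, moderate, report and remove workflow the consultation contemplates. The stage names, keyword rules and removal logic are assumptions for the example, not a statement of what the Act requires or how any provider actually operates.

```python
# Illustrative detect -> moderate -> report -> remove workflow.
# Keyword rules and removal logic are placeholder assumptions only.
from dataclasses import dataclass, field
from typing import Dict, List

HARM_INDICATORS = {"abuse", "threat"}  # placeholder detection rules

@dataclass
class ContentItem:
    item_id: str
    text: str
    flags: List[str] = field(default_factory=list)
    removed: bool = False

def detect(item: ContentItem) -> None:
    # flag items matching the (illustrative) harm indicators
    item.flags = [word for word in HARM_INDICATORS if word in item.text.lower()]

def moderate(item: ContentItem) -> None:
    # a real system would escalate to human review; here any flag triggers removal
    if item.flags:
        item.removed = True

def report(item: ContentItem) -> Dict[str, object]:
    # a record suitable for internal reporting or an information request
    return {"id": item.item_id, "flags": item.flags, "removed": item.removed}

item = ContentItem("post-123", "This post contains a threat.")
detect(item)
moderate(item)
print(report(item))  # {'id': 'post-123', 'flags': ['threat'], 'removed': True}
```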
Turning to enforcement activity, both ASIC and the ACCC have focused upon imposing higher standards in online advertising through court action.
Both regulators are also grappling with online scam activity. ASIC is dealing with an increase in ‘pump and dump’ campaigns coordinated and promoted on social media. It has expanded its supervision of social media and messaging services, including meeting with moderators of Facebook and Reddit groups to discuss how they monitor and moderate content. It has also tried to disrupt campaigns by entering Telegram chats to warn traders that coordinated pump activity is illegal and that it has access to trader identities.
The ACCC is dealing with an increase in scam online advertisements, such as fake celebrity endorsements of products that feature as online advertisements or promotional stories on social media. Though there is legal precedent providing internet intermediaries like digital platforms with protection from liability for misleading advertisements on their platforms, companies operating in this space should take care not to endorse or adopt misleading representations made by users. This could be achieved by having systems in place for receiving and responding to complaints about misleading content or conduct, as well as having appropriate exclusions in terms of service or related documents about potentially misleading statements made by users or other third parties.
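As a rough sketch, one way such a complaints system could be evidenced is a structured log of each complaint about allegedly misleading third-party content and the action taken. The field and function names below are hypothetical and carry no specific legal significance.

```python
# Minimal sketch: logging and resolving complaints about allegedly misleading
# third-party content. Names are hypothetical, not a legal requirement.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class MisleadingContentComplaint:
    complaint_id: str
    content_url: str
    received_at: datetime
    action_taken: str = "pending"           # e.g. "removed", "labelled", "no action"
    resolved_at: Optional[datetime] = None

COMPLAINTS: List[MisleadingContentComplaint] = []

def log_complaint(complaint_id: str, content_url: str) -> MisleadingContentComplaint:
    # record receipt so the platform can show it responds to complaints rather
    # than adopting or endorsing the underlying representation
    complaint = MisleadingContentComplaint(complaint_id, content_url, datetime.now(timezone.utc))
    COMPLAINTS.append(complaint)
    return complaint

def resolve(complaint: MisleadingContentComplaint, action: str) -> None:
    complaint.action_taken = action
    complaint.resolved_at = datetime.now(timezone.utc)

c = log_complaint("C-001", "https://example.com/ad/123")
resolve(c, "removed")
```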
Despite the raft of new legislation, we are unlikely to see a slowdown in efforts to police the internet. In the near term, the Australian government has flagged changes to defamation law, including through a new law unmasking anonymous online trolls, as well as the expansion of the Online Safety Act through adoption of the Basic Online Safety Expectations. We also expect there to be an increasingly blurred line between national security concerns and content moderation practices, particularly regarding encrypted messaging services.
The breadth and significance of this reform agenda means industry must continue to engage with the government and regulators to ensure any proposed reform is proportionate and effective.