Currently, there is no AI-specific legislation or regulation in place in Australia. However, there are voluntary frameworks and principles to assist organisations with self-regulation, and proposals for more specific regulation, including mandatory guardrails that would apply to the use of AI in high-risk settings. Various technology-neutral Australian laws may also apply to entities that develop and use AI (for example, laws relating to data protection and privacy, online safety, anti-discrimination, copyright, consumer rights, and corporate governance), some of which are the subject of reform initiatives aimed at strengthening their application to AI technology.
Australia’s response to AI includes various voluntary frameworks and guidance for AI self-regulation.
For example, Australia’s eight voluntary AI Ethics Principles were adopted in 2019 and are designed to help:
On 1 June 2023, the Australian Government released a discussion paper entitled Safe and Responsible AI in Australia seeking views on how it could mitigate any potential risks of AI and support safe and responsible AI practices. The Government published its Interim Response on 17 January 2024 acknowledging that existing laws and the current regulatory framework were likely inadequate and setting out the actions it proposed to take as a result. In particular, the Government confirmed that it would consult on new mandatory guardrails for organisations developing and deploying AI systems in high-risk settings, while ensuring the use of AI in low-risk settings could continue to flourish largely unimpeded, and that it would consider further opportunities to strengthen existing laws to address risks and harms from AI.
The Government also acknowledged that the public expects the Government to be an exemplar of safe and responsible adoption and use of AI technologies. It subsequently implemented its Policy for the Responsible Use of AI in Government, which applies to non-corporate Commonwealth entities and took effect in September 2024.
Helpful resources include the Australian Government’s website on Artificial Intelligence, which contains links to key publications including the AI Ethics Principles and the proposed mandatory guardrails, and the website maintained by the Digital Platform Regulators Forum, a collaboration of four Australian regulators (the ACCC, ACMA, OAIC and the eSafety Commissioner), which contains various working papers relating to AI.
Currently, there is no AI-specific regulation or legislation in place in Australia.
However, in September 2024, the Australian Government released a proposal paper for introducing mandatory guardrails for AI in high-risk settings. The proposal paper focuses on “high risk AI” and sets out ten proposed guardrails around the development and deployment of high-risk AI in Australia and regulatory options for mandating the guardrails (for example, adapting existing legislation or creating new frameworks).
The proposed guardrails would require relevant entities to:
At the same time, the Government released the Voluntary AI Safety Standards (VAISS), which largely replicate the proposed mandatory guardrails. The VAISS are intended to guide Australian organisations on how to safely and responsibly develop and deploy AI systems in line with existing international standards, and to position those organisations for a smoother transition once the mandatory requirements come into effect.
The Government is consulting (VAISSv2 Consultation) on the next version of the VAISS to (i) extend the standard to include additional practices and guidance for AI system developers, (ii) provide guidance on labelling and watermarking of AI content and (iii) provide enhanced procurement guidance.
From consumer protection law to online safety, AI continues to stretch existing legal frameworks. See the latest updates below.
Explore the latest landmark rulings as AI-related disputes make their way through the courts.
Commissioner of Patents v Thaler (2022) 289 FCR 45: In 2019, Dr Thaler applied for a patent naming an AI system as the inventor, stating in the application that “[t]he invention was autonomously generated by an artificial intelligence”. The Full Federal Court held that only a natural person can be named as an inventor for the purposes of a patent application under the Patents Act 1990 (Cth). While Dr Thaler subsequently applied for special leave from the High Court to appeal this decision, the application was denied.
Re Accenture Global Solutions Ltd (2022) 175 IPR 266: Accenture Global Solutions submitted a patent application for an alleged invention relating to the application of AI to automate incident management within an organisation, based on requirements as to priority of completion and availability of personnel. Upon examination under the Patents Act 1990 (Cth), it was determined that the claims of the application did not define a manner of manufacture but rather a computer-implemented business scheme or plan, which is unpatentable. As a result, the application was refused.
Australian Competition and Consumer Commission v Trivago N.V. (2020) 142 ACSR 338; Australian Competition and Consumer Commission v Trivago N.V. (No 2) (2022) 159 ACSR 353: Trivago engaged in an advertising campaign conveying that its website, which aggregated deals offered by other hotel booking websites, would identify the cheapest rates available for a hotel room. However, the AI-based algorithm that Trivago used to display a ‘top position offer’ was biased towards the hotel booking website that paid Trivago the highest fee and often did not display the cheapest rate. The Federal Court found this conduct to have contravened ss 29(1)(i) and 34 of the Australian Consumer Law (ACL) and ordered Trivago to pay penalties totalling $44.7 million.
Clearview AI Inc v Australian Information Commissioner [2023] AATA 1069: Clearview AI provides a facial recognition service to law enforcement agencies designed to assist them to identify and locate victims and suspects in criminal investigations. To provide the service, Clearview AI collects images of individuals’ faces from publicly available sources. In 2021, the OAIC determined that Clearview AI had breached several of the Australian Privacy Principles set out in Schedule 1 of the Privacy Act 1988 (Cth). Clearview AI sought to challenge the OAIC’s determination in the Administrative Appeals Tribunal (AAT) on various grounds, including on the basis that it was not bound by the Privacy Act because it is a foreign corporation without an ‘Australian link’. The AAT held that Clearview AI carried on business in Australia, such that the Privacy Act did apply to it. The AAT also held that Clearview AI had breached Australian Privacy Principles 1.2 and 3.3 because it had collected sensitive information about individuals without their consent and had failed to take reasonable steps to comply with the Australian Privacy Principles.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your particular circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2025