Currently, there is no AI-specific legislation or regulation in place in Australia. However, there are voluntary frameworks and principles to assist organisations with self-regulation, and proposals for more specific regulation including mandatory guardrails that would apply to the use of AI in high-risk settings. Various technology-neutral Australian laws may also apply to entities that develop and use AI (for example, relating to data protection and privacy, online safety, anti-discrimination, copyright, consumer rights, and corporate governance), some of which are the subject of reform initiatives aimed at strengthening the application of these laws to AI technology.

Australia’s response to AI includes various voluntary frameworks and guidance for AI self-regulation.

For example, Australia’s eight voluntary AI Ethics Principles were adopted in 2019 and are designed to help:

  • achieve safer, more reliable and fairer outcomes for all Australians;
  • reduce the risk of negative impact on those affected by AI applications; and
  • enable businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.

On 1 June 2023, the Australian Government released a discussion paper entitled Safe and Responsible AI in Australia seeking views on how it could mitigate any potential risks of AI and support safe and responsible AI practices. The Government published its Interim Response on 17 January 2024 acknowledging that existing laws and the current regulatory framework were likely inadequate and setting out the actions it proposed to take as a result. In particular, the Government confirmed that it would consult on new mandatory guardrails for organisations developing and deploying AI systems in high-risk settings, while ensuring the use of AI in low-risk settings could continue to flourish largely unimpeded, and that it would consider further opportunities to strengthen existing laws to address risks and harms from AI.

The Government also acknowledged that the public expects it to be an exemplar of the safe and responsible adoption and use of AI technologies, and subsequently implemented its Policy for the Responsible Use of AI in Government, which applies to non-corporate Commonwealth entities and took effect in September 2024.

Helpful resources include the Australian Government’s website on Artificial Intelligence, which contains links to various key publications including the AI Ethics Principles and the proposed mandatory guardrails, and the website maintained by the Digital Platform Regulators Forum, the latter being a collaboration of four Australian regulators (the ACCC, ACMA, OAIC and the eSafety Commissioner), which contains various working papers relating to AI.

Currently, there is no AI-specific regulation or legislation in place in Australia.

However, in September 2024, the Australian Government released a proposal paper for introducing mandatory guardrails for AI in high-risk settings. The proposal paper focuses on “high risk AI” and sets out ten proposed guardrails around the development and deployment of high-risk AI in Australia and regulatory options for mandating the guardrails (for example, adapting existing legislation or creating new frameworks).

The proposed guardrails would require relevant entities to:

  • establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
  • establish and implement a risk management process to identify and mitigate risks;
  • protect AI systems, and implement data governance measures to manage data quality and provenance;
  • test AI models and systems to evaluate model performance and monitor the system once deployed;
  • enable human control or intervention in an AI system to achieve meaningful human oversight;
  • inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
  • establish processes for people impacted by AI systems to challenge use or outcomes;
  • be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
  • keep and maintain records to allow third parties to assess compliance with guardrails; and
  • undertake conformity assessments to demonstrate and certify compliance with the guardrails.

 

At the same time, the Government released the Voluntary AI Safety Standard (VAISS), which largely replicates the proposed mandatory guardrails. The VAISS is intended to guide Australian organisations in safely and responsibly developing and deploying AI systems in line with existing international standards, and to position them for a smoother transition once the mandatory requirements come into effect.

Read our article on the proposed mandatory guardrails and the voluntary standards here.

The Government is consulting (VAISSv2 Consultation) on the next version of the VAISS to (i) extend the standard to include additional practices and guidance for AI system developers, (ii) provide guidance on labelling and watermarking of AI content and (iii) provide enhanced procurement guidance.

From consumer protection law to online safety, AI continues to stretch existing legal frameworks. See the latest updates below.

In Australia, it is unlawful to discriminate on the basis of a number of protected attributes including age, disability, race, sex, intersex status, gender identity and sexual orientation in certain areas of public life, pursuant to federal legislation including the Age Discrimination Act 2004 (Cth), Disability Discrimination Act 1992 (Cth), Racial Discrimination Act 1975 (Cth), Sex Discrimination Act 1984 (Cth) and Australian Human Rights Commission Act 1986 (Cth).

Under the Fair Work Act 2009 (Cth), if an employee can show that they possess a relevant protected attribute and that adverse action was taken against them, the burden shifts to the employer to prove that unlawful bias or discrimination played no role in the decision to take the action. The risk of discrimination may be heightened where an AI system has been used to assist decision making, because algorithmic bias can arise from the overrepresentation or underrepresentation of certain demographics in the data used to train the system. This reverse onus of proof may be difficult to discharge given the “black box” nature of many advanced AI systems.

In December 2023, the Australian Government established a Copyright and Artificial Intelligence Reference Group to consider the copyright challenges emerging from AI. In September 2024, the Government published a report setting out the findings from the Group’s first information gathering exercise relating to the use of copyright material as inputs into AI systems to help inform the Government’s policy response to AI.

While existing intellectual property laws may apply in AI-related contexts, there are some limits to their application. For example:

  • It is possible that copyright protection may apply to AI-related works under the Copyright Act 1968 (Cth). However, certain criteria must be met, including the requirement for the work to be an “original” creation of a “qualified person”. For AI-generated works, it may be difficult to establish the level of independent intellectual effort required for originality. Further, a ‘qualified person’ under the Copyright Act is an Australian citizen or a person resident in Australia (extended to other countries in some circumstances), a definition an AI system cannot satisfy, which will exclude purely AI-generated content from protection.
  • It is possible that the use of an AI system could give rise to copyright infringement, for example by reproducing a substantial part of a copyright-protected work either as part of the training of the system, or by the AI-generated output.
  • The Full Court of the Federal Court has held only a natural person can be named as an inventor for the purposes of a patent application under the Patents Act 1990 (Cth).

Read our article on recent updates and developments relating to IP in AI here.

The Australian Competition and Consumer Commission (ACCC) – Australia’s national competition, consumer, fair trading and product safety regulator – confirmed in its Corporate Plan for 2024-25 that responding effectively to changes in the digital economy, including the risks posed by AI, remains a central focus.

On 15 October 2024, the Australian Government announced a review of the Australian Consumer Law (ACL) component of the Competition and Consumer Act 2010 (Cth) to explore the application of the ACL in relation to AI-enabled goods and services, which it defines as goods and services which involve a consumer directly interacting with an AI system.

The ACL is a principles-based framework and, while it may apply to AI-enabled goods and services in its current form, the Government identifies certain challenges that may apply in the application of the ACL in such contexts and seeks views on whether the ACL remains suitable to protect consumers who use AI and support the safe and responsible use of AI by businesses.

On 15 November 2024, the Australian Government released a further consultation paper seeking feedback on its proposal to incorporate a general prohibition relating to unfair trading practices into the ACL. The general prohibition would capture conduct that “unreasonably distorts or manipulates, or is likely to unreasonably distort or manipulate, the economic decision-making or behaviour of a consumer, and causes, or is likely to cause, material detriment (financial or otherwise) to the consumer”. The Government notes in its consultation paper that these ‘deceptive patterns’ often occur on a large scale using modern technology (such as AI-enabled tools). 

Both consultation processes are now closed, but the Government is yet to publish its response.

Read our article on the ACL reform initiatives here.

In August 2023, Australia’s eSafety Commissioner published a statement setting out its position in relation to Generative AI, including regulatory challenges and approaches and providing advice to users. More recently, the eSafety Commissioner published a submission which broadly supported the Government’s proposed mandatory guidelines for high-risk AI and highlighted some examples of existing guardrails within eSafety’s regulatory powers such as industry codes and standards registered under the Online Safety Act 2021 (Cth).

The eSafety Commissioner also supported the Criminal Code Amendment (Deepfake Sexual Material) Act 2024 (Cth), which was passed in August 2024 and amends the Criminal Code to impose serious criminal penalties on those who share sexually explicit material without consent. This includes material that is digitally created using artificial intelligence or other technology.

The Australian Communications and Media Authority (ACMA) – the authority responsible for the regulation of broadcasting, radiocommunications and telecommunications in Australia – also published a submission broadly in support of the Government’s proposed mandatory guardrails for high-risk AI, but made clear in its submission that many of the guardrails set out the types of actions the ACMA already expects its regulated entities to take pursuant to legislation including the Broadcasting Services Act 1992 (Cth), the Radiocommunications Act 1992 (Cth), the Telecommunications Act 1997 (Cth) and the Spam Act 2003 (Cth).

The Australian Securities and Investments Commission – Australia's integrated corporate, markets, financial services and consumer credit regulator – has emphasised, including by way of a Report published in October 2024 and comments made by ASIC Chair Joe Longo, that many of the existing obligations that apply to the entities that it regulates are technology neutral such that entities need to ensure their use of AI does not breach any of these provisions. This includes, for example:

  • Obligations on Australian financial services licensees under the Corporations Act 2001 (Cth) and on Australian credit licensees pursuant to the National Consumer Credit Protection Act 2009 (Cth), including the obligation to provide services efficiently, honestly and fairly and to have adequate systems in place to ensure compliance.
  • Directors’ duties under the Corporations Act 2001 (Cth), including the duty to act in good faith and with a reasonable degree of care and diligence.
  • The obligation not to engage in misleading or deceptive conduct or make false or misleading representations, as prohibited by the Corporations Act 2001 (Cth) and the Australian Securities and Investments Commission Act 2001 (Cth), which is relevant where an entity engages in “AI washing” (for example, by overstating its use of AI).

Court protocols and judicial guidelines have been released in various Australian jurisdictions providing guidance on the responsible use of generative AI in court and tribunal proceedings. While the guidelines vary by jurisdiction, they all highlight risks such as inaccuracies resulting from 'hallucinations' produced by AI systems and data privacy concerns, and warn against the improper use of such systems in proceedings.

  • The Supreme Court of New South Wales Practice Note, which has been adopted by several other Courts in New South Wales, outlines acceptable uses of generative AI by litigants and judges and prohibits the use of generative AI in certain circumstances including in generating the content of affidavits, witness statements and character references.
  • The Guidelines published by the Supreme Court of Victoria, and adopted by the County Court, set out various principles for the use of AI tools by litigants encouraging transparency where AI tools are used and make clear the circumstances in which such tools may be used by the judiciary.
  • The Guidelines published by the Queensland Courts were developed to assist non-lawyers who represent themselves or others in court and tribunal proceedings and encourage users to be aware of ethical issues associated with the use of generative AI tools.
  • Bar associations and law societies in various Australian jurisdictions have also issued guidance notes.

Explore the latest landmark rulings as AI-related disputes make their way through the courts.

Concluded

Commissioner of Patents v Thaler (2022) 289 FCR 45: In 2019, Dr Thaler applied for a patent naming an AI system as the inventor, stating in the application that “[t]he invention was autonomously generated by an artificial intelligence”. The Full Federal Court held that only a natural person can be named as an inventor for the purposes of a patent application under the Patents Act 1990 (Cth). While Dr Thaler subsequently applied for special leave from the High Court to appeal this decision, the application was denied.

Re Accenture Global Solutions Ltd (2022) 175 IPR 266: Accenture Global Solutions submitted a patent application for an alleged invention relating to the application of AI to automate incident management within an organisation, based on requirements as to priority of completion and availability of personnel. On examination under the Patents Act 1990 (Cth), it was determined that the claims of the application did not define a manner of manufacture, but rather a computer-implemented business scheme or plan, which is unpatentable. As a result, the application was refused.

Australian Competition and Consumer Commission v Trivago N.V. (2020) 142 ACSR 338; Australian Competition and Consumer Commission v Trivago N.V. (No 2) (2022) 159 ACSR 353: Trivago engaged in an advertising campaign conveying that its website, which aggregated deals offered by other hotel booking websites, would identify the cheapest rates available for a hotel room. However, the AI-based algorithm that Trivago used to display a ‘top position offer’ was biased towards the hotel booking website that paid Trivago the highest fee and often did not display the cheapest rate. The Federal Court found this conduct to have contravened ss 29(1)(i) and 34 of the ACL and ordered Trivago to pay penalties totalling $44.7 million.

Clearview AI Inc v Australian Information Commissioner [2023] AATA 1069: Clearview AI provides a facial recognition service to law enforcement agencies, designed to assist them to identify and locate victims and suspects in criminal investigations. To provide the service, Clearview AI collects images of individuals’ faces from publicly available sources. In 2021, the OAIC determined that Clearview AI had breached several of the Australian Privacy Principles set out in Schedule 1 of the Privacy Act 1988 (Cth). Clearview AI sought to challenge the OAIC’s determination in the Administrative Appeals Tribunal (AAT) on various grounds, including that it was not bound by the Privacy Act because it is a foreign corporation without an ‘Australian link’. The AAT held that Clearview AI carried on business in Australia such that the Privacy Act did apply to it, and also held that Clearview AI had breached Australian Privacy Principles 1.2 and 3.3 on the basis that it had collected sensitive information about individuals without their consent and failed to take reasonable steps to comply with the Australian Privacy Principles.


Key contacts

Julian Lincoln – Partner, Head of TMT & Digital Australia, Melbourne

Kwok Tang – Partner, Sydney

Katherine Gregor – Partner, Melbourne

Peter Jones – Partner, Sydney

Aaron White – Partner, Head of TMT Asia, Brisbane

Nataly Adams – Senior Associate, Sydney

Camille Tewari – Senior Associate, Melbourne

Alex Lundie – Senior Associate, Melbourne
