

On 17 January 2024, the Australian Government published its much-anticipated interim response to the Safe and Responsible AI consultation held in 2023. The interim response outlines how the Government intends to ensure that AI is designed, developed and deployed safely and responsibly in this market, namely:

  1. the Government will consult on the case for a new regulatory framework for "high-risk" AI applications, including safety guardrails;
     
  2. the National AI Centre will work with industry to develop a voluntary AI Safety Standard; and 
     
  3. the Government will work with industry to consider the merits of a voluntary labelling and watermarking scheme for AI-generated materials,

as well as considering the adequacy of existing technology-neutral laws in relation to AI-specific risks as part of parallel law reform reviews.

Although we do not yet know the final outcomes, the general approach of combining targeted obligations for high-risk AI with lighter-touch voluntary “soft law” for less risky uses strikes a good balance: it encourages the uptake of AI in Australia whilst protecting consumers.

In the international context, the Government recognises the need to consider specific obligations for the development, deployment and use of high-powered general-purpose AI, referred to as ‘frontier AI’, and will seek to align with, and influence, international developments in this area. Whilst we recognise the benefit of coherence with the laws of other jurisdictions in regulating AI, Australian economic and policy objectives may differ from those of other major AI jurisdictions, for example in the balance struck between developers and users of AI.

The headline proposal in the interim response is a consultation on options for introducing regulations establishing safety guardrails for high-risk AI use cases. The rationale is that in high-risk use cases it can be difficult or impossible to reverse any harm caused by the use of AI. As such, safeguards will apply to the development, deployment and ongoing use of AI in high-risk areas to identify, measure and mitigate risks to the community. The proposed guardrails will focus on testing, transparency and accountability, and the Government has set out some initial proposals (see below). The Government also proposes to establish a temporary expert advisory body on AI to assess options for AI guardrails.

The practical impact of this proposal remains unclear given the current uncertainty over the scope of the definition of high-risk AI uses. In the earlier consultation paper, two examples of high-risk uses were given: robots used in surgery and autonomous vehicles. The scope of high-risk AI will be considered during the Government’s consultation, together with the development of guidelines setting out when AI use is high risk. Accordingly, businesses contemplating the use of AI in areas of potential high risk should actively consider participating in the upcoming consultation that will shape these future regulations.

These proposals come on top of, and are intended to dovetail AI considerations into, existing regulatory reform work the Government is undertaking across a number of other areas, in particular reforms to the Privacy Act, a review of the Online Safety Act 2021 (including new laws to address misinformation), and efforts to increase the transparency and integrity of automated decision-making in the wake of the Robodebt Royal Commission report. The interim response does not provide much detail on the progress of these reviews, although we anticipate that amendments will take account of feedback provided during last year’s consultation. It is also notable that updates to existing laws do not expressly form part of the temporary expert advisory body’s remit.

The Government’s initial proposals for the safety guardrails, grouped under testing, transparency and accountability, are set out below.

Testing

  • internal and external testing of AI systems before and after release, including, for example, by independent experts
  • sharing information on best practices for safety
  • ongoing auditing and performance monitoring of AI systems
  • cyber security and reporting of security-related vulnerabilities in AI systems.

Transparency

  • users/customers knowing when an AI system is used and/or that content is AI generated, including labelling or watermarking
  • public reporting on AI system limitations, capabilities, and areas of appropriate and inappropriate use
  • public reporting on data a model is trained on and sharing information on data processing and testing.

Accountability

  • having designated roles with responsibility for AI safety 
  • requiring training for developers and deployers of AI products in certain settings.

Key contacts

Julian Lincoln, Partner, Head of TMT & Digital Australia, Melbourne

Peter Jones, Partner, Sydney

Susannah Wilkinson, Director, Generative AI (Digital Change), Brisbane

Alex Lundie, Senior Associate, Melbourne

Katherine Gregor, Partner, Melbourne

Kwok Tang, Partner, Sydney

Raymond Sun, Solicitor, Sydney
