On 5 September 2024 the Australian Federal Government (the Government) released two key documents as part of its broader agenda to promote safe and responsible use of artificial intelligence (AI) in Australia:
- a proposals paper, Introducing mandatory guardrails for AI in high-risk settings (the Proposal); and
- the Voluntary AI Safety Standard (the Voluntary Standards).
The Proposal sets out:
- a proposed principles-based definition of “high-risk AI”;
- ten proposed mandatory guardrails for organisations developing or deploying high-risk AI; and
- three options for implementing those guardrails,
each of which is discussed further below.
The Proposal is now open to public consultation, closing 5pm AEST on Friday 4 October 2024.
The Voluntary Standards provide practical guidance to all Australian organisations on how to safely and responsibly use and innovate with AI. These standards may be used immediately and are intended to “give businesses certainty ahead of implementing mandatory guardrails” (Minister’s press release).
These documents follow the Government’s interim response, published in January 2024, to the Safe and Responsible AI in Australia discussion paper (the Initial Consultation), which called for the development of a regulatory environment that builds community trust and promotes AI adoption. Other key recent initiatives to note include the National AI Assurance Framework (which provides a nationally consistent approach to AI assurance across federal, state and territory governments) and the Policy for Responsible Use of AI in Government (which requires federal agencies to appoint accountable officials to implement AI policies and publish transparency statements outlining their approach to AI adoption), though both target the public sector.
Risk-based approach and “high-risk AI”
The Proposal focuses on “high-risk AI” as the subject of the proposed mandatory guardrails.
This approach is based on the Government’s observations (as advised by an expert advisory group established following the Initial Consultation) that “AI has characteristics, as distinct from other types of software programs, that warrant a specific regulatory response” and that the various existing and new risks amplified by AI (e.g. bias and discrimination, misinformation, privacy breaches) call for a “risk-based approach, with a focus on ex ante (preventative) measures”. The Proposal notably draws upon examples seen in the European Union and Canada (Proposal, pg 11-12, 16).
In terms of defining “high-risk AI”, the Proposal suggests two broad categories where the mandatory guardrails would apply:
1. AI systems whose known or foreseeable uses are assessed as high-risk; and
2. general-purpose AI (GPAI) models, given that all of their possible applications and risks cannot be foreseen.
In relation to the first category, the Proposal sets out a principles-based definition as follows:
In designating an AI system as high-risk due to its use, regard must be given to:
- the risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations;
- the risk of adverse impacts to an individual’s physical or mental health or safety;
- the risk of adverse legal effects, defamation or similarly significant effects on an individual;
- the risk of adverse impacts to groups of individuals or collective rights of cultural groups; and
- the risk of adverse impacts to the broader Australian economy, society, environment and rule of law.
The severity and extent of impact of the identified risks will be weighed up to determine if the system is in fact ‘high risk’.
On the second category, the Proposal does not propose separate risk criteria or thresholds for GPAI models; rather, it suggests applying the mandatory guardrails to the development and deployment of all GPAI models (as distinct from a specific application or use of a GPAI model, which may be covered by the first category above), given that GPAI models can be applied in contexts for which they were not originally designed (i.e. unforeseeable risks). That said, the Proposal does acknowledge that “[s]ince most highly capable GPAI models are not currently developed domestically, Australia’s alignment with other international jurisdictions is important to reduce the compliance burden for both industry and government and enables pro-innovation regulatory settings.” (Proposal, pg 29).
Developers and deployers
The Government observes that both AI developers and deployers will need to adhere to the guardrails. The Government notes that responsibility for the guardrails should be assigned based on which parties are most capable of managing the risks at each development stage, considering factors like access to vital information such as training data and the capability to effectively intervene and modify an AI system.
For entities deploying AI procured from a supplier, it is worth noting that the Voluntary Standards (see below) include high-level procurement guidance to assist a deployer in ensuring its suppliers align with those standards.
Proposed Mandatory Guardrails
The Proposal sets out ten guardrails that would require organisations developing or deploying high-risk AI (noting categories 1 and 2 above) to:
1. establish, implement and publish an accountability process, including governance, internal capability and a strategy for regulatory compliance;
2. establish and implement a risk management process to identify and mitigate risks;
3. protect AI systems and implement data governance measures to manage data quality and provenance;
4. test AI models and systems to evaluate performance, and monitor systems once deployed;
5. enable human control or intervention in an AI system to achieve meaningful human oversight;
6. inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
7. establish processes for people impacted by AI systems to challenge use or outcomes;
8. be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
9. keep and maintain records to allow third parties to assess compliance with the guardrails; and
10. undertake conformity assessments to demonstrate and certify compliance with the guardrails.
In addition to explaining each guardrail, the Proposal states that:
Regulatory Options
The Proposal sets out three options for implementing the above mandatory guardrails:
1. adapting existing regulatory frameworks to include the guardrails;
2. introducing new framework legislation, with associated amendments to existing legislation; or
3. introducing a new cross-economy AI-specific Act (for example, an Australian AI Act).
The Proposal discusses the advantages and disadvantages of each option and invites public commentary on them.
Separate to these options, the Proposal also states the Government will “continue to strengthen and clarify existing laws so it is clearer how they apply to AI systems and models” (e.g. privacy, consumer protection, intellectual property, anti-discrimination, competition) (Proposal, pg 43).
Along with the Proposal, the Government also released the Voluntary Standards, which replicate the mandatory guardrails except for the 10th standard: instead of focusing on conformity assessments (as the 10th mandatory guardrail does), the 10th voluntary standard focuses on “stakeholder engagement” to “evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness”.
Further, the Voluntary Standards:
Together, the Proposal and Voluntary Standards signal the Government’s intention to provide regulatory clarity and certainty for those developing AI models and systems, and to start to empower organisations to safely manage their use of this emerging technology.
Although the mandatory guardrails are still under consultation, organisations should strongly consider adopting the Voluntary Standards now to give themselves a head-start in building the internal capability to responsibly manage innovation with AI, a technology that is here to stay. Organisations should also monitor updates to the Voluntary Standards, as these are likely to mirror any amendments made to the mandatory guardrails during the consultation process.
Even if the mandatory guardrails do not come into force, they broadly reflect existing international practices that organisations should be following anyway to ensure the safe and responsible development and deployment of AI (alongside robust data governance, privacy measures and cybersecurity protocols). Adopting such practices may also help build consumer trust and a competitive advantage in the market.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2024