The Australian Government is calling for industry consultation to inform the appropriate regulatory and policy responses to mitigate the potential risks of AI and support safe and responsible AI practices in Australia.
On 1 June 2023, the Australian Government released a discussion paper titled Safe and Responsible AI in Australia. The Australian Government recognises the potential for Australia to become a global leader in responsible AI and is seeking submissions on whether further governance mechanisms (including regulatory and voluntary mechanisms) are required to mitigate AI risks and increase public trust and confidence in the use of AI.
The discussion paper builds upon and was released concurrently with the National Science and Technology Council’s Rapid Response Information Report on Generative AI and is open for consultation until 26 July 2023.
Hon. Ed Husic, Minister for Industry and Science, stated, ‘using AI safely and responsibly is a balancing act the whole world is grappling with at the moment…there needs to be appropriate safeguards to ensure the safe and responsible use of AI’. Given the potential risks of AI, it is appropriate for Australia to consider whether further regulation is required. The paper notes that Australia’s governance framework should harmonise with those of its major trading partners to help bolster its economy. Governance measures should aim to ensure appropriate safeguards are in place and give businesses confidence when investing in AI technologies.
As well as seeking industry consultation, the high-level discussion paper proposes a risk management approach that draws heavily on the European Union’s proposed AI Act and the Canadian Directive on Automated Decision-Making. This approach involves an organisation assessing the risk level of the AI application being considered: the higher the risk level, the more onerous the risk management requirements that apply. The paper asserts this approach best caters to context-specific risks, allowing for less onerous obligations where appropriate while permitting AI to be used in high-risk settings where justified.
While there are no real surprises in the discussion paper, the level of activity and attention being given to AI both domestically and globally points to a serious appetite for reform within Australia.
This consultation takes place against a backdrop of recent announcements by other jurisdictions grappling with the same challenge in differing ways. Some jurisdictions continue to rely on voluntary self-regulation and frameworks, while others are pushing for more targeted, risk-based regulation. As summarised in the paper, “[s]ome countries like Singapore favour voluntary approaches to promote responsible AI governance. Others like the EU and Canada are pursuing regulatory approaches with proposed new AI laws. The US has so far relied on voluntary approaches and is consulting on how to ensure AI systems work as claimed, and the UK has released principles for regulators supported by system-wide coordination functions. G7 countries in May 2023 agreed to prioritise collaborations on AI governance, emphasising the importance of forward-looking, risk-based approaches to AI development and deployment.”1
Despite the varying approaches, a common theme in public statements by senior officials across the globe is that the cross-border application of AI as an emerging technology requires international convergence on governance approaches, to lay appropriate guardrails and foster innovation that will unlock the associated economic benefits.
The G7 countries have announced an intention to develop guardrails on AI. In April this year, ministers for digital and technology issues met in Japan and agreed broad recommendations for AI ahead of the May G7 summit. In their communique, they reaffirmed that “AI policies and regulations should be human centric and based on democratic values, including protection of human rights and fundamental freedoms and the protection of privacy and personal data”.2
The EU is pressing ahead with the proposed European law on artificial intelligence (the AI Act), which would be the first general law on AI introduced by a major regulator anywhere. As a result, the AI Act is expected to exhibit the ‘Brussels effect’ seen with the General Data Protection Regulation (GDPR) introduced in 2018. The GDPR harmonised data privacy laws across Europe and went on to set the international benchmark for data privacy for international businesses.
The approach taken in the AI Act recognises that it is impractical to regulate the technology itself, so it focuses on regulating the use of the technology where AI applications and systems present a high or unacceptable risk. The AI Act adopts a risk-based approach, imposing obligations on providers, developers and users based on the level of risk the AI system can generate.
“AI systems with an unacceptable level of risk to people’s safety and intrusive and discriminatory uses would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).”3
High-risk applications, being those that pose a risk of harm to people’s health, safety, fundamental rights or the environment, are subject to specific legal requirements (for example, systems used to influence voters in political campaigns or CV-scanning tools that rank job applicants). The significant number of AI applications that are neither banned as unacceptable nor regulated as high-risk will likely fall outside the regulatory perimeter of the regime and remain largely unregulated.
The AI Act sensibly recognises that compliance with legal principles is often challenging in the case of rapidly evolving technology. As a result, the AI Act will lean heavily on international technical standards to evidence compliance. To support industry, the AI Act seeks to place the burden of evaluation on the regulator, not the individual company.
The current hype around generative AI has resulted in updates to the draft bill since its first iteration. Generative foundation models (the language model engines underpinning chatbots like ChatGPT, Microsoft Bing, and Google Bard) will be required to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would need to assess and mitigate risks; comply with design, information and environmental requirements; and register in the EU database. Transparency obligations will require disclosure that content was generated by AI, design features to prevent the model from generating illegal content, and transparency with regard to training data that is subject to copyright.4
After an extended period of consultation and debate, earlier this month the Internal Market Committee and the Civil Liberties Committee of the European Parliament adopted a draft negotiating mandate for the text by a resounding majority. If the Parliament votes to accept the text at the upcoming plenary vote scheduled for mid-June, this will provide the mandate for subsequent trilogue negotiations with the Council and the Commission. The EU AI Act will become law once both the Council (representing the 27 EU Member States) and the European Parliament agree on a final version of the text, expected by the end of 2023 or the beginning of 2024.5
The U.S. has a well-established tech industry and strong motivation to promote investment and economic activity in AI. Historically, the U.S. has relied on voluntary standards, self-regulation and the patchwork of existing laws and regulations that apply to AI systems. A high-profile Senate hearing in mid-May signalled a potential shift in approach towards considering domestic AI regulation, with lawmakers questioning OpenAI CEO Sam Altman on how AI should be regulated.6 While the U.S. has released the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the Blueprint for an AI Bill of Rights, there is no genuine indication yet of a unified federal law on AI. The U.S. is, however, playing a key role in the harmonisation and convergence of international standards, a vital part of the international AI governance response. At the fourth ministerial meeting of the EU-US Trade and Technology Council on 12 May 2023, representatives resolved to strengthen transatlantic co-operation on emerging technologies (including AI).
China was one of the first countries to introduce specific legislation directed at particular use cases of AI models and systems, including recommendation algorithms and deep synthesis technology (the technology behind deepfakes). We expect China to continue to adopt this targeted approach.
State of play in Australia

Currently, there is no AI-specific legislation in place in Australia. The current consultation process follows attempts by the previous government, including the Department of the Prime Minister and Cabinet consultation in March 2022. At that time, the Digital Technology Taskforce was exploring how regulatory settings and systems could maximise opportunities and facilitate the responsible use of AI and automated decision making. The intended discussion paper identifying possible reforms and actions was never published.

Until any AI-specific law reform is introduced in Australia (if at all), businesses will need to continue to navigate the legal frameworks enforced by various regulators under general laws (e.g. data protection and privacy, the Australian Consumer Law, discrimination law, copyright law etc.), sector-specific laws (e.g. motor vehicle, therapeutic goods etc.), as well as current law reform processes underway that may impact AI applications and their uses, such as the Privacy Act Review Report. In addition to legal requirements, companies should also consider voluntary frameworks and guidance for self-regulation.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2024