Artificial Intelligence (AI) is rapidly transforming the business landscape, with significant growth projected in the coming years. This presents Australian businesses with both opportunities and potential regulatory and dispute risks.
The pace of technological development has left governments and regulators rushing to keep up. So, while technology regulation continues to evolve to respond in a more focused way to threats arising from the use of technology and AI, we can expect regulators and plaintiff firms to deploy the range of tools already in their arsenal, including bread-and-butter claims like misleading and deceptive conduct, to pursue action over any perceived bad consumer or shareholder outcomes arising from the use of AI. One particular species of regulatory and class action risk that we see on the near horizon is claims for AI washing: exaggerating AI capabilities to attract investment and market interest can lead to liability for misleading conduct under existing laws.
Beyond regulatory and class action risk, breach of confidentiality through the use of AI products is another area that should be closely monitored. You can also explore the disputes risks that can arise in relation to copyright and patent infringement in our IP in AI series.
These cases will raise new challenges: if the AI truly is a black box, how will the Court determine what the AI did and why, and whether anyone within a corporation was, or could have been, aware of the problem that was about to arise?
AI encompasses a range of technologies capable of performing tasks that typically require human intelligence. According to a CSIRO report from May 2023, 68% of Australian businesses have already implemented AI technologies, with another 23% planning to do so within the next year.
AI technologies can be categorised based on their level of human intervention, ranging from assisted, through augmented, to fully autonomous systems.
What AI systems have in common is that they are all fed 'input data' or instructions; in the case of augmented and autonomous AI, the system will then 'learn' by itself from the responses it receives, without being expressly programmed to do so. This gives rise to what is commonly referred to as the black box problem: a machine learning model may deliver an output without revealing how that output was reached, given the complex, high-dimensional patterns used in the model.
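To make the black box problem concrete, the short sketch below (our illustration, not part of the original briefing; the dataset, model and parameters are assumptions chosen for brevity) trains a standard machine learning classifier that 'learns' patterns from input data and delivers a prediction, without exposing any human-readable rule for how that prediction was reached.

```python
# Minimal illustration of the 'black box' problem (an assumed example,
# not from the original briefing): the model learns from input data and
# produces an output, but its reasoning is distributed across thousands
# of learned parameters rather than explicit, inspectable rules.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic 'input data' standing in for a business dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)  # the system 'learns' without being expressly programmed

print(model.predict(X[:1]))  # an output is delivered...
# ...but no human-readable rule explains *why*: the decision emerges
# from 100 trees voting over complex, high-dimensional patterns.
```

This is the practical difficulty a court would face: there is no rule book to inspect, only the statistical behaviour of the trained system.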
Can the use of AI already give rise to liability and class action risk under existing laws? In short, yes.
An example is the Robodebt Scheme, in which an automated debt recovery system miscalculated debts, leading to a class action that resulted in significant compensation for affected individuals. The causes of action that formed the basis of the plaintiffs' claim were negligence and unjust enrichment.
If AI is used for automated decision-making and an alleged or perceived negative outcome occurs, even under the current legal framework, regulatory intervention is possible and, if consumer harm is involved, a class action may follow.
Class action risk might also arise in relation to what is known as AI washing.
AI washing refers to the practice of exaggerating AI capabilities to attract investment and market interest. It is not dissimilar to the greenwashing alleged in recent climate disclosure cases.
This can lead to potential liability for misleading conduct under existing technology-neutral laws, such as the ASIC Act, Corporations Act, and Australian Consumer Law. The black box issue exacerbates these concerns.
We have already seen an uptick in regulatory and class action activity globally regarding false AI claims. The U.S. Securities and Exchange Commission (SEC) has taken the lead, conducting a market sweep in 2023 and 2024 to review investment advisers' use of AI and penalising three firms for misleading AI statements. The SEC plans to increase scrutiny of financial firms’ use of AI in 2025, with this year on track for the largest number of new filings related to such conduct.
In Australia, ASIC's chair, Joe Longo, has confirmed that AI washing is a serious emerging issue, and all Australian companies and their directors should be on notice. Recently, ASIC commenced criminal proceedings against the CEO of Metigy, which falsely claimed to be an AI marketing company.
Beyond AI washing, the use of AI systems may create other misleading statement risks. AI systems can generate incorrect outputs, giving rise to risks of misleading or deceptive conduct. For example, in Canada, an airline chatbot inaccurately explained the airline's bereavement travel policy to a grieving passenger. Air Canada argued that it was not liable for information provided by its chatbot, but a tribunal found in favour of the passenger.
To mitigate these risks, organisations should:
Some AI tools, including content generators like ChatGPT, invite users to input information, documents and prompts into the tool. This can create risks relating to the use of third-party confidential information and to maintaining the quality of confidence in the information of users and their organisations.
If users input confidential information of a third party into these AI tools, this could lead to an actionable breach of existing obligations of confidentiality owed to third parties.
Practically, AI providers may also take a copy of information inputted by users, which carries the potential for that information to be leaked or disclosed to others without the user's knowledge or control. ChatGPT's standard configuration, for example, retains all conversations. If users input their organisation's confidential information into an AI tool that retains inputs, there is a risk that control of that information could be lost. This could cause the information to lose its quality of confidence, and therefore its value. A well-known example involved Samsung employees inputting valuable source code relating to microchips into ChatGPT.
Another significant concern regarding use of confidential information with AI is the potential for sensitive intelligence and valuable intellectual property to become available to threat actors.
To mitigate the risks of misuse of confidential information it will be important to:
The Australian Federal Government has acknowledged that existing laws do not sufficiently protect against AI-related harms, prompting a comprehensive review of regulations governing AI. Currently, there is no specific law regulating AI use, and while various voluntary ethical principles exist, recent consultations aim to address this gap. The focus will likely be on high-risk AI applications, with multiple regulators involved in overseeing these reforms.
As regulatory measures increase, so too does the expectation of heightened litigation risk, particularly in regulatory enforcement. This trend mirrors the rise in cyber class actions, suggesting that as regulatory scrutiny intensifies, private litigation risks will also grow. Key areas of reform include mandatory guidelines for high-risk AI, a review of the Australian Consumer Law concerning AI-enabled products, and proposed changes to the Privacy Act, which may require disclosure of automated decision-making processes. While the specifics of these reforms are still developing, they are anticipated to create significant regulatory and litigation challenges.
While it is still early days, the evolving regulatory regime is likely to lead to heightened litigation risks. In particular, if individuals are given a right to take action under new Privacy Act reforms, we may see a consequent uptick in class action risk. It's essential to adopt a "watch this space" approach moving forward.
Given the expected heightened risks, the question arises: how do organisations insure against those risks?
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2025