AI has the potential to transform our lives and our economies. The rapid uptake of AI by businesses is creating enormous opportunities to improve productivity, increase efficiency and unlock value for clients and consumers. The benefits are already being seen, from the power and convenience of ChatGPT and other large language models through to increased efficiency and automation across a range of traditional industries, including financial services, manufacturing, healthcare and education.

However, as with any new technology, new risks are constantly emerging that will require careful assessment and continuous management. These risks cover the full scope of operational, commercial, and regulatory aspects of businesses and have the potential to create significant losses and liabilities. As it always has, insurance will continue to play a crucial role in mitigating the financial impacts of these exposures.

The emerging AI risk landscape

The Oxford Dictionary defines AI as “the capacity of computers or other machines to exhibit or simulate intelligent behaviour” and “software used to perform tasks or produce output previously thought to require human intelligence.” In essence, it is a form of technology that allows computers and machines to simulate human intelligence.

At its logical extreme, AI may one day be able to learn and perform any task that previously required human intelligence. AI technology is, perhaps thankfully, not yet at a stage of sophistication where it can simulate all, or even most, forms of human intelligence, but it is advancing at a rapid pace.

AI is already widely used in the economy and is being adopted by businesses at an astonishing pace. The speed of AI’s development has left governments and regulators rushing to keep up. The result is that, until relatively recently, there was very little, if any, specific regulation of AI in the major jurisdictions around the world, including Australia.

The EU recently enacted comprehensive AI regulation in the form of the Artificial Intelligence Act, which entered into force in August 2024 (EU AI Act). The EU AI Act classifies AI systems according to the following categories of risk:

  • unacceptable risk: AI systems which pose a clear threat to people are prohibited. Examples include cognitive behavioural manipulation of people or specific vulnerable groups (eg voice-activated toys that encourage dangerous behaviour in children), social scoring systems (eg classifying people by reference to behaviour) and biometric identification of people, including facial recognition (except for limited law enforcement purposes);
  • high-risk: AI systems that negatively affect safety or fundamental rights are considered high risk and subject to strict obligations before they can go to market;
  • limited risk: AI systems that make it possible to generate or manipulate images, sound, or videos (including ‘deepfakes’) are subject to transparency requirements so that individuals are aware when they have been used; and
  • minimal risk: AI systems that are already widely in use, including systems used in video games, spam filters and inventory management. These systems are not regulated, but a voluntary code has been suggested.

General purpose AI models (such as ChatGPT) are subject to transparency requirements: they must provide technical documentation and instructions for use, comply with the EU’s Copyright Directive, and publish a summary of the content used for training. If a general purpose AI model exceeds certain computing power thresholds specified by the EU, the provider of the model must notify the EU and demonstrate that the model does not present systemic risks.

In 2024, the Australian Government conducted a consultation process on a “Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings” which set out three proposed regulatory approaches to address the risks of AI:

  • adopting AI guardrails within existing regulatory frameworks as needed;
  • introducing new framework legislation to adapt existing regulatory frameworks; or
  • introducing a new cross-economy AI-specific law (for example, an Australian AI Act).

The consultation closed in October 2024 and the government is presently considering the responses to inform its next steps on AI regulation.

The government has in the meantime released a new Voluntary AI Safety Standard for use by Australian businesses. In step with similar initiatives in other jurisdictions, including the EU, Japan, Singapore and the US, the intention is that the standard will be updated over time to reflect changes in best practice.

On 29 October 2024, ASIC also released a report entitled “Beware the gap: Governance arrangements in the face of AI innovation” urging financial services and credit licensees to ensure their governance practices keep pace with their accelerating adoption of AI. AI risk is therefore likely to be a key focus area for regulators over the short to medium term.

New regulation has come about because AI can be unreliable, and even potentially harmful, if it is not properly supervised. For example, AI is well known to be susceptible to several limitations and issues, including:

  • Model drift: When an AI model’s performance degrades over time due to changes in data distribution, the operating environment, or even the model’s goals or objectives, the model requires ongoing monitoring and periodic updating or retraining (a simple drift check is sketched below);
  • Hallucinations: AI models can generate false or inaccurate information in response to human prompts. For example, Air Canada was recently ordered to pay damages after its chatbot gave incorrect information about bereavement fares, leading to a customer’s financial loss. In another case, a lawyer submitted a legal brief citing fictitious court cases that had been fabricated by ChatGPT;
  • Discrimination: When an AI model draws connections between data points based on its algorithms, this may result in bias or discrimination against individuals or groups;
  • Garbage in/garbage out: When an AI system is trained on poor quality input data, the output will inevitably be of equivalently poor quality. AI is inherently susceptible to human error at the data input stage and in all aspects of its subsequent use; and
  • Cyber crime: AI is also susceptible to exploitation by malicious actors. It has been used to increase the sophistication of phishing attacks, making phishing emails more believable by improving their wording and grammar. AI has also been used to generate ‘deepfakes’ (audio or video content that mimics a real person) in order to extract personal and financial details.

These are just a selection of the emerging issues arising in the use of AI, which highlight how things can go wrong even where there is human involvement in the process.
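
To illustrate the model drift issue noted above, the following is a minimal, hypothetical sketch of a drift check in Python. It compares the distribution of the data a deployed model is currently receiving against the data it was trained on, and flags when the two have diverged enough to warrant review or retraining. The two-sample Kolmogorov–Smirnov test and the 0.05 threshold used here are illustrative assumptions, not a prescribed standard.

```python
# Minimal, hypothetical sketch of a data-drift check for a deployed AI model.
# The KS test and the 0.05 p-value threshold are illustrative choices only.
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(training_values: np.ndarray,
                        live_values: np.ndarray,
                        p_threshold: float = 0.05) -> bool:
    """Flag drift when live data no longer resembles the training data.

    Uses a two-sample Kolmogorov-Smirnov test: a small p-value means the
    two samples are unlikely to come from the same distribution.
    """
    statistic, p_value = ks_2samp(training_values, live_values)
    return p_value < p_threshold

# Example: training data vs. live data whose mean has shifted over time.
rng = np.random.default_rng(seed=42)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # what the model learned from
live = rng.normal(loc=0.6, scale=1.0, size=5_000)   # what it now sees in production

if feature_has_drifted(train, live):
    print("Drift detected: schedule review/retraining of the model.")
else:
    print("No significant drift detected.")
```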

AI can give rise to a range of liabilities for businesses. In this section, we discuss the main types of third-party claims that businesses may face arising from the use of AI:

  • IP Infringement: central to almost any AI system is a large mass of data on which the system is trained. Although referred to as “data”, the training materials are frequently themselves original works in which copyright can subsist. The use of GenAI systems therefore has the potential to result in copyright infringement by users. There is a complex statutory regime governing copyright in Australia, including defences of “fair dealing” for certain limited purposes, against which the use of GenAI needs to be considered to understand infringement risk. Misuse of confidential information through the use of GenAI is also a relevant issue.

    Emma Iles, an HSF partner specialising in intellectual property disputes, observes: “this is a new and complex area for Australian companies. While the potential risks of copyright infringement and mis-use of confidential information are real, there are a number of steps companies can take to reduce risk, including ensuring contractual arrangements with AI service providers are negotiated with protections in place in case things go wrong, implementing policies and training directed to minimising key risks and using ‘closed’ system AI products (to protect confidential information).”
  • Customer claims: the use of AI in the provision of services (whether directly or even in the background) has the potential to result in liabilities to customers or other third parties if they suffer loss or damage as a result. This could give rise to liability for negligence or breach of contract if services are not provided with reasonable care, or if products that use AI technology do not perform as represented.
  • Discrimination: the use of AI has the potential to result in unintended and unpredictable bias or profiling. Care therefore needs to be taken when AI is used in facial recognition technology, in the employment space, or in determining eligibility for financial products such as mortgages.

  • Regulatory action: regulators such as ASIC and the ACCC are increasingly focused on taking action to avoid harm resulting from AI. Whilst new codes and AI-specific legislation are under consideration, there are a number of existing laws relating to misleading and deceptive conduct and directors’ duties that could be used by regulators to seek civil penalties for AI misconduct in appropriate cases. For example, ‘AI washing’ (ie where a company claims a level of AI-enabled capability or service that it does not in fact have) may be one of the new frontiers of regulatory action, much as greenwashing has been in recent climate disclosure cases.

    Christine Wong, a partner at HSF specialising in regulatory investigations and enforcement, observes that: “statements about the use of AI can potentially be misleading in at least two ways: by falsely asserting capabilities that the relevant AI does not in fact have and/or by omitting to disclose key risks associated with the use of AI and how data about individuals is being used. Representations about AI can be complex, particularly where this is an area undergoing rapid technological advancement and there is currently a lack of standardised language and definitions concerning the use of AI.”

    Tania Gray, an HSF partner specialising in regulatory disputes, also notes: “there are a multitude of existing causes of action that might apply to bad outcomes arising from the use of AI. Regulators will consider the range of tools in their arsenal to pursue action for conduct they consider worth pursuing.”

  • Class Actions: the Australian class action regime continues to be an avenue that plaintiff law firms and funders use to bring proceedings against both listed and unlisted companies involving allegations of loss to shareholders and consumers. To the extent a company faces a regulatory investigation, or suffers a substantial share price decline following the disclosure of adverse information to the market arising from an AI event, the possibility of a class action continues to loom large.

    Melissa Gladstone, a partner at HSF specialising in class actions, observes: “class action risk remains a key issue in corporate Australia. The rise of AI, which is rapidly advancing ahead of the pace of regulation, presents yet another category of emerging risk for boards and directors to grapple with.”

Whilst third party liability risk presents some of the most obvious and significant potential liabilities facing companies that use AI, there are also a number of ‘first party’ issues that can cause loss to companies directly. For example:

  • System damage: as AI becomes increasingly embedded in IT systems, it is possible that malfunctions or programming errors could result in outages that cause system damage and business interruption loss. The recent CrowdStrike outage that caused global system failures is an example of the type of issue that can arise, and also highlights the risks of relying on critical third party service providers. A logistics company that relies on AI to optimise its delivery routes and storage capacities could likewise suffer first party loss if the system does not perform.
  • Crime: as noted above, threat actors are increasingly using AI to improve the sophistication of phishing scams and to enhance their ability to breach data security protections. The rise of AI arguably increases the risk of theft of company assets (both financial and data).
  • Physical damage: given the extent of AI-powered automation already in use today, it is increasingly likely that AI may cause physical damage: for example, an autonomous vehicle colliding with other property, or an AI-powered thermostat malfunctioning and causing a fire in a factory.

The role of insurance in mitigating AI risk

Insurance has a key role to play in helping companies mitigate the financial impacts of these rising risks by spreading losses across the global insurance market. There are a number of policies, both traditional and emerging, that may respond to AI-related losses.

Affirmative AI cover

We are starting to see the early development of new, affirmative AI policies.

  • one insurer recently launched a product for users of AI, which is said to cover losses where an AI model doesn’t deliver. So, if a bank replaces the property valuers it uses for loan assessments with an AI model, and the AI makes a mistake that a human valuer would not have made, the policy may engage;
  • a start-up insurer has launched a product providing a warranty that AI models will work the way their sellers promise; and
  • press reports in March 2024 announced that Coalition, a cyber insurer, had added an AI endorsement to its cyber insurance policies.

While currently nascent, the affirmative cover market is likely to expand, presenting opportunities for businesses that understand their risks and how these products might be suitable for them.

Traditional policies and ‘silent’ AI cover

Even where AI risks are not affirmatively covered, they may be 'silently' covered under traditional policy lines that do not exclude them.

Businesses should review their insurance arrangements to consider coverage for potential harm from AI use or misuse. Key policy lines to consider include:

  • Professional indemnity ("PI") insurance will be relevant in the event of customer claims for services-related liabilities. Exposures may arise from AI-related services, from the use of AI in the provision of services, or in the event of regulatory action. Liabilities, defence costs, regulatory investigation costs and possible fines might be covered under PI policies.
  • D&O insurance (Side A/B) may protect directors or officers facing regulatory action for a failure to manage AI risks appropriately. Separately, Side C cover (if purchased) may be relevant in the event of the company facing a securities class action arising from the use (or alleged misuse) of AI.
  • Product liability insurance may cover compensation and costs if a consumer suffers damage from a product powered by AI, such as a smart home device.
  • Cyber policies may respond to security breaches or system failures and other privacy breaches relating to AI. They may also potentially respond to certain first party losses, including ransoms paid to malicious threat actors (a Crime Policy may also be relevant in the event of a ransom or conduct involving financial theft).
  • Employment Practices Liability insurance may cover an employer for compensation claims for discrimination or unfair treatment resulting from AI systems.
  • Property Damage and Business Interruption insurance will be relevant in the event that AI causes property damage and consequential business interruption.

Whether these policies may be engaged in the event of AI-related losses will of course depend on a number of factors. We foresee that the following issues may arise:

  • Blurred lines: liability policies typically require a causal link between any claim against the company (or insured person) and the perils insured by the policy (eg acts/omissions in the performance of professional services in PI policies or wrongful acts by persons in their capacity as directors/officers in a D&O context). In this new frontier of autonomous AI decision making, it is possible that real questions may emerge as to which party has committed an act/error and whose conduct has given rise to the claim, and therefore which policy is the responsive one: are the acts attributable to humans or the AI itself? This may sound like science fiction, but policyholders should think carefully about how they are using AI and how any liabilities associated with AI translate to their policies.
  • AI exclusions: while we have not yet seen specific AI exclusions in any traditional policy lines, insurers will likely take a view on whether to price in or exclude AI risk as these new risks develop. As we have already seen with cyber exclusions, policyholders should keep a close eye on AI exclusions being adopted in the market (either as standalone clauses or within existing cyber exclusions).

Looking ahead

The future of AI presents significant opportunities and notable risks. On the opportunity side, AI has the potential to revolutionise industries by enhancing productivity, accelerating scientific research and improving decision-making processes. 

However, these advancements come with risks. AI systems can breach copyright, discriminate against people and give rise to multifarious risks of harm to the public, on which governments and regulators are increasingly focused.

As always, human oversight and prudent risk mitigation remain front and centre in this new world of AI. The insurance market also has a critical role to play in this area, and we will be watching closely to ensure corporate policyholders remain protected.
 

