
Over a year has passed since OpenAI launched ChatGPT, propelling generative AI into the limelight. A recent Forbes study revealed that nearly all business owners believe ChatGPT will benefit their operations, with more than half already leveraging AI across a wide range of business activities. The use cases for AI are many and varied. To name just a few: banks are using AI to assess mortgage applications; retailers are using AI to manage stocks; professional services firms are using AI to assist with document-heavy processes; and hospitals are using AI to analyse radiology results.

AI has transformational potential. However, as with any powerful new technology, there are risks which must be carefully assessed and managed. Insurance will play a crucial role in risk management, and businesses are well advised to identify their AI risks proactively and to consider whether their existing insurance arrangements are suitable for these emerging risks.

This mini-series on our insurance blog will give you a framework to kick-start that process.

In this first post of the series, we will look at:

  1. How AI is used
  2. The risks of AI
  3. The regulatory landscape
  4. Potential cover under your insurance programme

What is AI?

AI refers to computer systems that can perform tasks typically requiring human intelligence.

Article 3 of the EU AI Act 2024 defines an AI system as: "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

A key aspect of this is that AI systems are fed 'input data' (i.e., training data and instructions) and can then 'learn' and adapt based on the outputs they generate and the feedback they receive, without further human programming.

As the EU AI Act definition makes clear, different types of AI require varying degrees of human intervention. How are you using AI? Is it:

  • Assisted: AI which supports administrative functions but does not make decisions, e.g., AI used in smart TV applications to tailor programme suggestions based on previous engagement.
  • Augmented: AI which provides support to human decision-makers by generating content but stops short of making autonomous decisions, e.g., content generators such as OpenAI’s ChatGPT or Sora (a video generator).
  • Autonomous: AI which is adaptive and deployed to make its own decisions, e.g., by determining credit scores for customers.


1. How is AI used within your business?

Before we turn to the implications of AI use for potential insurance coverage, it is necessary to understand how AI is used within your business. Once that is understood, the risks associated with engaging with AI can be identified, which in turn informs what insurance cover may be responsive. The risks will vary considerably depending on whether your business is an AI developer, deployer or end user.

Who's who in the world of AI?

The UK government's Initial Guidance for Regulators on Implementing the UK's AI Regulatory Principles, published in February 2024, adopts the following terminology:

  • AI developers: Organisations or individuals who design, build, train, adapt or combine AI models and applications.
  • AI deployers: Any individual or organisation that supplies or uses an AI application to provide a product or service to an end user.
  • AI end users: Any intended or actual individual or organisation that uses or consumes an AI-based product or service as it is deployed.

The first step for a business in understanding how its insurance arrangements might respond to AI is to map out how the organisation uses AI, what functions AI performs, what data it is using, the level of autonomy of the AI tools, where there is a human in the loop, whether there is a third-party supplier of the AI and what touchpoints the AI has with customers or other end users.
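
One way to kick-start that mapping exercise is to record each AI use case in a consistent, structured format. The sketch below is a simplified, hypothetical illustration only: the field names, the autonomy categories (mirroring the assisted/augmented/autonomous distinction above) and the example entry are assumptions to be adapted to your own business and governance framework, not a prescribed standard.

    # Hypothetical sketch of a structured AI-use inventory entry.
    # Field names, categories and the example values are illustrative assumptions only.
    from dataclasses import dataclass, field
    from enum import Enum

    class Autonomy(Enum):
        ASSISTED = "assisted"      # supports a task but makes no decisions
        AUGMENTED = "augmented"    # generates content; a human decides
        AUTONOMOUS = "autonomous"  # makes its own decisions

    @dataclass
    class AIUseCase:
        name: str                         # e.g. "claims triage"
        business_function: str            # what the AI does for the business
        data_used: list[str]              # categories of data feeding the tool
        autonomy: Autonomy                # level of autonomy of the tool
        human_in_loop: bool               # is a person reviewing outputs?
        third_party_supplier: str | None  # external vendor, if any
        end_user_touchpoints: list[str] = field(default_factory=list)

    # Example entry for a hypothetical insurer using AI to triage retail claims.
    claims_triage = AIUseCase(
        name="claims triage",
        business_function="prioritise and route incoming retail claims",
        data_used=["claim forms", "policy records"],
        autonomy=Autonomy.AUGMENTED,
        human_in_loop=True,
        third_party_supplier="Acme AI Ltd",  # hypothetical third-party vendor
        end_user_touchpoints=["policyholders"],
    )

However the inventory is recorded, a consistent structure of this kind makes it easier to see where the risks discussed in the next section sit and, ultimately, which policy lines might respond.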

This will be different for every business and every sector. By way of example, within the insurance industry itself, there are a range of ways in which insurers are using AI or might start to use AI. The AI Guide published by the Association of British Insurers in February this year sets out how the ABI envisages AI being used by insurers. Examples include:

  • personalised pricing, where insurers can use AI to offer personalised insurance products tailored to a customer's need and/or risk profile;
  • claims management, where insurers can use AI to manage and expedite claims processing, particularly in relation to small or retail claims; and
  • risk analytics, where insurers can use AI to evaluate risks such as climate change-related risks.


2. The risks of AI

AI is susceptible to several known technical issues. These include:

  • Model drift: this is when an AI model's performance degrades over time due to changes in data distribution, the operating environment, or even the model's goals or objectives, requiring the model to be monitored and, where necessary, updated or retrained (a simplified illustration follows this list).
  • Hallucinations: this is when an AI model generates false or inaccurate information, as in the case of the New York lawyer who inadvertently cited six cases 'made up' by ChatGPT.
  • Discrimination: AI looks for patterns across datasets which means that profiling and bias can arise in an unintended and unpredictable way. AI may make connections between data that are otherwise invisible to humans, which may result in bias against individuals or groups. Amazon's AI recruitment tool, for example, was abandoned in October 2018 when, after having processed data on successful applicants over a 10-year period, it started to penalise CVs including the word "women". Bias can also be introduced if the underlying code is biased (consciously or unconsciously) or the training data is biased due to previous human input or interaction.
  • Garbage in / garbage out: the quality of an AI system's output is only as good as its input data. So if an AI system is trained on poor quality input data, the output will be of equivalently poor quality. As such, businesses need to ensure that AI systems are trained carefully, and monitored and tested appropriately on an ongoing basis.

These technical issues give rise to a risk of poor quality, incorrect or discriminatory outcomes being generated by AI models. If these outcomes were to feed directly into employment decisions, customer services or products, it is possible to see how that could give rise to third-party liability.
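
To illustrate the monitoring point on model drift in very simplified terms, the sketch below shows the kind of basic check a business might run: comparing a model's recent performance against the level at which it was originally validated and flagging it for review and retraining once performance degrades beyond a chosen tolerance. The function name, figures and threshold are hypothetical assumptions, not a recommended standard.

    # Hypothetical, simplified drift check: flag a model for review and retraining
    # when its recent accuracy falls materially below its validated baseline.
    def needs_retraining(baseline_accuracy: float,
                         recent_accuracy: float,
                         tolerance: float = 0.05) -> bool:
        return (baseline_accuracy - recent_accuracy) > tolerance

    # Example: a model validated at 92% accuracy now scores 84% on recent data.
    if needs_retraining(baseline_accuracy=0.92, recent_accuracy=0.84):
        print("Possible model drift - escalate for review and retraining")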

Beyond these technical risks, like any other technology, AI is also susceptible to human error and misuse. One theory posited by David Autor (a professor of economics at MIT)1 is that, over the longer term, AI has the potential to enable workers with more basic levels of training to carry out work that would otherwise be done by highly-skilled workers or professionals. If he is right, employment costs could be reduced, but it could also mean that tasks previously done by highly-skilled or experienced workers come to be carried out by less experienced ones. That could give rise to an increased risk of errors or omissions if proper governance is not put in place.

Likewise, AI is susceptible to use by malicious actors. It has been used to increase the sophistication of phishing attacks, making phishing emails more believable by improving their wording and grammar. AI has also been used to generate 'deepfakes', audio or video content which mimics a real person in order to extract personal and financial details. Advertising group WPP was recently targeted by a deepfake scam which impersonated the voice of its CEO, Mark Read, using publicly available YouTube footage.

There are, of course, also a range of legal and regulatory risks associated with the use of AI such as:

  • Data protection and privacy: When using AI to process or create outputs involving personal data, businesses must ensure compliance with applicable data privacy legislation. Training generative AI systems usually requires large amounts of data. If not properly secured, this could lead to potential breaches of sensitive information.
  • Human rights and discrimination: The use of AI can result in unintended and unpredictable bias or profiling. As such, care needs to be taken when AI is used in facial recognition technology, the employment space or in determining eligibility, e.g., for financial products such as mortgages.
  • Director or senior manager liability: Those in the company with (ultimate) responsibility for AI strategy, risk appetite and decision-making may face liability to the company or shareholders if shareholder value is eroded in a way connected with the approach taken within their sphere of responsibility.
  • Services liability: The use of AI in the provision of services (whether directly or even possibly in the background) has the potential to result in liabilities to customers or other third parties if they suffer loss or damage as a result.
  • Securities liability: Companies will need to ensure representations to the market about the business's use of AI are accurate, otherwise they (and possibly their directors) may face securities claims.
  • Product liability: AI developers will need to ensure they have complied with relevant safety and consumer standards when releasing AI-powered products. Legislation in this area is evolving, for example, with the Automated Vehicles Act 2024 recently enacted.
  • Intellectual property: Developers and deployers alike will need to ensure they comply with IP, copyright and trademark law when using AI. Stability AI, Microsoft and OpenAI have all been sued for IP or copyright infringement.
  • Contractual liability: As the law around liability evolves, parties may seek to assign liability between themselves under the terms of a contract. The case of Tyndaris v MMWWVWM Limited illustrates how a claim for breach of contract may arise. Tyndaris agreed to manage an investment account for VWM using an autonomous AI-powered system to make investment decisions. The use of AI was expressly agreed between the parties; the intention was to take the human emotion out of the investment decisions. However, the AI algorithm made investment decisions which led to major trading losses. Tyndaris sued VWM for $3 million of unpaid fees and VWM counterclaimed to recover its trading losses of around $20 million, relying on alleged misrepresentations by Tyndaris regarding the capabilities of the AI. The matter settled so liability was not determined by the court. However, the case illustrates how a breach of contract dispute and potential liability might arise.

How these risks manifest will depend on how your business uses and interacts with AI. For example, if there is a technical failure of an AI tool, an AI supplier might face product liability claims from an AI enterprise user who has purchased the tool. The AI enterprise user may in turn face claims from customers if the technical failure gives rise to deficiencies in the provision of customer services.

1. Speaking on Bloomberg's Odd Lots podcast, which you can listen to or read here.


3. The regulatory landscape

On 21 May 2024, the EU Council approved the Artificial Intelligence Act. This is the first comprehensive legal framework that has been adopted for AI. It classifies AI systems into four risk-based categories, with 'unacceptable risk' uses prohibited and 'high risk' systems subject to strict obligations. Most obligations apply to developers of high-risk AI systems, but some apply to 'users'. These rules, which carry penalties up to 7% of global annual turnover for breaches, apply to any company operating in the EU. Member States are now working on implementing the AI Act.

By contrast, the UK has taken a lighter-touch approach through a sector-led model in which existing regulators will be responsible for regulating the use of AI in their respective sectors, in line with five common principles set out in a government White Paper in March 2023: A pro-innovation approach to AI regulation. The principles are:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Sector-specific regulators will be supported by a central function that is being established by the Department for Science, Innovation and Technology ("DSIT"), which will provide expert input. DSIT recently issued initial guidance for regulators on Implementing the UK's AI Regulatory Principles. Further detailed guidance is set to be issued by DSIT by the summer. However, it remains to be seen whether the UK government's approach will change following the upcoming UK elections.

Looking further afield, the global landscape is complex and varied. Policymakers in some jurisdictions, like India, are still determining their approach. A key challenge – and risk – for businesses adopting AI, will be to ensure compliance across all jurisdictions in which they operate, particularly where those regulations are not harmonised.


4. Potential cover under your insurance programme

For businesses looking to manage these risks, a key question will be the extent to which these risks are insurable or, indeed, are already insured under your existing insurance programme.

Affirmative AI cover

We are starting to see the early development of new, affirmative AI policies.

  • Munich Re recently launched a product for users of AI, which covers losses where an AI model doesn't deliver. So, if a bank replaced the property valuers it uses for loan assessments with an AI model, and the AI made a mistake that a human valuer would not have made, the policy would engage.
  • A new start-up insurer, Armilla Insurance, has launched a product which provides a warranty that AI models will work the way their sellers promise.
  • Press reports in March this year announced that Coalition, a cyber insurer, has added an AI endorsement to its cyber insurance policies.

While currently nascent, the affirmative cover market is likely to expand, presenting opportunities for businesses that understand their risks and how these products might be suitable for them.

Silent AI cover

Even where AI risks are not affirmatively covered, they may be 'silently' covered under traditional policy lines that don't exclude them. We have not yet seen AI exclusions appear in traditional policy lines. As the risk magnifies over time, insurers will have to take a view on whether to price in or exclude that risk. The approach taken may vary between policy lines. For example, D&O insurance may be less likely to have exclusions introduced over time given its importance in covering the personal exposure, and protecting the personal assets, of directors. But in the meantime, AI risks may manifest in financial loss to the business that is covered under traditional policy lines (e.g., through property damage, crime, regulatory action or third-party compensation claims).

Businesses should review their insurance arrangements to consider coverage for potential harm from AI use or misuse. Key policy lines to consider include:

  • Civil liability or professional indemnity ("PI") insurance will be relevant in the event of customer claims for services-related liabilities. Exposures may arise from AI-related services, from the use of AI in the provision of services, or in the event of a regulatory action (whether due to breach of any new AI regulation or of existing regulation such as data protection law). Liabilities, defence costs, regulatory investigation costs and possible fines might be covered under various policies, including PI policies.
  • D&O insurance may protect directors and officers against securities or other liability. This includes exposure that directors and officers may face in connection with the use of AI in decision-making processes – whether their own or those within their sphere of responsibility. This can result in claims for breaches of duty or mismanagement arising from those decisions. Typically, that would arise in the form of shareholder claims – whether derivatively or directly – if the company loses value. For example, it may be alleged that the Board has been remiss in its management of AI, whether by losing competitive advantage by failing to invest, or by going too far and investing without sufficient diligence. There could also be shareholder claims if the company misrepresents its use of AI publicly (so-called 'AI-washing' claims). There may also be D&O risk in relation to senior managers' responsibility for others using AI within the business, in circumstances where responsibility ultimately falls on the board or senior managers.
  • Product liability insurance may cover compensation and costs if a consumer suffers damage from a product powered by AI, such as a smart home device.
  • Cyber policies may respond to security breaches or system failures and other privacy breaches relating to AI. This is a key risk given that AI models often run on very significant volumes of data, including sensitive or important data. Cyber policies may also provide cover for regulatory investigation costs and fines to the extent insurable at law. An example would be an investigation in the event of a major AI related data breach, resulting in claims by customers as well as an action by the ICO. In the event of a ransom demand by a malicious actor using AI, a Cyber or Crime policy may respond.
  • Employment Practices Liability insurance may cover an employer for compensation claims for discrimination or unfair treatment resulting from AI systems (and where the claim is made against a director, it may be covered within D&O insurance).
  • Property Damage and Business Interruption insurance will be relevant in the event that AI causes property damage and consequential business interruption. An example would be if an AI-powered thermostat malfunctions and causes a fire in a factory.  

Coming next… Part 2: Practical points to consider at your next renewal

In the second post in the series we will look at some practical steps you can take at your next renewal to maximise the potential cover for AI risks under your existing insurance programme.

Key contacts

Greig Anderson, Partner, London
Antonia Pegden, Partner, London
Rachelle Waxman Sacks, Senior Associate, London
Katie Collins, Associate, London