Global Bank Review 2024
Adaptation: Change is the only constant
How should AI be regulated in financial services? Regulators are torn between the allure of AI and its undoubted risks. Firms are caught in the middle, often facing inconsistent approaches across jurisdictions. No wonder over half of the attendees at our London Financial Services Regulatory Conference in October 2024 listed AI as their top priority for 2025 and beyond.
In the UK, the regulation of AI within financial services sits within a wider debate about the role of the financial regulators in supporting growth.
Randall Kroszner, a member of the Bank of England's Financial Policy Committee, has emphasised the UK's need to embrace innovation in AI for productivity growth while maintaining financial stability. This sits within the UK's broader pro-innovation, outcomes-focused, sectoral approach to regulating AI. Indeed, the UK government's AI Opportunities Action Plan recognises AI as the "government's single biggest lever" for "kickstarting broad-based economic growth". For financial services, one of the 50 action points is the appointment of an AI Sector Champion to "work with industry and government and develop AI adoption plans".
This builds on two other UK regulatory initiatives focused on enabling innovation in AI. The first is the government's Regulatory Innovation Office, launched in October 2024. While much of the detail remains to be seen, the intention behind the body is clear: removing barriers to innovation and encouraging growth. The second is the Digital Regulation Cooperation Forum, which brings together the FCA with the UK's competition, communications and information rights regulators to encourage collaboration between them. It has launched an AI and Digital Hub, which supports those working on AI or digital products by offering informal advice on questions spanning the remits of at least two of the member regulators.
Firms continue to innovate and to deploy AI. The FCA and PRA's Machine Learning and AI survey shows that 75% of surveyed firms were already using some form of AI in their operations, including all of the large UK and international banks, insurers and asset managers that responded. Some of the initial uses appear relatively low-risk, including optimising internal processes (41% of respondents) and enhancing customer support (26%). Firms are also using AI in fraud prevention (33%) and money laundering prevention (20%). Generative AI (GenAI) is also touted as a potential game-changer, with use cases in customer services/chatbots, marketing (especially image generation), surveillance, support of trading desk pricing models and HR recruitment.
We anticipate these anti-financial crime use cases will increase, not least in response to recent updates to the FCA's Financial Crime Guide, which identify exploring new approaches to transaction monitoring, such as machine learning, as good practice. We expect that, despite the risks, where AI helps deliver the outcomes regulators seek, today's 'good practice' will become tomorrow's 'market practice' in the eyes of the regulator.
Firms will be tackling implementation of the EU AI Act over the course of the year, in particular in areas such as credit scoring and insurance pricing, where the use of AI has been designated as high-risk. In this regard, the EU Commission is due to publish guidelines on the application of GenAI models in early 2025, which aim to offer clarity on the scope of the AI Act. Firms will also be considering their role in the use of AI systems and how this affects their obligations under the AI Act. For example, many may use an AI model developed by a third-party provider but fine-tune (or potentially significantly adjust) it with their own data for specific use cases. Developing an AI strategy, systems and controls, and a governance framework will be key. The EU AI Office's General-Purpose AI Code of Practice, which is being drafted with input from stakeholders, is expected to assist in the practical application of the AI Act, such as where deployers, including financial institutions, 'fine-tune' general-purpose models developed by third parties. Whether there will be pressure in the EU to develop sector-specific guidance for financial services is one to watch.
In Australia, observing the approach of other jurisdictions, the government is trying to regulate AI from a number of angles. Like the EU model, it has criminalised some high-risk uses of AI (such as deepfakes used in sexually explicit material) and is seeking to introduce mandatory guardrails for the development and deployment of AI to sit alongside the current patchwork of voluntary standards and AI Ethics Principles. Alongside these developments, new legislation has been proposed to give customers greater transparency about when and how AI will use their personal information. These measures sit alongside broader privacy initiatives covering businesses using commercially available AI products and developers using personal information to train AI models. Finally, the regulators are currently focused on how regulated entities exercise governance and oversight over the AI models deployed throughout their businesses.
In a report in late 2024 – Beware the gap: Governance arrangements in the face of AI innovation – ASIC made it clear that it expects boards to understand what an AI model is supposed to do, and to be able to check that it is doing what it is supposed to do. Interestingly, ASIC's findings showed that regulated entities had to date focused their AI use primarily on supporting human decision-making, and most use cases did not directly interact with consumers. However, around 60% of regulated entities intended to increase their AI usage.
ASIC analysed information about over 600 AI use cases – either in use or in development – across 23 licensees in the banking, credit, insurance and financial advice sectors. Over half of the use cases were less than two years old or still in development. ASIC found that many licensees relied heavily on third parties for their AI models, but that not all had appropriate governance in place. Critical focus areas for ASIC include credit decisioning and debt collection.
In Hong Kong, there is a similar focus on capturing opportunities while encouraging regulated firms to adopt AI responsibly. In October 2024, the government issued a policy statement on the responsible application of AI in the financial market, setting out its view that a "dual-track" approach is optimal: promoting AI adoption by the financial services sector while addressing potential challenges such as cybersecurity, data privacy and the protection of intellectual property rights. Financial institutions are expected to formulate an AI governance strategy for how AI systems should be implemented and used, and to adopt a risk-based approach to the procurement, use and management of AI systems, with human oversight being crucial in mitigating the potential risks.
As in some other jurisdictions, the position is largely that the potential risks posed by AI are suitably covered by the relevant regulations and/or guidance issued by financial regulators, which are being continuously updated or supplemented as appropriate. A recent example is the circular issued by the Securities and Futures Commission (SFC) on 12 November 2024, which sets out the SFC's expectations for licensed corporations offering services or functionality provided by generative AI language models, or by third-party products based on such models, in relation to their regulated activities.
Financial services regulators have continued to encourage the responsible use of AI (and technology more broadly) by the industry, including for anti-money laundering and countering the financing of terrorism (AML/CFT) purposes. They have also been surveying the industry to ascertain the extent of adoption and the implications. In August 2024, the Hong Kong Monetary Authority (HKMA) launched a generative AI sandbox to empower banks to pilot their novel use cases within a risk-managed framework, supported by technical assistance and targeted supervisory feedback. More recently, the HKMA published a research paper that explores the transformative potential of generative AI and its implications for the financial industry, particularly in terms of operational efficiency, risk management, and customer engagement. Meanwhile, the SFC published a report setting out its observations on the benefits and challenges of regulatory technology adoption, the common types of solutions used in AML/CFT processes, and the key principles in responsible adoption.
AI poses particular challenges for financial services regulation, and different regulators are engaging with those challenges in different ways. The Financial Stability Board (FSB) continues its efforts to coordinate regulatory responses, in particular in relation to third-party risk management and cross-border information sharing. The FSB has highlighted the deepening convergence of technology and finance ecosystems, in which non-financial firms will continue to dominate the provision of critical AI infrastructure for systemic financial processes. However, the ability to address such risks through AI-tailored international standards is hampered by the pace of technological change. Instead, the FSB anticipates that regulators will rely on existing regulatory and supervisory frameworks, particularly those governing operational risks, supplemented by AI-specific guidance where issues fall outside the scope of existing regulations.
We expect 2025 to see a maturing of regulation of the use of AI, with a particular focus on vulnerabilities arising from critical third parties in the deployment of AI. This will likely involve a degree of coordination between regulators on such operational risks, but continuing autonomy between jurisdictions in broader AI regulation. Competition will continue to drive the adoption of AI as firms seek more efficient yet personalised offerings for consumers, and harness AI to better meet regulatory objectives such as preventing financial crime. As more use cases become viable, firms will need to continue managing the associated risks, including the variety of regulatory requirements applicable across jurisdictions.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2025