Global Bank Review 2023
Beyond the hype – Will new laws win trust in banks’ AI tools?
AI is transforming the global economy. Firms' leaders and senior managers are grappling with how this will impact financial services. The opportunities are significant: generative AI is estimated to lead to productivity gains of between 2.8% and 4.7% of banks' annual revenues (an increase of US$200–340 billion)1. At the same time, customers and employees are concerned about how this opaque technology will impact them. As discussed in our Global Bank Review 2023, uncertainty continues on where laws and regulation of AI will ultimately land in different countries.
Firms cannot stand still. Competitive pressure is driving them to explore, test and deploy AI in more areas of their business. The need to act quickly is also leading to the use of third-party products to source AI capabilities and infrastructure, including AI-as-a-service and off-the-shelf AI models. Firms' senior managers must ensure the corresponding governance and risk management is robust. The technology may be cutting edge, but the risks are familiar and signposted in numerous publications by regulators.
Jon Ford
London
In a recent speech on AI, the Chair of the US Securities and Exchange Commission (SEC) remarked: "We at the SEC are technology neutral…we focus on the outcomes, rather than the tool itself." Indeed, the message from the UK financial services regulators in their recent round-up of feedback on AI and machine learning is that the industry highly values technologically agnostic regulation. There are sound reasons for such an approach, but it demands from firms (and their senior managers) a technologically literate application of governance, oversight and risk management. Consistent with observations made by the International Organization of Securities Commissions (IOSCO) in 2021, this does not necessarily require technical expertise from senior management overseeing AI control functions, but it does require sufficient technical understanding, given their ultimate responsibility and accountability for their firm's use of AI.
Simone Hui
Hong Kong
In the EU, the European Parliament is proposing new wording in the draft AI Act to require providers and deployers of AI systems to ensure a sufficient level of AI literacy among staff and others dealing with the operation and use of AI systems on their behalf. In Hong Kong, regulators have yet to require senior management themselves to have sufficient technical expertise but have stressed that AI governance committees are expected to include members with sufficient technical skills to advise senior management.
Regulators will expect senior managers to have access to necessary experience in order to meaningfully subject their firms' proposed uses of AI to appropriate oversight and monitoring. In addition, there will be an increased expectation on senior managers to have sufficient understanding of AI models and their data inputs to enable them to evaluate and interrogate model results and guard for bias, discrimination, and other poor customer outcomes.
Michelle Virgiany
Jakarta
This is apparent from the UK Prudential Regulation Authority's (PRA's) model risk management (MRM) principles for banks, which will come into force in May 2024. These principles have been developed with AI models in mind. Governance, one of the five principles of MRM, includes an expectation that the board provide challenge to the outputs of the most material models, including AI models. This will require them to understand:
In addition, firms should identify a relevant senior manager (or managers) to assume overall responsibility for the MRM framework and its implementation, execution and maintenance. Similar senior management responsibility is being consulted on in aspects of the US SEC's proposed new conflicts of interest rules for the use of AI by broker-dealers and investment advisers.
In Singapore, the focus is on public-private collaboration to develop toolkits to assist firms in complying with the principles of fairness, ethics, accountability and transparency when assessing or developing governance frameworks for the use of AI. With the support of the Monetary Authority of Singapore (MAS), an industry-led whitepaper will be published in early 2024 which will cover the responsible use of generative AI from a banking perspective.
In countries where AI-specific regulations or guidelines are still forthcoming, senior managers should be mindful of existing laws and regulations that may apply to the use of AI generally. When designing their business, operations and products, they should also anticipate upcoming regulation by looking to how other jurisdictions, such as the UK and EU, have started governing AI.
Senior managers of global firms will also be expected to draw upon their experience in navigating cross-border regulatory frameworks as the global AI regulatory landscape continues to evolve.
1 The economic potential of generative AI: The next productivity frontier – McKinsey & Co, June 2023
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2024