Generative AI and Financial Services: A Recent View From the UK Regulator

In a recent speech, Nikhil Rathi, CEO of the UK Financial Conduct Authority (FCA), set out the FCA’s latest views on the role of artificial intelligence (AI) in financial services. The speech highlights many of AI’s benefits but also shines a light on the risks the FCA sees, and it builds on the points made in a joint paper on AI published by the Bank of England and the FCA.

Both the speech and the joint paper examine AI generally and do not focus specifically on generative AI, which can generate apparently original content, such as text, in response to requests or prompts. Rapid advances in generative AI systems, as exemplified by ChatGPT, have generated particular interest in their application across all industries, including financial services. Yet the enthusiasm for generative AI and its potential uses is tempered by voices calling for restraint. In the speech, Rathi refers to specific risks connected to generative AI. Moreover, his general comments on AI, many of which give a sense of the direction in which regulation may evolve, are also relevant to generative AI.

Some General Context 

The UK government issued a white paper, noted in our alert, in which we outline the UK’s proposed approach to regulating AI. The white paper proposes a principles-based approach, supervised by sector regulators, such as the FCA, that are tasked with enforcing AI regulation developed for their respective sectors. There are currently no proposals for a general AI law in the UK, unlike in the EU (see our alert on the topic). While we wait for the UK to move forward with formal AI regulation, the speech and joint paper make it clear that existing regulatory rules can impose liability on regulated firms that use AI and on the senior managers within those firms.

While the speech does not mention privacy, the joint paper highlights the important role of privacy and data protection in the context of AI. The role of privacy and data protection cannot be overstated and will remain a key issue for both the regulated and unregulated sectors, and one that we cover in an alert.

Specific AI Risks

The joint paper focuses on risks in the data, model, and governance layers of AI systems, organized within categories based on the FCA’s objectives, including consumer protection, competition, and financial stability. It highlights that the use of AI in financial services may amplify existing risks and introduce novel challenges.

In the speech, Rathi points out various AI-related risks, including a recent online scam video using a deepfake: a computer-generated video of respected personal finance campaigner Martin Lewis apparently endorsing an investment scheme. Rathi also cites cyber fraud, cyberattacks, and identity theft as increasing in scale, sophistication, and effectiveness. Perhaps unsurprisingly, he makes the point that, as AI is adopted further, investment in fraud prevention and operational and cyber resilience will have to accelerate in step.

Firms’ Expected Response to These Risks: A Full Understanding of AI?

Rathi asks whether, to make a “great cup of tea, one needs to understand the intricacies of Brownian motion and energy transfer,” or whether one merely needs to know that one has made a decent cup of tea. He notes that many in the financial services industry want to be able to explain their AI models as a way of reassuring their customers and protecting their reputations. Rathi’s apparent indifference to whether AI models are fully understood is therefore surprising, given that explainability and transparency are emerging as key ethical principles in relation to AI across many jurisdictions, including the UK.

Rathi’s stance seems unclear, however, when he focuses on the risk that AI models invent fake case studies, giving rise to “hallucination bias,” and the risk that improper data inputs give rise to data bias. These biases can have exponentially worse effects when amplified by AI. He nonetheless asks whether a human decision-maker is always more transparent and less biased than an AI model and concludes that “both need controls and checks.”

Accountability and Responsibility for AI Models?

Rathi’s seeming ambivalence about whether it is necessary to understand AI models is echoed in his statement that the FCA still has “questions to answer about where accountability should sit: with users, with the firms, or with the AI developers? And we must have a debate about societal risk appetite.” 

He makes a point that has resonated throughout discussions about the regulation of new technologies, including blockchain solutions (see the UK government consultation on the regulation of cryptoassets, which we discuss in our alert): any regulation must be proportionate, maximizing innovation while minimizing risk. While the proportionality principle can be easily identified, as Rathi does, it will have to be articulated on a case-by-case basis, which cannot always provide certainty or predictability.

The FCA’s Extended Role? 

Questions about accountability and responsibility in the industry for AI models also shine a light on the FCA’s role. Rathi articulates this clearly: “While the FCA does not regulate technology, we do regulate the effect on — and use of — tech in financial services.”

This is, of course, correct when looking at the powers Parliament has given the FCA, and it also shines a light on questions of expertise, although expertise can be addressed through recruitment. The rising importance of technology, and of currently unregulated third-party technology providers that offer “critical services” to financial institutions, has, however, resulted in an extension of powers for the FCA and the Prudential Regulation Authority (PRA) under the recently enacted Financial Services and Markets Act 2023 (FSMA 2023), as noted in our related alert.

If there is a provider of generative AI models used by a large number of financial institutions or a small number of large or important financial institutions, that provider may become subject to the jurisdiction of the FCA or PRA under the new powers that FSMA 2023 introduces.      

General Rules for AI Providers?

As an adjunct to an extension of the FCA’s powers, Rathi refers to the Senior Managers and Certification Regime (SMCR) for financial institutions and its role in individual accountability. He goes on to refer to “suggestions in Parliament that there should be a bespoke SMCR-type regime for the most senior individuals managing AI systems, individuals who may not typically have performed roles subject to regulatory scrutiny but who will now be increasingly central to firms’ decision making and the safety of markets.” 

He says this will be an important part of the future regulatory debate. This underscores the fact that, due to the ubiquity of generative and other AI, the discussion of regulatory norms will likely be general in nature, even if the FCA and other sector regulators are given primary jurisdiction in enforcing those norms. 


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Goodwin | Attorney Advertising
