AI in UK Financial Services - What’s on the Horizon?

Latham & Watkins LLP

As regulatory thinking evolves, firms must ensure that any current or planned use of AI complies with regulatory expectations.

As financial services firms digest FS2/23, the joint Feedback Statement on Artificial Intelligence and Machine Learning issued by the FCA, Bank of England, and PRA (the regulators), and the UK government hosts the AI Safety Summit, we take stock of the government’s and the regulators’ thinking on AI to date, discuss the compliance considerations firms should take into account now, and look at what is coming next.

The FCA recently highlighted that we are reaching a tipping point at which the UK government and sectoral regulators need to decide how to regulate and oversee the use of AI. Financial services firms will need to track developments closely to understand the impact they may have. However, the regulators have already set out how numerous areas of existing regulation are relevant to firms’ use of AI, so firms also need to ensure that any current use of AI complies with the existing regulatory framework.

Background

The regulators’ working characterisation of AI (as set out in the joint Discussion Paper titled “Artificial Intelligence and Machine Learning” (DP5/22)) is that “AI is the simulation of human intelligence by machines, including the use of computer systems, which have the ability to perform tasks that demonstrate learning, decision-making, problem solving, and other tasks which previously required human intelligence”. However, the regulators note in FS2/23 that respondents to DP5/22 were not in favour of creating a regulatory definition of AI.

AI has long been a focus area for the regulators, with the FCA and Bank of England publishing work on big data and machine learning and supporting the development of good practice suggestions by experts across the industry. This focus has heightened in recent years as the use of, and use cases for, AI in financial services have increased significantly. The regulators sought feedback in DP5/22 on the potential benefits and risks of AI in financial services and the relevance of existing areas of regulation to AI. FS2/23, which was published on 26 October 2023, sets out industry responses to DP5/22 but does not include policy proposals at this stage or provide further detail on the future regulatory approach.

The regulators’ intensifying focus on AI has occurred against a backdrop of ever-growing interest in AI among the public, industry, and lawmakers. The UK government published a White Paper on its approach to regulating AI in March 2023, and is hosting the AI Safety Summit on 1 and 2 November to bring together countries, technology organisations, and academics to consider risks created or significantly exacerbated by AI systems. Other regulators, such as the Information Commissioner’s Office (ICO), are publishing guidance on AI of relevance to financial services firms. Further, initiatives such as the Digital Regulation Cooperation Forum (DRCF), which includes the FCA, ICO, Ofcom, and the Competition and Markets Authority, are working to explore emerging regulatory issues. The government recently announced that the DRCF would launch its AI and Digital Hub next year to provide tailored advice to businesses on how to meet regulatory requirements for digital technology and AI.

What is the UK government’s approach to AI?

The UK government has announced its intention to take an “agile and iterative” approach to regulating AI, conscious of the need to harness AI’s potential whilst protecting against its risks. In its White Paper, the government set out a “proportionate and pro-innovation” regulatory framework that it hopes will strengthen the UK’s position as a global leader in AI. This contrasts with the position in the EU, where legislators are in the process of agreeing a much more rigid legislative framework. The government is mindful of the need to ensure international compatibility between regulatory approaches, but also of the opportunity to position the UK as a technology-friendly country.

The government explained in its White Paper that it does not plan to introduce new legislation at this stage; rather, it proposes a framework centred on five ‘cross-cutting’ principles:

  1. Safety, security, and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

Sectoral regulators are responsible for implementing these principles in an appropriate way, within their existing remits. However, the government also anticipates ultimately imposing a statutory duty on regulators requiring them to have due regard to the principles.

The risk-benefit analysis

A recent FCA speech noted, “We are at a key moment now – we have options around deciding where to take AI”. In this context, firms will want to understand the regulatory direction of travel and how it will impact their adoption of AI going forward.

AI has huge transformative potential for the financial services sector; it could be used to improve operational efficiency, create more innovative products, hyper-personalise products and services, improve customer service, and tackle financial crime, to name but a few use cases. Yet it also presents numerous potential risks to consumers, firms, and the overall stability and integrity of the financial system and markets. A recent report by the Alan Turing Institute scrutinised the potential risks and benefits associated with the use of AI in financial services. Some of the key risks identified include:

  • Consumer Protection: AI has the potential to help firms better understand customers’ needs and characteristics. However, AI could also be used to exploit those characteristics, behavioural biases, or even customer vulnerability. Use of AI could result in certain customers being excluded from particular products, higher pricing for some customers, unlawful discrimination, and customers being sold or recommended unsuitable products. Respondents to DP5/22 considered this a key risk that should be an important focus for regulation and supervision.
  • Competition: There is a risk that AI models create a platform for firms to engage in collusive behaviour or achieve anti-competitive market outcomes, as AI could enable firms to monitor and enforce agreed pricing or market strategies and to conceal such behaviour behind complex systems. Use of AI could also lead to inadvertent anti-competitive behaviour if, for example, dynamic pricing systems follow the prices of competitors.
  • Safety and soundness: The safe use of AI systems is contingent on their security, reliability, and robustness, all of which have implications for the safety and soundness of firms. For example, AI can make it more difficult to identify weaknesses in risk models, and such weaknesses could undermine the effective and efficient management of prudential risks.
  • Financial stability and market stability: The use of AI has the potential to amplify existing risks to financial stability. Over-reliance on AI could lead to imbalances in the market, particularly where the underlying data fed into AI systems is poor, or where modelling techniques trained on the same data produce uniformity across models. Similarly, markets could be prone to “bubbles” or crashes if sentiment analysis or social media signals were used at scale in automated trading. Further risks arise where monitoring and risk management activities are carried out via third-party providers.
  • Financial crime: Many firms are now using AI systems to combat financial crime, such as abusive trading practices and market abuse. However, weaknesses in, or the vulnerability of, such systems to cyberattacks can render them less effective and potentially less reliable than existing infrastructure. There is also an inherent risk that AI systems are used to facilitate financial crime, whether deliberately or inadvertently; for example, trading strategies or inside information could be accessed and acted upon by AI systems in a form of market manipulation.

How will the UK financial services regulators approach AI?

In a recent speech the FCA emphasised that “while the FCA does not regulate technology, we do regulate the effect on – and use of – tech in financial services”.

The regulators intend to deploy a regulatory framework that is principles-based, technology-agnostic, and outcomes-driven, so that it is as flexible as possible. Like the government, the regulators will be focused on trying to find the right balance. As Nikhil Rathi, Chief Executive of the FCA, commented in a speech earlier this year, “Any regulation must be proportionate enough to foster beneficial innovation but robust enough to avoid a race to the bottom and a loss in trust and confidence, which when it happens can be deleterious for financial services and very hard to win back”.

As evidenced in DP5/22, one of the most significant questions for the regulators is whether AI can be managed through clarifications of the existing regulatory framework, or whether a new approach is needed. Unsurprisingly, FS2/23 concludes that further guidance and regulatory coordination are needed in this space, and so further action by the regulators is inevitable. They will also no doubt keep their new secondary objective to promote economic growth and international competitiveness front of mind when developing their proposals, as the government clearly sees the UK’s approach to regulating AI as strategically important. Respondents to DP5/22 flagged the importance of maintaining agility through, for example, periodically updated guidance and examples of best practice, as well as highlighting the importance of alignment between the regulators and their domestic and international counterparts.

Which areas of existing regulation are relevant to AI?

The regulators have identified the following as key areas of the existing regime that may be leveraged to mitigate the potential risks created by the adoption of AI in UK financial services. However, as acknowledged in FS2/23, industry participants would welcome further guidance, and the regulators are considering whether existing regulations would benefit from being extended to encompass AI specifically.

  • Consumer Protection: The FCA’s Consumer Duty sets higher expectations for the standard of care that firms provide to customers and requires firms to “act to deliver good outcomes for retail customers”, regardless of the technology used. The FCA Board has signalled that this is a key area of attention within the regulator, highlighting that firms should in particular consider how to comply with the requirement to avoid causing “foreseeable harm” under the Consumer Duty (see this related Latham blog post). Individuals are also protected under the Equality Act 2010 from discrimination on the basis of nine protected characteristics, and this protection would encompass any discrimination effected by way of AI technologies.
  • Competition: The FCA has an operational objective to promote effective competition in the interests of consumers in financial services markets. Alongside this objective, the FCA has concurrent competition powers allowing it to investigate potential breaches of the Competition Act 1998 in the financial services sector. The FCA also has powers permitting it to carry out market studies within the financial services sector, following which it can impose remedies such as rule-making or firm-specific measures. These competition powers could be used to address competition-related concerns arising from the use of AI in financial services.
  • Data: The UK GDPR contains wide-ranging provisions that apply in the context of AI, setting out the accountability of data controllers and processors and regulating personal data, data transfers, and data processing and use. These provisions are supplemented by helpful guidance on AI from the ICO, and further supported by the Money Laundering Regulations 2017 and the Payment Services Regulations 2017, which overlay additional data security provisions.
  • Model risk management: For banks, the PRA has prepared new guidance on creating effective model risk management (MRM) frameworks. MRM is of increasing importance as a framework to assist firms in managing and mitigating AI-related risks. New models are being developed as the use of AI becomes more prevalent, and areas such as surveillance, fraud detection, and text analytics are now being woven into existing MRM frameworks. However, the current scope of MRM regulation is limited, so this is one area in particular where the existing framework may be amended to include AI-specific provisions.
  • Governance: The FCA Handbook contains a number of high-level rules, principles, and guidance applicable to firms’ use of AI systems. Of particular relevance are rules relating to internal governance, systems and controls, internal audit, financial crime, risk control, outsourcing, and record-keeping. The Senior Managers and Certification Regime (SMCR) is also relevant to a firm’s use of AI, as Senior Managers may be held accountable for failures in areas for which they are responsible. Consideration is being given to whether it would be appropriate to introduce a dedicated Senior Management Function or Certification Function for AI under the SMCR, although respondents to DP5/22 did not consider this necessary.
  • Operational resilience and outsourcing: Much work has been done recently to develop a coordinated regulatory framework for operational resilience, outsourcing, and third-party risk management in the UK, and it is expected that this will be supplemented further to account for the emerging use of AI technologies. As noted in DP5/22, many of the principles, expectations, and requirements for operational resilience may provide a useful basis for managing certain risks posed by AI and support its safe and responsible adoption. The new framework for the supervision of critical third-party arrangements introduced by the Financial Services and Markets Act 2023 will be relevant to critical third parties involved in delivering AI technologies or the data used to train them, and respondents to DP5/22 noted this as an area in which more regulatory guidance would be useful.

What are the next steps?

In advance of any further rules on AI, firms using AI should consider whether the existing regulatory framework applies to that use, and whether they are comfortable that they are compliant.

The pace of developments is set to accelerate over the coming weeks and months, with the regulators likely to act upon the recommendations in FS2/23 and further develop their wider programme of work related to AI, including the AI Public-Private Forum. The government is also due to issue a response to its White Paper this autumn, in which it will confirm the cross-cutting principles that the regulators will need to implement, alongside an AI regulation roadmap setting out how the regulatory framework will be put into effect. The outcome of the global AI Safety Summit will also be key in shaping the government’s approach. Following this, the regulators will be able to set out how they propose to apply the cross-cutting principles within the financial services regulatory framework, presumably with some clearer policy proposals.

The issues relating to the deployment of AI are, of course, not confined to the UK but arise for financial services firms globally. It is likely that we will see further legislation and guidance in many other jurisdictions, notably in the EU, and firms with global footprints will need to carefully navigate any differing requirements.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Latham & Watkins LLP | Attorney Advertising
