Global Financial Services: The Sector’s Current AI Regulatory Landscape

Morgan Lewis

As part of our Technology Marathon webinar series, partners Kristin Lee, Mike Pierides, and Steven Stone recently discussed financial regulators’ increasing focus on artificial intelligence (AI). Here are some of their key takeaways.

GIVEN THE FLUCTUATING LANDSCAPE OF REGULATORY POLICY ON AI, WHAT ACTIONS SHOULD FINANCIAL ENTITIES CONSIDER TAKING AT THIS TIME?

Regulators expect financial entities to have in place appropriate controls, policies and procedures, and surveillance and monitoring to comply with existing regimes, including prudent operational risk management. In a similar vein, US regulators will not hold off on raising issues in examinations and investigations while their AI-specific policy approaches evolve. Financial entities should leverage the frameworks they already have in place under existing prudential requirements to document and effectively manage the risks arising from their use of AI.

A significant (and growing) portfolio of software applications used by financial services firms already uses AI, and most, if not all, firms will have some form of AI embedded in their supply chain of software and services.

HAVE YOU OBSERVED ANY PARTICULAR TRENDS OR AI-USE CASES WITHIN THE FINANCIAL SERVICES SECTOR?

Spending on AI solutions is likely to increase significantly. Some estimates project total annual spending on AI solutions in financial services to reach ~$97 billion in 2027, up from ~$35 billion in 2023.
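For context, those estimates imply a compound annual growth rate of roughly 29% ($97B / $35B ≈ 2.77, and 2.77^(1/4) ≈ 1.29), i.e., the market nearly tripling over four years.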

Additionally, most asset managers already use some form of generative AI in a range of business contexts. According to a recent survey by Ignites, an asset management-focused media outlet, 59% of asset managers deploy generative AI for IT use cases, such as code generation and debugging, and 56% deploy it for marketing use cases, such as drafting customized marketing materials.

One emerging use case for AI is in research. Buy-side firms, such as asset managers, want AI-enabled personalization of content in both substance and delivery (e.g., small snippets, podcasts, charts, transcripts of expert calls, data feeds, machine-readable content to support quant fund use cases, aggregators, and workflow solutions). Some large buy-side firms are already building in-house large language models (LLMs) for research and are requesting research providers’ consent to train those LLMs on content received from the providers.

While AI in research raises concerns among research providers about IP ownership and the disintermediation of client relationships by research aggregators, it also offers the prospect of cutting down on the “mundane” parts of manual research, such as maintenance research and earnings recaps.

WHAT DOES THE REGULATORY FOCUS ON AI LOOK LIKE?

There are existing regulatory expectations around technology and third-party risk management with which financial entities and their service providers should be familiar. In the United States, the Interagency Guidance on Third-Party Relationships: Risk Management (Interagency TPRM Guidance), published in June 2023 by the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, and the Office of the Comptroller of the Currency, sets out sound risk-management principles for banking organizations at all stages of the third-party relationship life cycle, including planning, due diligence, contract negotiation, ongoing monitoring, and termination, as well as oversight and accountability. These expectations will apply to arrangements between banks and third-party AI vendors, and nonbanking financial entities have also used them as a reference framework.

In Europe, the European Banking Authority’s Guidelines on Outsourcing Arrangements and the European Securities and Markets Authority’s (ESMA’s) Guidelines on Outsourcing to Cloud Service Providers set expectations at both the enterprise and transactional level that, depending on the context, may extend to the procurement and use of AI tools provided by third-party vendors. As with the Interagency TPRM Guidance, the key principles relate to planning and due diligence, security and privacy, oversight and accountability, business resilience and continuity, exit strategies, and, for critical services, mandatory contract terms (with which many EU and UK financial institutions will be familiar).

Beginning in January 2025, the EU Digital Operational Resilience Act (DORA) will extend many of these principles to all technology services, cloud services, and software applications (ICT services) provided to EU financial entities and, in particular, will require specific contractual provisions with ICT service providers (both external and intra-group).

There has been a significant volume of AI-specific publications of which financial entities should be aware. The US Department of the Treasury’s report on Managing AI-Specific Cybersecurity Risks in the Financial Services Sector, released in March 2024 in response to the October 2023 US Presidential Executive Order on Safe, Secure, and Trustworthy Development and Use of AI, summarizes AI use cases and risk trends and identifies opportunities, challenges, and best practices for addressing AI-related operational, cybersecurity, and fraud risks. The Securities and Exchange Commission’s (SEC’s) 2024 Examination Priorities report identified AI as a focus area among emerging technologies, and the SEC’s recent enforcement focus has included “AI washing” by issuers, brokers, and advisers, as well as technology governance more broadly.

In the United Kingdom, the likely approach continues to be sectoral regulatory guidance on the application of AI tools under existing legislation, rather than regulation of the technology in its own right. As UK Financial Conduct Authority (FCA) Chief Executive Nikhil Rathi highlighted in 2023: “[W]hile the FCA does not regulate technology, we do regulate the effect on—and use of—tech in financial services.” Meanwhile, the FCA reiterated its technology-agnostic, outcome-focused approach in its recent update on its approach to AI.

As for the European Union, the EU AI Act has received much attention and is anticipated to come into force imminently, following its approval by the European Council on May 21, 2024. Non-retail use cases in financial services will likely fall within the scope of “general purpose” AI, with a more limited set of requirements, mainly around transparency. EU regulatory guidance for such use cases is most likely to come under existing regimes, and transparency will be key. As ESMA highlighted in its February 2023 report Artificial Intelligence in EU Securities Markets:

Complexity and lack of transparency, although arguably not inherent features of AI, may, in fact, represent barriers to the uptake of innovative tools due to the need to maintain effective human oversight and upskill management. Some firms appear to be limiting or foregoing their use of AI and ML algorithms because of operational concerns such as the compatibility of AI and their legacy technology.

WITH THIS IN MIND, WHAT KEY ISSUES ARE YOU SEEING FINANCIAL SERVICES FIRMS GRAPPLING WITH?

Starting with governance and risk management, it is critical that AI systems incorporate measures to ensure data security and integrity, support auditability, and address data provenance (e.g., through proper tagging of data). These measures help manage, among other things, the risks of training on incomplete, outdated, or unverified sources, as well as the risks of distortion and hallucination.
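To make the provenance point concrete, the following is a minimal, purely illustrative sketch (not drawn from any regulatory text) of how a data pipeline might tag training records so that their lineage can be audited later. All field names and thresholds are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)
class ProvenanceTag:
    """Hypothetical provenance metadata attached to one training record."""
    source: str             # e.g., a vendor feed or internal system name
    retrieved_at: datetime  # when the record entered the pipeline
    verified: bool          # whether the source passed the firm's vetting
    license_ref: str        # pointer to the license/consent covering this data
    content_hash: str       # fingerprint for audit and integrity checks

def tag_record(text: str, source: str, verified: bool, license_ref: str) -> tuple[str, ProvenanceTag]:
    """Attach an auditable provenance tag to a piece of training data."""
    tag = ProvenanceTag(
        source=source,
        retrieved_at=datetime.now(timezone.utc),
        verified=verified,
        license_ref=license_ref,
        content_hash=hashlib.sha256(text.encode()).hexdigest(),
    )
    return text, tag

def eligible_for_training(tag: ProvenanceTag, max_age_days: int = 365) -> bool:
    """Exclude unverified or stale records before they reach a training set."""
    age_days = (datetime.now(timezone.utc) - tag.retrieved_at).days
    return tag.verified and age_days <= max_age_days
```

Filtering on tags like these is one way to keep incomplete, outdated, or unverified sources out of a training set and, equally important for auditability, to reconstruct after the fact what a model was trained on.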

Protecting the confidentiality of firm and client information is another major concern. Some publicly available AI solutions may be trained on the queries and feedback they receive from personnel at financial entities, and those inputs, if not prudently managed, could include confidential information.
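As a hypothetical illustration of what “prudent management” might involve, a firm could screen outbound prompts for obvious confidentiality markers before they ever reach a public AI tool. The patterns below are placeholders for a firm’s own data-classification rules, not a real control.

```python
import re

# Placeholder patterns only; a real deployment would rely on the firm's own
# data-classification rules and detection tooling, not a short regex list.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b(confidential|internal only|attorney[- ]client)\b", re.I),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN-like identifiers
    re.compile(r"\bACCT[- ]?\d{6,}\b", re.I),  # account-number-like strings
]

def prompt_is_safe(prompt: str) -> bool:
    """Return True only if no confidentiality marker is detected."""
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)

# Example: this prompt would be blocked before reaching an external tool.
print(prompt_is_safe("Summarize this INTERNAL ONLY memo re ACCT-123456"))  # False
```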

Firms also need to ensure that they properly vet their AI-use disclosures for precision, accuracy, and balance.

In some cases, financial entities may themselves act as AI providers where AI is used in their services, and this will affect how regulations apply to them, the contractual positions they take with customers, and the policies and procedures they need to have in place.

Finally, we are seeing firms grapple with developing AI-specific policies and procedures. Firms should take a holistic approach: rather than starting from scratch, consider which existing policies and procedures already cover the procurement and use of AI, and amend those. As we note above, AI will likely already be embedded in a firm’s supply chain of software and services.

Learn more about our Technology Marathon webinar series >>

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Morgan Lewis | Attorney Advertising
