Navigating the EU AI Act: Implications for Financial Institutions

Dechert LLP

Key Takeaways

  • EU institutions are aiming to reach an agreement on the final form of the AI Act through ongoing trilogue negotiations by the end of 2023.
  • The AI Act takes a risk-based approach and categorises AI systems into four risk levels: minimal or no risk, limited risk, high risk and unacceptable risk. Unacceptable risk AI systems will be strictly prohibited, with obligations then tapering based on risk level.
  • The European Parliament’s proposal upped the ante on fines from the European Commission’s draft, increasing the potential maximum to €40 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.
  • In light of the trilogue negotiations, financial institutions should remain vigilant, stay up to date on proposed changes, and consider assessing their AI systems now to identify the steps needed to navigate the evolving regulatory landscape effectively.

The EU’s proposed regulation on artificial intelligence ("AI") (the "AI Act") aims to establish a harmonised framework that balances the benefits and risks of AI systems within the Union. With the AI Act anticipated to be finalised by the end of 2023, it is important for financial institutions located both within and outside the EU to understand its potential effects and to begin preparing to comply with any applicable requirements.

Background

First proposed in April 2021, the AI Act has recently moved into the final stage of the legislative process. This stage consists of trilogue negotiations (informal tripartite discussions between representatives of the European Parliament ("EP"), the Council of the EU and the European Commission) which will determine the AI Act’s final form. Once adopted, the AI Act will be directly applicable across EU member states, forming part of national law in each of them, and will apply to all AI systems placed on the market or used in the EU. The first operational trilogue is scheduled for the end of July 2023. Spain, which has held the presidency of the Council of the EU since 1 July 2023, has identified AI as a top priority and aims to reach agreement on the AI Act before the end of 2023, meaning that obligations for AI system providers, importers, distributors and deployers could apply as early as 2025 following a two-year implementation period.

This OnPoint addresses the AI Act in its most recent form as proposed by the EP in June 2023, but the AI Act remains subject to change as a result of the trilogue negotiations.

Overview of the EU AI Act

The AI Act takes a risk-based approach and categorises AI systems into four risk levels: minimal or no risk, limited risk, high risk and unacceptable risk (an illustrative mapping of use cases to these tiers follows the list below).

  • Minimal risk: includes applications that are already widely available, such as spam filters; these systems will be largely unregulated.
  • Limited risk: includes chatbot systems using generative AI, which have become increasingly popular around the world within the past year. These AI systems must comply with minimal transparency and disclosure requirements, ensuring users are aware that they are interacting with AI systems and allowing users to make informed decisions about their interactions.
  • High risk: includes AI systems which fall within one or more critical areas and use cases referred to in the AI Act, such as those with potential effects on critical infrastructure, education, or systems capable of influencing voters in elections. These AI systems present a significant risk of harm to health, safety and fundamental rights or, in some cases, to the environment, and will be subject to additional specific conditions. Such conditions will differ depending on the circumstances but may include regular risk assessments, appropriate human oversight measures to minimise risk (including making overseers aware of the risks of automation bias and confirmation bias) and maintaining technical documentation showing that all identified risks have been mitigated.
  • Unacceptable risk: includes the most intrusive forms of AI systems, for example, those used for social scoring, cognitive behavioural manipulation aimed at vulnerable groups, and real-time remote biometric identification systems in public places. These AI systems pose a clear threat to the safety, livelihoods and rights of people and will be strictly prohibited.
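
To make the tiering concrete, below is a minimal sketch in Python of the four categories and a hypothetical mapping from internal use-case labels to tiers. The labels, the mapping and the "default to high risk pending legal review" rule are illustrative assumptions, not text from the AI Act.

    from enum import Enum

    class RiskTier(Enum):
        """The AI Act's four risk tiers (EP June 2023 draft)."""
        MINIMAL = 1       # e.g., spam filters; largely unregulated
        LIMITED = 2       # e.g., chatbots; transparency and disclosure duties
        HIGH = 3          # e.g., credit scoring; strict conditions apply
        UNACCEPTABLE = 4  # e.g., social scoring; strictly prohibited

    # Hypothetical internal mapping of use-case labels to tiers; actual
    # classification requires legal analysis against the Act's annexes.
    USE_CASE_TIERS = {
        "spam_filter": RiskTier.MINIMAL,
        "customer_service_chatbot": RiskTier.LIMITED,
        "credit_scoring": RiskTier.HIGH,
        "social_scoring": RiskTier.UNACCEPTABLE,
    }

    def classify(use_case: str) -> RiskTier:
        # Conservative assumption: unmapped systems are treated as
        # high risk until reviewed.
        return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)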

The AI Act establishes a set of six 'general principles' applicable to all operators developing and using AI systems: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; and (6) social and environmental well-being.

Businesses that fail to comply with the AI Act can face substantial administrative fines of up to €40 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.
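
For a sense of scale, the "whichever is higher" test can be expressed as a simple calculation; the figures below are illustrative. (The Commission's earlier draft used the lower thresholds of €30 million and 6%.)

    def max_fine_eur(turnover_eur: int) -> int:
        """Upper bound on administrative fines under the EP's June 2023
        draft: the higher of EUR 40m or 7% of total worldwide annual
        turnover for the preceding financial year."""
        return max(40_000_000, turnover_eur * 7 // 100)

    # A firm with EUR 1bn turnover: 7% is EUR 70m, which exceeds EUR 40m.
    print(max_fine_eur(1_000_000_000))  # 70000000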

The AI Act prohibits penalties and related litigation costs from being passed on through contractual clauses or other burden-sharing agreements between AI providers, distributors, importers, deployers or other third parties.

The current regulatory landscape for AI governance is the subject of intense discussion. As the AI Act is expected to be adopted later this year, financial institutions employing AI systems in the EU market must brace for change and stay alert to developments in the ongoing negotiations. For businesses operating in the EU market, understanding and adhering to these principles will be important to mitigate risks effectively, avoid penalties and maintain market reputation.

Impact on Financial Institutions

The impact of the AI Act on financial institutions will vary based on the specific AI applications deployed, particularly since the financial services sector holds an uncertain position within the AI Act. Finance is not explicitly listed among the high-risk areas, but AI-driven lending platforms used in essential private and public services, such as credit scoring, must comply with the high-risk AI system requirements because they can determine an individual’s access to financial resources and potentially restrict them from obtaining loans.

In contrast, AI-powered customer service chatbots, categorised as limited-risk AI systems, will face fewer obligations. Financial institutions using chatbot functions must comply with transparency standards and ensure users are aware they are interacting with an AI system, enabling them to make an informed decision about whether to continue interacting with the system or to withdraw.

Although the AI Act seeks to encourage safe and ethical AI use, critics worry that it could impede innovation, raise compliance costs, hinder AI adoption by small businesses and harm competition in the sector. These concerns could prompt innovative companies and investors to relocate their activities outside the EU. However, by demonstrating that the EU is committed to the responsible development of AI, the AI Act could also help attract investment to EU-based firms developing ground-breaking AI systems and applications.
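
On the transparency duty for limited-risk chatbots, a minimal sketch of one way a deployer might surface the required disclosure at the start of a session is set out below. The function name, wording and mechanism are assumptions for illustration; the AI Act does not prescribe a particular implementation.

    AI_DISCLOSURE = (
        "You are chatting with an automated AI assistant, not a human. "
        "You can end this conversation at any time."
    )

    def open_chat_session(send) -> None:
        """Hypothetical wrapper: deliver the AI disclosure before any
        substantive exchange, so users can make an informed decision
        to continue or to withdraw."""
        send(AI_DISCLOSURE)

    open_chat_session(print)  # in a real deployment, `send` would post to the chat UI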

Debates surrounding the various definitions of AI-based systems, and concerns about the potential risks of rapidly evolving technology, led to delays in the AI Act’s progress. Presently, the draft AI Act proposed by the EP has revised the European Commission’s definition of an 'AI system' to "a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments". If adopted, this broad definition, which focuses on machine-learning capabilities, would move away from automated decision-making concepts and extend the AI Act’s scope to include foundation models. Foundation models, such as the large language models ("LLMs") driving the current generation of chatbots, are trained on extensive data sets and can produce a wide range of outputs; general-purpose AI systems, in contrast, can be used in and adapted to a wide range of applications. Given the considerable uncertainty surrounding the speed and evolution of foundation models in the field of AI, their providers would face additional measures, including disclosing that content is AI-generated and preventing the creation of illegal content.

Recommendations for Financial Institutions

In light of ongoing regulatory developments surrounding the AI Act, financial institutions should remain vigilant and up to date on these matters. Staying informed will enable businesses to navigate the evolving regulatory landscape effectively and to remain compliant. We recommend taking the following preliminary steps:

  • Map current AI system usage: Determine whether the AI systems you are using will be designated as minimal risk, limited risk, high risk or unacceptable risk to ensure an appropriate level of compliance (see the sketch after this list).
  • Develop an internal AI governance framework and remain informed: Establish clear guidelines and processes for the development, deployment, and monitoring of AI systems to ensure compliance with the AI Act. In addition, keep up to date with the latest regulatory, policy and market developments relating to the AI Act to ensure compliance when the AI Act comes into force.
  • Prepare to comply: As the AI Act is expected to be finalised by the end of 2023, financial institutions should begin preparing to ensure compliance upon its implementation. For instance, financial institutions using high-risk AI systems must conduct risk assessments and ensure fairness, transparency, and accountability.
  • Educate employees: Offer training to employees on the obligations associated with AI system use, and promote awareness of the AI Act's requirements by encouraging safe, ethical AI practices within the organisation.
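
One way to operationalise the mapping step above is a lightweight inventory record per AI system; a sketch follows. The fields are assumptions about what a useful governance register might capture, not requirements drawn from the AI Act.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """Hypothetical entry in an internal AI governance register."""
        name: str                 # internal system name
        vendor: str               # provider, if sourced from a third party
        use_case: str             # e.g., "credit_scoring"
        risk_tier: str            # minimal / limited / high / unacceptable
        owner: str                # accountable business owner
        obligations: list[str] = field(default_factory=list)

    inventory = [
        AISystemRecord(
            name="LoanScore",
            vendor="in-house",
            use_case="credit_scoring",
            risk_tier="high",
            owner="Head of Retail Credit",
            obligations=["risk assessment", "human oversight", "technical documentation"],
        ),
    ]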

For the AI Act to be effective, the EU must offer additional guidance to institutions on testing AI systems, evaluating their societal impact, and governing them efficiently. Nevertheless, the AI Act is expected to have a significant impact on financial institutions as AI technologies underpin numerous innovations in the structure and operations of financial markets around the globe. By assessing AI systems, taking steps to comply, and staying informed about the latest developments, financial institutions can navigate the AI Act and ensure the safe, ethical use of AI in their operations.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dechert LLP | Attorney Advertising
