Series 2: How to Determine Your Risk Category and What It Means to Be ‘High-Risk’

Goodwin

The new EU Artificial Intelligence Act (“AI Act”) is a risk-based framework, ranging from outright prohibitions on certain types of systems to light regulation of others. The AI Act sets out to define AI systems, their levels of risk according to use, and, importantly, the risk-calibrated degree of responsibility of each player in the chain. To this end, it classifies parties into seven roles according to their function in the chain and assigns them appropriate responsibilities: provider, deployer, importer, distributor, operator, manufacturer, and affected person. The most significant roles in terms of regulatory obligations are those of deployers and providers.

  • You are a deployer if you use an AI system in the course of a professional activity. You are required to ensure that the AI systems you use do not present a significant risk to fundamental rights or safety.
  • You are a provider if you develop an AI system or general-purpose AI model and place it on the market or put it into service under your own name or brand. You bear responsibility for ensuring that your AI system or general-purpose AI model meets the requirements of the AI Act.
  • You can be designated a provider of a high-risk AI system, even if you did not develop it, if you place your name or trademark on the system, make a substantial modification to it, or modify its intended purpose in a way that renders it high risk.

Providers and deployers of AI systems need to consider which risk category their AI system falls under to ensure compliance with the relevant obligations of the AI Act, such as risk management, data quality, and human oversight. Certain AI systems are not subject to the AI Act, such as those used for military, defense, or national security purposes, as well as systems or models developed for the sole purpose of scientific research and development. Certain exceptions also apply to free and open-source AI systems and general-purpose AI models.

The Risk Categories Explained

  • Prohibited AI Systems: AI systems in this category may not be placed on the EU market or put into service in the EU. Prohibitions include social scoring; the exploitation of vulnerabilities such as disability or age; real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions); and the creation of facial recognition databases through untargeted scraping of facial images.
  • High-Risk AI Systems: These systems are permitted on the EU market and can be used in the EU as long as providers and deployers adhere to a set of rigorous compliance obligations. Read on for more insight into what is high risk.
  • General-Purpose AI Models: These are models that are trained on large volumes of data, display significant generality, and are capable of serving a variety of purposes. They can be integrated into a host of downstream applications. The best-known examples are the large language models underlying ChatGPT.
  • Limited Risk: This category includes certain AI systems that involve human interaction with AI, such as chatbots or AI-generated content. The obligations for these systems are limited to ensuring end-user transparency.
  • Minimal Risk: This is a residual category that includes AI systems that do not fall into any of the other categories, such as video games or spam filters, which may adopt voluntary best practices through codes of conduct.

Providers and deployers of all AI systems, including minimal-risk AI systems, are required to take measures to ensure that personnel dealing with the operation and use of AI systems attain a sufficient level of AI literacy. All parties in the AI ecosystem are also encouraged to implement specific ethical principles that reflect EU fundamental rights. These principles are not mandatory, but we expect them to influence and underpin regulatory guidance and codes of conduct.

What is ‘High Risk’?

You can use our interactive flow chart to understand at a high level when your AI system will be classified as high risk. As a quick guideline, there are two ways an AI system can be classified as high risk:

  • If the AI system is itself a product, or a safety component of a product, covered by EU harmonization legislation and required to undergo a third-party conformity assessment
  • If your use case for an AI system is listed as high risk in Annex III of the AI Act

You can rebut the presumption that arises from being listed as high risk in Annex III if you can demonstrate that your AI system does not pose a significant risk of harm to the health, safety, or fundamental rights of individuals (unless the system profiles individuals, in which case the presumption cannot be rebutted).

We will be writing about the impact of the AI Act on AI systems used in specific sectors, including life sciences and financial services, later in this series, and we will write separately about the responsibilities of providers of general-purpose AI models.

Your Obligations as a Provider of a High-Risk AI System

Providers of high-risk AI systems bear the majority of the obligations under the AI Act, as summarized below. Certain obligations are designed to ensure that downstream users have sufficient information to meet their own obligations. Providers of high-risk AI systems must:

  • Implement continuous risk management protocols throughout the AI system’s life cycle, addressing identified risks with appropriate measures
  • Use quality datasets to develop high-risk AI systems, actively mitigating biases and appropriately handling sensitive personal data where necessary
  • Maintain comprehensive technical documentation that demonstrates compliance and enables authorities to assess the AI system against the AI Act’s requirements
  • Implement systems to allow for automatic event logging to ensure traceability and monitoring
  • Achieve transparency in the operation of AI systems, enabling deployers to understand their outputs and providing clear usage instructions
  • Design AI systems so that they can be effectively overseen by human operators, incorporating measures for comprehension, interpretation, and intervention to mitigate risks
  • Ensure suitable levels of accuracy, robustness, and cybersecurity throughout the life cycle of an AI system, addressing biases, errors, and potential adversarial attacks
  • Operate a quality management system for the entire life cycle of an AI system
  • Maintain technical documentation and logs generated by an AI system
  • Ensure an AI system undergoes the relevant conformity assessment before being placed on the market or into service and affix the CE mark
  • Register the AI system in a public database maintained by the European Commission
  • Take corrective action if an AI system breaches the AI Act, and inform parties in the supply chain of such a breach
  • Notify EU regulators of serious incidents or if an AI system poses risks to health and safety or to EU fundamental rights

Your Obligations as a Deployer of a High-Risk AI System

As a deployer of a high-risk AI system, you are responsible for protecting end users from harm in relation to such an AI system. You must:

  • Carry out an assessment of the impact that the use of an AI system may have on fundamental rights
  • Take appropriate measures to ensure AI systems are used in accordance with accompanying instructions
  • Ensure that human operators assigned to oversee an AI system have the relevant competence and training
  • Ensure that input data you control is relevant and representative, having regard for the intended purpose of the AI system
  • Monitor the AI system and report to the provider, other parties in the supply chain, and EU regulators any serious incidents and risks to health and safety or to EU fundamental rights
  • Retain logs generated by an AI system
  • Notify natural persons that they are subject to the use of a high-risk AI system

Conclusion

The AI Act is a risk-based framework, applying vastly different legal obligations depending on the role a party plays in the AI supply chain, as well as on the risk category an AI system belongs to. The main thrust of the AI Act applies to providers — and to a lesser extent deployers — of high-risk AI systems. If initial analysis suggests that you may fall into one of these categories, we recommend closely following guidance and legal advice to ensure that your AI systems comply with the new law.

We would like to thank Alice Bennett for their contribution to this alert.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Goodwin | Attorney Advertising
