Now that the European Union’s Artificial Intelligence (AI) Act has entered into force, the real work begins: putting its obligations into practice. This article explores five compliance steps to take now to operationalize the AI Act.
Scope of the Act
As covered in this previous ACI article, the AI Act regulates two categories of AI:
- AI systems: A machine-based system designed to operate with “varying levels of autonomy and that may exhibit adaptiveness after deployment, and that…infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
- General-purpose AI models (GPAI models): An AI model “trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks … and that can be integrated into a variety of downstream systems or applications.”
The AI Act applies to a variety of stakeholders who make up the AI supply chain: providers, deployers, authorized representatives, importers, and distributors. Under the AI Act, providers are those that develop an AI system or GPAI model and place it on the EU market, or that put an AI system into service under their own name or trademark. Deployers are the users of AI systems.
Compliance Steps
The AI Act extends extraterritorially: it applies to providers both inside and outside the EU, as well as to providers and deployers outside the EU where the “output” of AI systems is “used in the EU.” Thus, all covered providers and deployers, both inside and outside the EU, should consider the following baseline compliance steps.
Step 1: Establish a cross-functional AI governance team. To aid in compliance with the AI Act’s obligations, covered providers and deployers should begin by assembling a cross-functional AI team before putting governance mechanisms in place around the design, development, and operation of AI systems and GPAI models.
The AI governance team should combine the skills and knowledge of legal, compliance, human resources, data privacy, information technology, product engineering, research and development, and possibly other functions, depending on the needs of each organization.
Moreover, just as many compliance departments use compliance “champions” or “liaisons” to instill ethics and compliance principles throughout the organization, it may be advantageous to appoint an AI champion to serve as a point person when questions arise regarding new AI policies, processes, or procedures, and generally to promote ethical AI practices. Many large organizations have a data privacy officer, for example, who could serve in this role.
Step 2: Take the pulse of the organization’s AI governance framework. The AI Act’s central regulatory framework covers four risk tiers: prohibited AI practices, high-risk AI systems, AI systems subject to transparency obligations, and minimal-risk AI systems. Further details on each risk tier are described in this previous ACI article.
Vishnu Shankar, former head of legal at the Information Commissioner’s Office and now a partner in the London and Brussels offices of Morgan Lewis, stressed how critical it is to accurately assess the risk tier into which the AI system or GPAI model falls, because that tier will dictate the compliance obligations that follow.
Shankar provided the following risk assessment questions to consider (a simple way to record the answers for each AI use case is sketched after the list):
- Are any of the business’s AI-enabled technologies, applications, or products characterized as an AI system or GPAI model?
- Does the business act in any of the roles regulated by the AI Act, such as provider or deployer?
- For non-EU companies, does the AI Act apply extraterritorially?
- Under which risk tier does the system fall?
- Do any prohibited AI practices come into play?
- What exemptions, if any, apply?
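Organizations that maintain an inventory of AI use cases may find it helpful to capture the answers to these questions in a structured form so the governance team can triage them consistently. The sketch below is illustrative only; the field names and the simple escalation rule are assumptions made for the example, not definitions drawn from the AI Act, and the actual risk-tier determination still requires legal analysis.

```python
# Illustrative sketch only: recording the scoping questions for each AI use case.
# Field names and the escalation rule are assumptions, not AI Act definitions.
from dataclasses import dataclass, field


@dataclass
class AIUseCaseScreen:
    name: str
    is_ai_system_or_gpai_model: bool   # within the Act's definitions?
    role: str                          # e.g., "provider" or "deployer"
    output_used_in_eu: bool            # extraterritorial exposure for non-EU companies
    candidate_risk_tier: str           # "prohibited", "high", "transparency", "minimal"
    possible_exemptions: list = field(default_factory=list)

    def needs_escalation(self) -> bool:
        """Flag use cases that plainly require review by the AI governance team."""
        return self.is_ai_system_or_gpai_model and self.candidate_risk_tier in ("prohibited", "high")


# Example: a hypothetical CV-screening tool used by an HR function
screen = AIUseCaseScreen(
    name="CV screening assistant",
    is_ai_system_or_gpai_model=True,
    role="deployer",
    output_used_in_eu=True,
    candidate_risk_tier="high",
)
print(screen.needs_escalation())  # True
```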
Step 3: Foster AI literacy. Effective Feb. 2, 2025, the AI Act obligates providers and deployers of AI systems to “take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
AI literacy obligations apply to non-technical employees as well. According to an example provided by the Dutch supervisory authority, AI literacy “means that an HR employee must understand that an AI system may contain biases or ignore essential information that may lead to an applicant being or not being selected for the wrong reasons.”
The AI Act indicates that more guidance is forthcoming. Recital 20 states that, in applying the AI Act, AI literacy “should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement.” It places responsibility on the Commission “to promote AI literacy tools, public awareness, and understanding of the benefits, risks, safeguards, rights, and obligations in relation to the use of AI systems.”
Step 4: Implement the AI Act’s compliance obligations. Providers of high-risk AI systems should begin taking the necessary steps now, if they haven’t already, to implement the following compliance requirements described in the AI Act:
- Establish, implement, document, and maintain a risk management system throughout the high-risk AI system’s lifecycle (Article 9).
- Apply data governance practices to ensure that training, validation, and testing datasets meet the extensive list of criteria set out in the AI Act (Article 10).
- Draw up technical documentation before placing a high-risk AI system on the market or putting it into service (Article 11) that provides national competent authorities and notified bodies with the necessary information to assess the AI system’s compliance with the Act’s requirements, including the elements in Annex IV.
- Design and develop high-risk AI systems capable of logging events (Article 12). The Act describes what types of events the logging capabilities should record; an illustrative sketch of such an event log appears after this list.
- Design and develop high-risk AI systems whose operation is transparent enough for deployers to interpret a system’s output and use it appropriately, accompanied by instructions for use (Article 13). The Act provides further details on what information those instructions must include.
- Design and develop high-risk AI systems that humans can oversee to prevent or minimize health and safety risks, or violations of fundamental rights (Article 14).
- Design and develop high-risk AI systems that achieve an appropriate level of accuracy, robustness, and cybersecurity (Article 15).
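To make the Article 12 logging requirement more concrete, the sketch below shows one minimal way a provider might record inference events in an append-only log. It is illustrative only and assumes a simple JSON-lines file is adequate for the organization’s purposes; the field names are not taken from the Act, which instead describes the categories of events the logs must capture.

```python
# Illustrative sketch only: a minimal append-only event log for a high-risk AI system,
# loosely inspired by the Article 12 logging requirement. Field names and the storage
# format are assumptions, not terms defined in the AI Act.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class InferenceEvent:
    system_id: str        # internal identifier of the AI system
    model_version: str    # version of the model that produced the output
    timestamp: str        # UTC time of the inference
    input_reference: str  # pointer to the input data, not the data itself
    output_summary: str   # short description of the prediction or decision
    human_review: bool    # whether a human reviewed the output


def log_event(event: InferenceEvent, path: str = "ai_event_log.jsonl") -> None:
    """Append one event as a JSON line so the system's behavior remains traceable."""
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(event)) + "\n")


log_event(InferenceEvent(
    system_id="cv-screening-assistant",
    model_version="1.4.2",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_reference="application-2025-00172",
    output_summary="candidate ranked in top quartile",
    human_review=True,
))
```

The specific events to record, retention periods, and traceability expectations should be mapped from the Act’s requirements rather than from this sketch.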
Providers of GPAI models must abide by a different set of compliance obligations, which will come into force in August 2025. Article 56 of the AI Act outlines the Code of Practice, which providers of GPAI models can follow to demonstrate compliance with the Act’s obligations until harmonized standards are adopted.
Additional Compliance Obligations
GPAI model providers should refer to Articles 53 and 55, which set out the minimum obligations the Code of Practice must cover. For example, Article 53 obligates providers to draw up and keep up-to-date technical documentation of the model, including its training and testing process and the results of its evaluation. Annex XI describes the minimum type of technical information to provide, upon request, to the AI Office and the national competent authorities.
Article 53 further obligates providers to keep up-to-date information and documentation to give to providers of AI systems who seek to integrate the GPAI models into their AI systems. The information should enable providers of AI systems to “have a good understanding of the capabilities and limitations” of the GPAI model and to comply with their obligations under the AI Act. It should also contain, at a minimum, the elements set out in Annex XII.
Other compliance obligations include creating a policy to comply with EU copyright law and making publicly available a “sufficiently detailed summary” of the content used for training the GPAI model, in line with a template to be provided by the AI Office.
Article 55 describes the minimum level of additional compliance obligations that apply to providers of GPAI models that pose systemic risk. Such compliance obligations include:
- Perform model evaluation per standardized protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model to identify and mitigate systemic risks;
- Assess and mitigate possible systemic risks at EU level, including their sources, that may stem from the development, placing on the market, or the use of GPAI models with systemic risk;
- Track, document, and report “without undue delay” to the AI Office and, as appropriate, to national competent authorities, “relevant information about serious incidents and possible corrective measures to address them”;
- Ensure an adequate level of cybersecurity protection for the GPAI models that pose a systemic risk and the physical infrastructures of those models.
Articles 53 and 55 state that providers of GPAI models, including those that pose systemic risk, who do not adhere to an approved Code of Practice or do not comply with a European harmonized standard “shall demonstrate alternative adequate means of compliance” to be assessed by the European Commission.
Step 5: Take part in workshops and working groups. The AI Office continues to seek input from GPAI model providers, downstream providers, industry groups, civil society organizations, academia, and other independent experts to help create the Code of Practice.
Providers and deployers should remain part of the conversation as standards are being discussed and developed. Industry collaboration is a central component of this process and will shape the compliance standards that make up the Code of Practice for GPAI models.
Staying abreast of regulatory updates and guidance over the next few years will also be important. Many U.K. regulators have issued strategic approaches to AI regulation. Companies should refer to these resources as further guidance to aid them in compliance with the AI Act.