The EU AI Act: Open-Source Exceptions and Considerations for Your AI Strategy

Orrick, Herrington & Sutcliffe LLP

This update is part of our EU AI Act Essentials Series.

The EU AI Act emerged after years of deliberation among legislators crafting the first comprehensive law to regulate artificial intelligence systems[1] and general-purpose AI models (GPAIMs).[2]

Given the EU’s status as an early mover in tech regulation, AI developers and users should understand the requirements of and exceptions to the EU regulation – including open-source exceptions – as a guidepost for how such requirements are likely to develop elsewhere in the near term.

EU AI Act Requirements for AI Systems and General-Purpose AI Models

The AI Act seeks to regulate (and in some cases prohibit) the development, use and distribution of AI systems and general-purpose AI models, especially where such technologies impact life, safety or individual legal rights.

The AI Act reflects a concern that foundational AI technologies carry enhanced risk to society due to their widespread adaptability and the likelihood of their pervasive use throughout the technology ecosystem, meaning problems with powerful or defective AI systems or models could be quickly magnified in the economy as compared to models with only a narrow and tailored training process and use case.

The extent to which the AI Act should regulate open-source AI technologies was a source of controversy during the legislative process and final negotiations. Some argued that failure to include exemptions for models and systems made available to developers on an open-source basis would stifle knowledge-sharing and innovation. However, others raised the potential for security issues associated with unrestricted dissemination of open-source models.

In the end, the AI Act contains two open-source exemptions: one for AI systems and one for general-purpose AI models.

Overview of the Open-Source Exceptions

  1. AI Systems
    • The first exception (in Article 2(12), which defines the scope of the AI Act) provides that the AI Act as a whole “does not apply to AI systems released under free and open-source licenses, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50.” Article 5 covers prohibited AI practices, while Article 50 imposes transparency obligations on AI systems that interact directly with natural persons.
    • Whether a given AI system falls into one of the categories of prohibited or high-risk AI systems identified in Articles 5 and 6 (together with Annexes I and III) is determined by a fact-specific analysis.
  2. General-Purpose AI Models
    • The AI Act creates a limited open-source exception for general-purpose AI models. To qualify, providers must enable “the access, usage, modification, and distribution of the model…” where the model’s parameters “…including the weights, the information on the model architecture, and the information on model usage, are made publicly available.”
    • General-purpose AI models, though, will not qualify for the exception if they “present systemic risks” (described below).
    • Qualification: Two core areas of focus for providers seeking qualification of general-purpose AI models under the exception will be to:
      • Make the given GPAIM available on conditions that satisfy the requirements of Article 53(2) as stated above.
      • Account for systemic risk. The AI Act describes general-purpose AI models as presenting “systemic risks” if they:
        • Have “high-impact capabilities” evaluated on the basis of appropriate technical tools and methodologies. This is presumed when the cumulative amount of computation used for training the model, measured in floating point operations (FLOPs), is greater than 10^25 (see the illustrative estimate after this list); or
        • Are otherwise designated as such by the Commission.
    • Benefits of the Exception: Provided that an open-source general-purpose AI model otherwise meets the exception requirements, the AI Act specifies that the provider is exempt from transparency obligations in Article 53(1)(a) and (b). That includes an exemption from the obligation to create and maintain up-to-date technical documentation and information intended for downstream providers that plan to integrate the GPAIM into their own AI systems.

    The provider of the open-source GPAIM nonetheless must:

    • Share, at distribution, a sufficiently detailed summary of the content used for model training (at a level of specificity to be set by a regulator-provided template).
    • Establish a policy to comply with EU copyright law, including for identifying and respecting a rightsholder’s reservation of rights through “the use of state of the art technologies” pursuant to Article 4(3) of Directive (EU) 2019/790.
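
For orientation on the 10^25 FLOP presumption above, the following is a minimal back-of-the-envelope sketch in Python. It assumes the common “6 × parameters × training tokens” approximation from the machine-learning scaling literature, which is not a measurement methodology prescribed by the AI Act or the Commission, and the model size and token count used are purely hypothetical.

    # Illustrative check against the AI Act's 10^25 FLOP presumption threshold.
    # The 6*N*D rule (training FLOPs ~ 6 x parameters x training tokens) is a
    # common heuristic for dense transformer models, not the Act's methodology.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold under the AI Act

    def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
        """Rough dense-transformer training compute: ~6 FLOPs per parameter per token."""
        return 6.0 * n_parameters * n_training_tokens

    # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
    flops = estimate_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")                 # ~6.3e24
    print("Presumed high-impact:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)   # False

Under these assumptions, even a large model can fall below the presumption threshold, though the Commission may still designate a model as presenting systemic risk on other grounds.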

Considerations for an Open-Source AI Strategy

Below are just a few initial considerations developers, providers and users (called “deployers” under the AI Act) should keep in mind as they continue to develop, use and distribute such technologies:

What types of AI technologies are you using or sharing and for what purposes?

  • The first step of any AI compliance program is an inventory of how AI technologies are used in your business. Confirm whether they are licensed in to your entity or licensed out to third parties. This could include a combination of:
    • Proprietary AI systems and general-purpose AI models.
    • Open-source AI systems and general-purpose AI models.
    • Commercially licensed AI systems and general-purpose AI models.
    • Other commercially licensed services that include AI-powered functionality.
  • Companies frequently use multiple types of models or AI-powered tools for various internal purposes, even if they do not always realize it given the speed at which such technologies are integrated into commonly used software services (which might themselves include open-source AI technologies). This means a one-size-fits-all approach could quickly become obsolete, and it makes internal tracking and oversight a cornerstone of managing AI-related risk.
  • Whether license terms (including the license scope) are being complied with is another important layer of analysis. Especially where AI technologies are licensed from third parties, compliance with applicable license terms is critical to maintaining a legal right to use the technology, as licensors can often immediately terminate a licensee’s rights upon breach of those terms.

What are the pros and cons as a “provider” for licensing AI systems or GPAIMs to third parties under an open-source license that respects the requirements of the AI Act?

  • A provider might find it beneficial to make an AI system or general-purpose AI model available under an open-source license. Doing so could bolster innovation and development, enhance the provider’s reputation, market the availability of its more advanced solutions or help recruit technical personnel.
  • The AI Act also exempts qualifying open-source GPAIM providers from a number of transparency obligations, which could lessen the compliance burden.
  • However, qualifying for the exception still requires disclosures related to architecture, weights and training, which could implicate information a provider intends to maintain as confidential or a trade secret. To date, few GPAIMs meet the requirements of the open-source exception, which could limit the initial open-source GPAIM options available to users under the AI Act.

What are the pros and cons as a “deployer” (user) for licensing AI systems or GPAIMs from providers under the exceptions?

  • Truly open-source AI systems and GPAIMs are free, easily accessible and already developed, potentially saving companies time and money. Users benefit from the information GPAIM providers must disclose as part of the license terms, such as parameters, weights and architecture, which goes beyond what is often made available under a traditional open-source software license.
  • Among the potential drawbacks, users may be wary of security vulnerabilities or copyleft license effects, particularly where an open-source AI system or GPAIM has an unclear testing regimen or pedigree. That is especially true if users are not receiving representations, warranties or indemnities from a third party backing the training, suitability and security of such AI technologies. Users may also want a provider to be contractually obligated to:
    • Provide ongoing implementation support or maintenance.
    • Maintain a detailed understanding of how AI technologies were trained and operated in case regulators or customers raise questions.

What safeguards should I have in place if my company uses open-source AI technologies?

  • Beyond corporate oversight, companies should consider extensive operational precautions when evaluating, implementing and operating AI technologies.
  • Some of these precautions might already be in place for existing technical systems, including internal and external testing, quality management procedures, employee/contractor policies concerning AI, technical documentation of AI usage, log maintenance, maintaining a “human-in-the-loop” for AI operations and tracking and labeling output from AI models.
  • Additionally, while it might already be a part of a company’s open-source software policy or procedures, legal and technical personnel should evaluate the legal terms that apply to any new open-source AI technologies prior to internal or external use.
  • They should ensure there are no burdensome terms associated with such licenses or outright prohibitions that conflict with the business use case, such as commercial-use restrictions.
  • A company should track such terms and periodically audit the use of associated code to ensure the latest company use cases remain consistent with license terms.

[1] “AI system” is defined as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

[2] A “general-purpose AI model” is defined as “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the EU market and that can be integrated into a variety of downstream systems or applications.”

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Orrick, Herrington & Sutcliffe LLP | Attorney Advertising
