Digital Transformation Newsletter | December 2023: Monthly notes on digital technology and law

Hogan Lovells

November has been another intense month for digital technology regulation - with the Bletchley Declaration concluding the UK-hosted AI Safety Summit; the OECD's proposal for a universally applicable definition of AI; the first steps to put President Biden's Executive Order in motion; the adoption of a final text of the EU Data Act; and a political agreement on the forthcoming EU Cyber Resilience Act. Yet none of these events has been as captivating as the eleventh-hour discord over how to deal with foundation models in the context of AI regulation in Europe.

Until October, it looked as if the EU legislator was on track to quickly conclude work on its flagship project, the AI Act, which addresses the risks arising from artificial intelligence. On one of the most controversial points, the regulation of so-called foundation models (general-purpose AI that can perform a wide range of different tasks - GPT, DALL-E, Bard, etc.), the Spanish Presidency of the EU Council had come forward with a proposal that looked like a viable compromise to reconcile the diverging positions, suggesting different levels of safety obligations for different types of foundation models as follows:

  • Providers of all foundation models would need to comply with certain transparency obligations, providing information about the model architecture, the pre-training process and a number of benchmark indicators. Providers of all models would also need to give assurances of adherence to EU copyright law, allowing rights holders to opt out of certain data-scraping methods and thereby out of any onward use of their content to pre-train such foundation models.
  • Additional obligations would apply to particularly powerful foundation models (to be assessed by the computing power or the amount of data required for pre-training). These “very capable foundation models” would need to be actively vetted at regular intervals by independent auditors to ensure adherence to certain more stringent requirements.

Yet, in mid-November, the governments of France, Germany and Italy submitted a position paper calling for the onerous obligations on the most powerful foundation models to be scrapped and replaced by a form of mandatory self-regulation through codes of conduct – seeking to significantly ease the regulatory burden for such foundation models. This move was apparently motivated by a desire to support the development of foundation models from within the EU. Whatever the purpose, the response from members of the EU Parliament and an array of other stakeholders could not have been sharper.

In addition to rejecting the lenient approach proposed by the French, German and Italian governments, members of the EU Parliament insisted on the position they had taken earlier, which was to impose an array of significant additional obligations on all foundation models. These included guarantees for the protection of human rights, the implementation of risk-mitigation strategies and similarly onerous burdens.

The ensuing discord left the fate of the AI Act in suspense until the showdown came in the form of the fifth and final round of negotiations, which has taken place over the last few days and is expected to be wrapped up today. As things stand, an agreement has been struck on the controversial point of foundation models. As proposed by the Spanish Presidency, there will be an array of obligations for all foundation models, roughly in line with what was on the table before (see the first bullet point above). Providers of particularly powerful foundation models will additionally need to issue reports on their systemic risks, cybersecurity and environmental standards. These models will also be subject to codes of practice, but only until regulatory standards for their use and deployment are in place.


In a less controversial fashion, in early November the European Parliament adopted the final text of the EU Data Act, which is now on track to come into force in early 2024. The Data Act is arguably the most important piece of European legislation for the creation of a data economy and for digital transformation overall. Among other things, it creates a right to access and utilize data generated by digital devices (such as health data from a medical device or production data from an industrial automation tool). It is intended to pave the way towards the full commercialization of data, balancing other relevant legislation such as data protection and trade secret laws. It is hoped that the Data Act will lay the foundation for new businesses built on the use and transfer of data of any kind. Whether it will also facilitate the onward use of data in the context of fully automated smart contracts remains to be seen. For a comprehensive overview of this important piece of legislation, see our HL Engage publication, authored by Sara-Lena Kreutzmann, Martin Pflüger and Jasper Siems.

In a parallel development, EU lawmakers agreed on the final text of a Cyber Resilience Act, which is now likely to be adopted and enter into force in the fall of 2024. The Cyber Resilience Act mainly deals with the security of communication between devices and is therefore a major contribution to the functionality of the Internet of Things. It introduces comprehensive obligations for manufacturers, importers and distributors of all products with digital elements that may be used in conjunction with other products. More information about the Cyber Resilience Act has been published on HL Engage by Christian Tinnefeld, Henrik Hanssen, Michael Thiesen and Joke Bodewits.

Meanwhile, at the beginning of November at the UK's Bletchley Park, the once top-secret home of Allied code-breaking during the Second World War, 28 countries from around the world, including China and the U.S., as well as the EU, agreed on a joint declaration addressing the risks and opportunities arising from artificial intelligence. One of the most relevant points of this declaration is the agreement to build up, over time, a “State of the Science” report on “frontier AI”. This is an attempt to find common ground on where to focus political attention and regulatory oversight amid the fast-paced advancement of digital technology. More generally, the Bletchley Declaration rightfully stresses the need for sustained international collaboration on a substantive, and in particular scientific, level, with regular updates. For more information on this important UK-led initiative, see Dan Whitehead’s HL Engage publication, as well as further discussions from Eduardo Ustaran, Imogen Ireland, Telha Arshad, Lavan Thasarathakumar and Louise Crawford.

In the U.S., there has been a flurry of activity following President Biden’s Executive Order on the Safe, Secure and Trustworthy Development and Use of AI (announced in October – see last month’s note on this topic), including:

  • a notice by the Department of Commerce on information-sharing requirements for developers of certain foundation models;
  • a draft framework by the National Institute of Standards and Technology on AI safety and security testing;
  • the Department of Homeland Security’s preparations for a board of experts to advise on AI safety issues.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
