Global AI Regulatory Update - February 2024

Eversheds Sutherland (US) LLP

Co-authors: Clare Johnston, Jon Botham

Global

Guidelines for AI security

On November 27, 2023, AI Security Guidelines were published. The guidelines were led by the UK National Cyber Security Centre and developed with the US Cybersecurity and Infrastructure Security Agency. They aim to raise the level of cybersecurity in AI systems and help ensure that they are designed, developed, and deployed securely. The guidelines have been endorsed by a further 17 countries.

Impact: The guidelines are separated into four phases within the AI system development lifecycle. The aim is to help developers make sure that cybersecurity is baked in as an essential pre-condition of AI system safety and integral to the development process, from the start and throughout.

The guidelines cover secure design, including the trade-offs that need to be considered around system and model design, as well as development guidelines including supply chain security, documentation and asset and technical debt management.

Asia

Singapore: Framework to foster trusted GenAI development

On January 16, 2024, the AI Verify Foundation and the Infocomm Media Development Authority announced their draft Model AI Governance Framework for Generative AI (Framework). A consultation on the proposed Framework is now open for feedback and closes on March 15, 2024.

Impact: Aligned with Singapore’s National AI Strategy, the Framework proposes a balanced approach to addressing generative AI concerns while continuing to facilitate innovation. The Framework highlights the importance of global collaboration on policy approaches and emphasizes the need for policymakers to work with industry, researchers, and like-minded jurisdictions. While the Framework helps organizations understand the policy implications, it is more of a discussion paper and does not recommend specific practices for organizations to adopt.

Singapore: AI risk framework for the financial sector

The Monetary Authority of Singapore (MAS) announced the successful conclusion of phase one of Project MindForge, which seeks to develop a risk framework for the use of generative AI in the financial sector. MAS also released an executive summary of a whitepaper detailing the risk framework. We expected the whitepaper to be published by January 2024, but publication appears to have been delayed.

Impact: Project MindForge aims to develop a framework for the responsible use of GenAI in the financial industry and to catalyze GenAI-powered innovation to solve common industry-wide challenges. In phase one, the project developed a GenAI risk framework. A platform-agnostic GenAI reference architecture was also developed, providing a list of the building blocks and components that organizations can use to create robust enterprise-level GenAI technology capabilities.

In the next phase, Project MindForge will expand to involve financial institutions from the insurance and asset management industries. It aims to conduct experiments exploring the use of GenAI in areas such as anti-money laundering, sustainability, and cybersecurity.

Europe

EU: AI Act signed and consolidated text published

On February 2, 2024, the Belgian Presidency of the EU announced that the Committee of Permanent Representatives had signed the EU AI Act. In addition, the European Parliament’s Committee on Internal Market and Consumer Protection (IMCO) published the consolidated text of the AI Act.

Impact: The EU AI Act is now in what we expect will be its final text. Before it becomes law, the text will be considered by MEPs in the European Parliament, in a vote scheduled for mid-April. Once adopted, its provisions will enter into force progressively over the following two years: some prohibitions will take effect after six months, some provisions after 12 months, and the remaining provisions after 24 months.

EU: EC vision on the development and use of AI

On January 24, 2024, the European Commission (EC) confirmed that it had adopted its own approach to AI. The EC’s strategic vision is to foster the development and use of lawful, safe, and trustworthy AI systems.

Impact: The EC highlighted that when using or deploying AI internally, it will:

  • develop internal operational guidelines for staff, providing clear and pragmatic guidance on how to put AI systems into operation
  • assess and classify AI systems that the Commission is using or planning to use
  • refrain from using AI systems that are considered incompatible with European values
  • put in place organizational structures to fulfill the obligations of the Commission in relation to AI

EU: CJEU Artificial Intelligence Strategy

On January 19, 2024, the Court of Justice of the EU (CJEU) published a report outlining its Artificial Intelligence Strategy. In the report, the CJEU covers topics including the definition and typology of AI, as well as its own journey in exploring the possibilities afforded by AI.

Impact: The report outlines the CJEU’s three goals, namely: to improve the efficiency and effectiveness of administrative and judicial processes; to enhance the quality and consistency of judicial decisions; and to increase access to justice and transparency for EU citizens.

For the next phase, the CJEU has suggested it will:

  • adopt a governance structure that allows it to make smart choices in selecting the right AI tools for the right purpose, in a controlled way
  • create a mobility mechanism to shift resources to where they will make a difference
  • upskill staff in all relevant areas
  • set up a change-management program to assist in implementing the change
  • design and adopt a correct IT architectural posture, with embedded security, data protection, and ethics by design

EU: provisional agreement on algorithmic systems and AI use in the workplace

On December 13, 2023, the European Parliament and the Council of the European Union announced that they had reached a provisional agreement on the proposed directive to improve the working conditions of platform workers, put forward by the European Commission in December 2021.

Impact: The proposed directive aims to establish, among other things, the first EU-wide rules on the use of algorithmic systems and AI in the workplace.

Following the provisional agreement, the proposed directive will need to be formally adopted by the Parliament and the Council. Once it is published in the Official Journal of the EU, Member States will have two years to incorporate its provisions into their national legislation.

Middle East

UAE: Establishing the Artificial Intelligence and Advanced Technology Council

On January 22, 2024, President Sheikh Mohamed Bin Zayed Al Nahyan announced the creation of the Artificial Intelligence and Advanced Technology Council (AIATC).

Impact: The AIATC will be responsible for developing policies and strategies related to research, infrastructure, and investments in AI and advanced technology in Abu Dhabi, the UAE's capital.

UAE: Guidelines on LLMs and GenAI in DIFC Court proceedings

On December 21, 2023, the Dubai International Financial Centre (DIFC) issued new guidance on the use of ‘large language models’ (LLMs) and ‘generative content generators’ (GCGs) in DIFC Court proceedings (Guidance). The Guidance includes a series of best practices and principles that parties should consider, with a particular focus on transparency over the use of AI-generated content in DIFC Court proceedings.

Impact: The Guidance aims to strike a balance by allowing parties to use AI-generated content in proceedings while providing safeguards against the risks AI technologies may raise. The Guidance reflects that AI-generated content should be verified for accuracy and reliability, and early disclosure of the use of AI is highly recommended. The Guidance further notes that parties should not wait until shortly before trial, or the trial itself, to reveal that they intend to use AI-generated content, as this may lead to requests for adjournments and the loss of trial dates.

UAE: Collaboration to advance AI regulatory compliance in Financial Services

In November 2023, the Financial Services Regulatory Authority (FSRA) of Abu Dhabi Global Market and the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) signed a Memorandum of Understanding focused on advancing the role of AI in achieving better regulatory outcomes within the financial services sector.

Impact: The partnership aims to develop regulatory and supervisory technology that enhances regulatory compliance and operational efficiency in the delivery of financial services.

UK

UK: LLM and GenAI report

On February 2, 2024, the House of Lords’ Communications and Digital Committee released a report on large language models (LLMs), such as ChatGPT. The Committee had launched an inquiry into LLMs to evaluate what actions should be taken within the next three years to ensure LLMs benefit people, the economy, and society.

Impact: The report highlights that the government has ‘pivoted too far’ towards a narrow focus on high-stakes AI safety and that a rebalance towards near-term risks is needed. The report also points to market power and regulatory capture by vested interests as needing urgent attention, and calls for the government and regulators to prioritize open competition and transparency.

The report sets out a number of key measures and recommendations, including more support for AI start-ups, support for copyright holders (including measures such as a way for rightsholders to check training data for copyright breaches), boosting computing infrastructure, and exploring options for an ‘in-house’ sovereign UK large language model.

Generative AI framework for HM Government published

On January 18, 2024, the Cabinet Office and the Central Digital and Data Office published guidance on how civil servants and those working in government organizations should use generative AI safely and securely.

Impact: The generative AI framework for HM Government sets out ten principles to guide the safe, responsible, and effective use of generative AI in government organizations, including using the technology ethically and responsibly, knowing how to keep the tools secure, and ensuring there is meaningful human control where necessary. The guidance also outlines practical steps that should be taken when building generative AI solutions to ensure they are created in a way that takes into account legal considerations, ethics, data protection and privacy, security, and governance.

ICO consultation on GenAI and data protection

On January 15, 2024, the Information Commissioner’s Office (ICO) launched a consultation on generative AI models and how data protection law should apply to their development. The consultation closes on March 1, 2024, and is the first in a series of consultations to be launched by the ICO. This consultation outlines the ICO’s preliminary thinking on the interpretation of UK data protection laws in relation to generative AI models.

Impact: Developers need to consider the legality and compliance requirements prior to setting up generative AI processes. They also need to consider whether there is a valid use case for training the generative AI model using data available online (the training of generative AI currently involves the use of large volumes of data, which can conflict with privacy requirements).

Even if developers conclude that they have a sufficient lawful basis and that processing is necessary to achieve the use case, do the individuals’ rights and freedoms override the controllers’ interests? The ICO raises a key concern here: such processing is ‘invisible’, making it more difficult for individuals to maintain control and understand what organizations are doing with their data.

Read more in our article.

US

US: Creation of Artificial Intelligence Safety Institute Consortium

On February 8, 2024, the Secretary of Commerce announced the creation of the AI Safety Institute Consortium. Housed under the National Institute of Standards and Technology, the Consortium aims to unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy AI.

Impact: The Consortium aims, among other things, to:

  • establish a knowledge and data sharing space for AI stakeholders
  • prioritize research that provides a complete understanding of AI’s impacts on society
  • enable assessment of test systems and prototypes to inform future AI measurement efforts

US: FCC confirms AI-generated voices are subject to the TCPA

On February 8, 2024, the Federal Communications Commission (FCC) confirmed that existing Telephone Consumer Protection Act (TCPA) restrictions apply to AI-generated voices when they are used in robocalls. The FCC also confirmed that companies using AI in their telemarketing should carefully ensure their use complies with TCPA regulations.

Impact: The FCC is concerned that AI is quickly making it cheaper and easier for companies utilizing telemarketing in their sales to make robocalls using convincing human voices. While consumers can sometimes still identify AI-generated voices due to their tone or cadence, AI-generated voices are becoming increasingly difficult to distinguish from live human voices.

Read more in our article.

US: Artificial Intelligence Environmental Impacts Act of 2024

On February 1, 2024, US Senators introduced the Artificial Intelligence Environmental Impacts Act of 2024. The legislation would direct the National Institute of Standards and Technology (NIST) to develop standards to measure and report the full range of AI’s environmental impacts, as well as create a voluntary framework for AI developers to report environmental impacts.

Impact: Within two years of the law being enacted, the Environmental Protection Agency must conduct a study on the environmental impacts of AI. Among other things, the law also provides for the establishment of a voluntary reporting system on the environmental impacts of AI.

Bill on AI Foundation Model Transparency

On December 22, 2023, US Representatives introduced the AI Foundation Model Transparency Act (Act). The Act responds to concerns that "widespread public use of foundation models has also led to countless instances where the public is being presented with inaccurate, imprecise, or biased information”.

Impact: The Act intends to:

  • direct the Federal Trade Commission (FTC) to set transparency standards for foundation model deployers by requiring them to make certain information publicly available to consumers; the standards and accompanying guidance are to be published no more than nine months after enactment of the Act
  • direct companies to provide consumers and the FTC with information on the model’s training data, model training mechanisms, and whether user data is collected during inference
  • protect small deployers and researchers, while seeking responsible transparency practices from the highest-impact foundation models


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Eversheds Sutherland (US) LLP | Attorney Advertising
