Dechert Cyber Bits - Key Developments in Privacy & Cybersecurity - Issue 46

[co-author: Connor Flannery]

Articles in this issue

  • EU AI Act: Political Agreement Reached on Terms of Landmark Legislation
  • Fines Cannot Be Issued Under GDPR Unless Violation “Intentional or Negligent”
  • CPPA Releases Proposal for Automated Decisionmaking Rules
  • UK Data Regulator Issues Warnings on Cookie Compliance
  • FTC Authorizes Compulsory Process for AI-related Products and Services

EU AI Act: Political Agreement Reached on Terms of Landmark Legislation

Negotiators for the European Council and the European Parliament have reached political agreement on the provisions of the EU Artificial Intelligence Act (the “AI Act”), marking a major step forward for the groundbreaking legislation. Efforts will now turn to the more technical task of finalizing the details of the AI Act. The AI Act is expected to be formally adopted in early 2024 and to come into effect in phases, with the first requirements taking effect after 6 months and the majority of requirements coming into force after 24 months.

The AI Act is EU-wide legislation that aims to ensure that AI systems used in the EU are safe and respect fundamental rights, as well as to stimulate investment and innovation in AI. It is likely to apply to businesses putting their AI systems on the market in the EU, EU businesses using AI, and non-EU businesses using AI outputs in the EU. The legislation takes a risk-based approach, with stricter rules for higher-risk uses of AI. If adopted, the maximum fines under the AI Act will be the higher of 7% of global annual turnover or EUR 35 million.

Particular points of debate in the marathon negotiations were law enforcement uses of AI, the regulation of general-purpose AI, and support for innovation. Just weeks before the latest round of negotiations, France, Germany, and Italy objected to the proposed regulation of general-purpose AI. To balance these concerns, negotiators agreed on an updated two-tier approach to regulating general-purpose AI. To address the risk that the AI Act could stifle innovation in the EU, the AI Act is set to contain exclusions for AI systems in the R&D and pre-training phases, as well as a regulatory sandbox. The EU also sees the certainty and “first-mover advantage” of the AI Act as competitive benefits.

Takeaway: The EU is on track to implement the first major global AI regulation and aims to provide a blueprint for AI governance internationally. The proposed legislation has extra-territorial scope and the potential for significant fines. Businesses developing or using AI should map their current usage of AI, consider whether they are within the territorial scope of the AI Act, and assess what risk-level category each AI deployment falls under. Such categorizations will be critical to determining the relevant obligations imposed by the AI Act. Assuming formal adoption in early 2024, the majority of obligations are expected to apply from early 2026, but the AI Act’s prohibitions on the most harmful types of AI usage will apply from mid-to-late 2024, and certain transparency obligations are due to apply after 12 months, in early 2025.

Fines Cannot Be Issued Under GDPR Unless Violation “Intentional or Negligent”

In two decisions issued on December 5, 2023 in response to references from national courts in Lithuania and Germany, the Court of Justice of the EU (“CJEU”) ruled on issues relating to culpability under the GDPR and the conditions for imposing fines. Most remarkable is the CJEU's finding that a fine can only be imposed under the GDPR if the violation was committed either intentionally or negligently.

The CJEU held that:

  1. a data controller can only be fined for a GDPR violation if it was committed intentionally or negligently;
  2. fines based on revenue should take into account not only the revenue of the legal entity that committed the violation, but also the revenue of any other entities that form part of the same “economic unit” (group-wide turnover will therefore commonly form the basis for calculating fines);
  3. senior management do not need to be involved in, or aware of, a violation for an organization to be liable – the actions of anyone acting on behalf of the organization can give rise to liability;
  4. the laws of individual Member States cannot impose additional conditions for the imposition of fines.

Takeaway: The large number of references coming to the CJEU demonstrates the challenges national courts are facing in interpreting and applying the GDPR. Many of the CJEU’s conclusions are unsurprising and confirmatory. The requirement that breaches be “intentional or negligent” for a fine to be imposed is a less straightforward interpretation of the legislation and may give organizations under investigation additional arguments to avoid a fine. In practice, however, the requirement of “intent” or “negligence” is unlikely to have a material impact, as data regulators have to date been required to be proportionate when deciding whether to issue fines. Material fines, as a result, were already unlikely in cases where no wrongdoing in the form of intention or negligence was involved.


CPPA Releases Proposal for Automated Decisionmaking Rules

On November 27, 2023, the California Privacy Protection Agency (“CPPA”) released proposed California Consumer Privacy Act (“CCPA”) regulations on automated decisionmaking technologies (“ADMT”). The proposal defines ADMT as “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking.” The CPPA’s proposed regulations target a broad swath of technologies and businesses, particularly businesses that use ADMT to:

  • Make decisions that have significant impacts on consumers’ lives, such as decisions about employment or compensation;
  • Profile employees, contractors, applicants, or students, including, for example, performance analytics and location tracking activity;
  • Profile consumers in publicly accessible places via facial-recognition technology or other emotion assessment tools; or
  • Profile consumers for behavioral advertising.

The proposed regulations include three major protections. First, a business that uses ADMT in the above ways would be required to provide “Pre-use Notices” to inform consumers about the business’ intended use of ADMT. Second, a business must allow consumers to opt out of the business’ use of ADMT. The right to opt out is limited by several exceptions: a business is not required to allow consumers to opt out if ADMT is used to prevent security incidents, prevent fraud, protect life and safety, or provide a good or service the consumer requested. Third, a business using ADMT must afford consumers access to information about how the business used ADMT to make a decision about the consumer.

While the formal rulemaking process has not yet begun, the proposed regulations were discussed at the CPPA board meeting on December 8, 2023. The board released an addendum that would solidify a consumer’s right to opt out of a business’ ADMT use and prohibit businesses from using ADMT to profile consumers actually known to be under 16 years old without consent. The CPPA expects to begin its formal rulemaking process in 2024.

Takeaway: This push to regulate AI-driven technologies comes as an increasing number of businesses implement tools to keep track of customers, employees, and other key metrics. Businesses that currently use, or plan to use, ADMT should keep tabs on the evolution of the CPPA’s proposal. If enacted, the CPPA’s proposed regulations could impose first-in-the-nation guardrails on how AI-driven technologies are used and have the potential to influence how tech companies develop AI.


UK Data Regulator Issues Warnings on Cookie Compliance

Earlier this year, the UK Information Commissioner’s Office (“ICO”) issued guidance stating that organizations must make it as easy for users to "Reject All" advertising cookies as it is to "Accept All." The ICO has now written to the organizations running some of the UK's most visited websites threatening enforcement action if they fail to bring their use of cookies into compliance with UK data protection laws within 30 days.

It was only a few years ago that the ICO’s website expressly indicated a light-touch approach to cookies enforcement, highlighting low levels of public concern around cookies. The ICO has re-evaluated the risks of cookies over recent years, reflecting increased public concern. Announcing its latest action, the ICO’s Executive Director of Regulatory Risk emphasized the serious risks of targeted advertising: “Gambling addicts may be targeted with betting offers based on their browsing record, women may be targeted with distressing baby adverts shortly after miscarriage and someone exploring their sexuality may be presented with ads that disclose their sexual orientation.”

Takeaway: Compliance with UK legal requirements in relation to cookies is an area of renewed focus for the ICO. The ICO is likely to be critical of cookie tools that are designed to make it difficult to reject cookies. Elsewhere in Europe, cookies are also in the crosshairs. In early 2023, the European Data Protection Board adopted a report of its Cookie Banner Taskforce, which similarly warned against website design techniques used to influence user choices on cookies (see Cyber Bits Issue 27). Website operators should review their approach to cookies and, if needed, consider making changes in light of increasing regulatory action and public concern.


FTC Authorizes Compulsory Process for AI-related Products and Services

On November 21, 2023, the U.S. Federal Trade Commission (“FTC”) approved a resolution expanding its investigative authority over products and services that use, claim to be produced by, or claim to detect use of, artificial intelligence. The resolution authorizes FTC staff to use “any and all compulsory processes available” in nonpublic investigations. The FTC’s most-used tool—the civil investigative demand (“CID”) (a form of compulsory process similar to a subpoena)—enables the FTC to make a legally enforceable demand for a wide range of information, documents, and testimony from businesses.

In a press release announcing approval of the resolution, the FTC emphasized that the resolution will “streamline” the FTC staff’s ability to issue CIDs in investigations relating to AI, while retaining the FTC’s authority to determine when CIDs are issued. The resolution will be in effect for the next ten years.

Takeaway: This resolution, among other FTC actions, illustrates the FTC’s heightened regulatory interest in AI and signals what is sure to be an increase in CIDs in the AI space. Given the FTC’s emphasis on swift and comprehensive access to information in AI-related investigations, businesses offering AI-related products and services may want to consider proactively preparing internal records concerning marketing claims, product development practices, and third-party oversight, so they can respond completely and quickly should regulatory scrutiny arise.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dechert LLP | Attorney Advertising

Written by:

Dechert LLP