Dechert Cyber Bits - Issue 68

Dechert LLP

Articles in this Issue

  • DOJ Final Rule: New US Restrictions on Nearly All Foreign Access to Personal Data
  • HHS Office for Civil Rights Proposes Cybersecurity-focused Changes to HIPAA’s Security Rule
  • EDPB Opinion on Personal Data Use in AI Models
  • UK Court of Appeal Puts Brakes on Misuse of Private Information Class Actions
  • Connecticut AG Advises Consumers and Businesses of New Privacy Rights & Requirements Under the CTDPA
  • Oregon Department of Justice Releases New Guidance on Legal Requirements for Businesses Using AI
  • CPPA Reaches Settlement with Two Data Brokers
  • Dechert Tidbits

DOJ Final Rule: New US Restrictions on Nearly All Foreign Access to Personal Data

The National Security Division of the United States Department of Justice has issued a sweeping final rule that would prevent access to U.S. bulk "sensitive personal data" and government-related data by covered persons or countries of concern (the "Final Rule"). Pursuant to the Final Rule, and as soon as April 8, 2025, U.S. companies will be restricted, and in some cases prohibited, from engaging in transactions that involve the transfer of or access to certain data by China (including Hong Kong and Macau), Cuba, Iran, North Korea, Russia and Venezuela, and by covered persons with significant connections thereto.

With respect to "sensitive personal data," the Final Rule defines that term very broadly to include: (1) certain personal identifiers, such as names linked to device identifiers, social security numbers, driver’s licenses, or other government identification numbers; (2) precise geolocation data; (3) biometric identifiers; (4) human genomic data and certain other human "'omic data"; (5) personal health data (e.g., height, weight, vital signs, symptoms); and (6) personal financial data. Three categories of transactions involving bulk sensitive personal data are restricted: those involving vendor agreements, employment agreements, and non-passive investment agreements. The Final Rule contemplates U.S. companies being able to engage in those transactions if they meet certain cybersecurity and reporting requirements. Other transactions related to "bulk" sensitive personal data—those involving data brokerage and access to bulk human "'omic data" or human biospecimens from which such data can be derived—are prohibited under the Final Rule. The Final Rule also restricts the transfer of and access to U.S. government-related data.

Notably, the Final Rule contains a series of important exceptions, including for transactions involving some financial services activities and certain corporate group transactions. In addition, even though the Final Rule primarily focuses on direct transactions with persons associated with the six designated "countries of concern" (so designated because of their long-term or serious conduct adverse to United States national security and the significant risk that they will exploit bulk sensitive personal and government-related data), it has implications for all foreign transactions.

Despite industry requests to delay the compliance date, the Final Rule goes into effect on April 8, 2025, and companies will be expected to comply with most requirements on that date.

Takeaway: Given the complexities of the Final Rule and the April 2025 compliance date, U.S. companies should waste no time in analyzing the Final Rule’s impact on their data flows and business models. It is unlikely that companies will be able to take a one-size-fits-all approach to compliance, as different data flows will constitute different "transactions" and raise unique compliance obligations. We expect to see companies move quickly to engage in a fact-intensive applicability analysis and determine the extent to which they can leverage the Final Rule’s sometimes narrowly drawn exemptions. In-scope transactions and data flows will require companies to design and implement a comprehensive risk mitigation strategy in the near term.

HHS Office for Civil Rights Proposes Cybersecurity-focused Changes to HIPAA’s Security Rule

On December 27, 2024, the U.S. Department of Health and Human Services Office for Civil Rights ("OCR") issued a Notice of Proposed Rulemaking that, if finalized, would modify the Health Insurance Portability and Accountability Act of 1996 ("HIPAA") Security Rule for the first time since 2013 (the "Proposed Rule"). The Proposed Rule seeks to bolster healthcare organizations’ ability to protect sensitive healthcare information from cybersecurity threats. The Proposed Rule’s comment period will close on March 7, 2025, which is 60 days following its publication in the Federal Register.

The Proposed Rule focuses on the protection of electronic protected health information ("ePHI"). It would make several significant changes to HIPAA’s Security Rule by clarifying existing requirements and adding new obligations. These include, but are not limited to: (1) removing the distinction between "required" and "addressable" implementation specifications; (2) adding specific compliance time periods for existing requirements; (3) increasing required levels of specificity in risk analysis; (4) requiring healthcare organizations to develop and maintain a written technology asset inventory and network map and other documentation of Security Rule policies, procedures, plans, and analyses; (5) requiring annual compliance audits; (6) mandating new security controls, including encryption of ePHI at rest and in transit and multi-factor authentication; and (7) requiring healthcare organizations to update new and existing business associate agreements to include information concerning the activation of business associates’ contingency plans.

Among other things, OCR estimates that "the enhanced security posture of regulated entities would likely reduce the number of breaches of ePHI and mitigate the effects of breaches that nonetheless occur," and estimates that if its proposed changes are enacted, "the revised Security Rule would pay for itself."

Takeaway: The proposed revisions to HIPAA’s Security Rule would require a tremendous amount of work from HIPAA-regulated entities if finalized. Of course, the future of the Proposed Rule remains uncertain, as its advancement ultimately rests in the hands of the Trump administration. That said, the first Trump administration placed a significant emphasis on cybersecurity, and the possibility that the Proposed Rule will be finalized in some form should not be discounted. HIPAA-regulated entities will want to monitor the proposal’s trajectory closely and should consider contacting industry organizations to submit comments on the proposal in advance of the March 7 deadline.

EDPB Opinion on Personal Data Use in AI Models

The European Data Protection Board ("EDPB") released an opinion on the use of personal data for developing and deploying AI models. The opinion was issued in response to a request from Ireland's Data Protection Commission seeking clarity on how the GDPR applies to AI training.

The opinion addresses:

  1. When and how an AI system that is trained using personal data can be treated as only processing "anonymous" data. According to the opinion, AI developers should not treat the data as "anonymous" unless the likelihood that identifying information will be extracted or released is insignificant when taking into account "all the means reasonably likely to be used" by the controller or another person. The opinion provides examples of steps that can be taken by AI providers to demonstrate that the data is anonymous;
  2. The circumstances in which AI developers can rely on "legitimate interests" for their use of personal data in the training and deployment of AI models. The opinion highlights risks that AI models can pose to individuals’ fundamental rights that must be balanced against a controller’s legitimate interests, and provides examples of measures that can be taken to mitigate such risks;
  3. How the unlawful training of an AI model impacts the lawfulness of subsequent use of the AI model. This is particularly important given that data protection authorities can order remedial measures including deletion of the AI model itself. According to the EDPB, if the operation of the model does not involve processing personal data, the fact that it was trained unlawfully does not taint the subsequent use; however, in other circumstances where personal data is retained, unlawful training of the model may impact the lawfulness of subsequent use of the model.

Takeaway: The EDPB opinion provides guidance on important issues primarily targeted at businesses developing AI models. However, it is also significant for those using AI models. The opinion indicates that businesses using third-party AI models that retain personal data can be in breach of their own data protection obligations if they use an unlawfully trained AI model without conducting appropriate diligence. Businesses onboarding new AI models will need procedures that include appropriate data protection diligence as part of that onboarding. The UK data regulator has also been grappling with the data protection issues arising in relation to AI; its guidance on generative AI was published in December 2024.

UK Court of Appeal Puts Brakes on Misuse of Private Information Class Actions

In a claim brought by Andrew Prismall against Google and DeepMind, purportedly on behalf of a class of approximately 1.6 million individuals, the UK Court of Appeal considered the requirements for opt-out class actions in the context of an alleged misuse of private information claim.

A group of London hospitals had shared medical data of approximately 1.6 million patients with Google and DeepMind to develop an app for the treatment of acute kidney injury. The UK data regulator found that these arrangements contravened UK data protection law. Mr. Prismall initially brought a class action for breach of data protection law. However, after the UK Supreme Court’s decision in Lloyd v. Google limited the availability of class actions for data protection claims, he withdrew the data protection claim and issued a new claim for "misuse of private information."

The Court of Appeal considered whether the claim could proceed as a "representative action," a type of opt-out class action. Mr. Prismall had to establish that each member of the class of 1.6 million individuals had a realistic prospect of succeeding in their claim without an individualized assessment of their circumstances. The Court of Appeal therefore considered whether the notional claimant with the worst claim within the class (the "lowest common denominator" claimant) had a realistic prospect of success.

The Court of Appeal held that the class of claimants included individuals who had made their medical treatment public. In such circumstances it could not be said, without considering the particular circumstances of an individual’s case, that every claimant had a reasonable expectation of privacy in relation to the data shared with Google and DeepMind. The class action therefore had to be dismissed.

Takeaway: The UK Court of Appeal underlined that a representative class claim for misuse of private information is "always going to be very difficult to bring." That the UK Court of Appeal recognized that the class could not be maintained for those potential class members who had made their information public bodes well for those on the receiving end of these types of actions. It makes sense that the Court did not simply rubber-stamp the class allegations and instead looked carefully at the individual circumstances to determine whether a class could be maintained.

Connecticut AG Advises Consumers and Businesses of New Privacy Rights & Requirements Under the CTDPA

Just before the new year, Connecticut Attorney General William Tong reminded consumers and businesses of their new rights and responsibilities under the Connecticut Data Privacy Act ("CTDPA"). Most provisions in the CTDPA, Connecticut’s comprehensive consumer privacy law, went into effect on July 1, 2023, with a delayed compliance date for certain provisions of January 1, 2025.

Under the CTDPA, Connecticut consumers now have the right to configure their web browsers to send global opt-out preference signals ("OOPS"), a universal notification to businesses that the consumer wishes to opt out of targeted advertising and the sale of their personal data across all online activities. Covered businesses are required to comply with such signals so long as the OOPS is sent in a way that enables the business to verify that the consumer is a Connecticut resident. Notably, a consumer’s OOPS signal overrides that consumer’s previous privacy choices, though covered businesses can contact customers whose OOPS conflicts with previously stated preferences and ask them to confirm their preferences. However, if the consumer indicates that the OOPS signal controls, the business must comply with it.
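As background on the mechanics: the most widely deployed opt-out preference signal is the Global Privacy Control ("GPC"), which participating browsers attach to outgoing requests as the HTTP header "Sec-GPC: 1" and expose to scripts as navigator.globalPrivacyControl. The sketch below (in Python, using a hypothetical helper name) illustrates one way a server might detect such a signal; it is illustrative only, and the CTDPA does not prescribe a particular signal or detection method.

```python
def honors_opt_out_signal(headers: dict[str, str]) -> bool:
    """Hypothetical helper: return True if a request carries a
    recognized opt-out preference signal (here, the GPC header)."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    # Per the GPC specification, participating browsers send "Sec-GPC: 1".
    return normalized.get("sec-gpc") == "1"


# Illustrative use: if the signal is present and the visitor is verified
# as a Connecticut resident, suppress targeted advertising and data sales.
if honors_opt_out_signal({"Sec-GPC": "1"}):
    print("Opt-out preference signal detected; treat as an opt-out request.")
```

In practice, businesses typically rely on a consent management platform to perform this detection and to propagate the opt-out to downstream advertising partners.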

Takeaway: Businesses subject to the CTDPA that have not already done so will need to work quickly to implement tools to recognize and honor the relevant opt-out preference signals in accordance with the newly phased-in CTDPA requirements.

Oregon Department of Justice Releases New Guidance on Legal Requirements for Businesses Using AI

On December 24, 2024, Oregon Attorney General Ellen Rosenblum and the Oregon Department of Justice issued advisory guidance to help companies understand how Oregon’s various laws affect their implementation and use of AI (the "Guidance").

The Guidance focuses on the implications that the state’s Unlawful Trade Practices Act, Consumer Privacy Act, Consumer Information Protection Act, and Equality Act have for companies’ use of AI platforms and large language models. Among other reminders, the Guidance cautions that businesses may be liable for AI technology that: (1) misleads or makes misrepresentations about the business’s products and services; (2) uses consumer personal data without a clear and accessible privacy notice or after a consumer has exercised their right to opt out of the use of AI models for profiling in certain impactful decisions; (3) fails to safeguard consumers’ personal information; or (4) utilizes discriminatory inputs or produces biased outcomes that harm consumers based on protected characteristics.

The Oregon DOJ took special care to note that the Guidance is not exhaustive and that companies may be subject to obligations under existing state laws beyond those listed in it, as well as under bills passed during the 2025 Oregon legislative session.

Takeaway: Oregon businesses using AI tools should look to AG Rosenblum’s recent Guidance as an indicator of the state’s views on AI regulation. The Guidance also signals that the state’s enforcement arm is likely to keep a close eye on businesses that use AI. However, the Guidance remains a work in progress and is not exhaustive. Businesses that have recently developed or deployed AI, or are considering doing so, should consult the full corpus of Oregon state law to ensure they comply with both existing laws and the legal standards that develop as technological advances continue.

CPPA Reaches Settlement with Two Data Brokers

On December 23, 2024, the California Privacy Protection Agency ("CPPA") announced settlements with two data brokers, PayDae, Inc. (doing business as "Infillion") and The Data Group, LLC, for allegedly failing to register and pay the annual fee mandated by California’s data deletion law (Senate Bill 362), known as the "Delete Act." The CPPA’s Board approved these settlements during a closed session on December 19, 2024. These settlements come on the heels of the CPPA’s ramped-up enforcement actions, which we have covered in previous editions, including an investigative sweep beginning last October and another pair of settlements last November.

New York-based data broker Infillion and the Florida-based data broker Data Group agreed to pay $54,200 and $46,600, respectively, for allegedly failing to register by the 2024 deadline, though both companies did register later in 2024. Both companies also agreed to injunctive terms as part of their settlements. These actions bring the total number of settled cases to four, highlighting the CPPA's commitment to enforcing compliance among data brokers.

Pursuant to the Delete Act, since 2024, data brokers operating in California have been required to register with the CPPA, pay an annual fee, and disclose certain information about their practices by January 31 of the year following any year in which they met the definition of a "data broker." Additional requirements will go into effect in 2026 and 2028. The CPPA is authorized to impose fines of $200 per day for failing to register by the deadline.

Takeaway: With this year’s January 31, 2025, registration deadline rapidly approaching, the CPPA’s escalated enforcement serves as a reminder to qualifying data brokers to complete their annual registration on time. Companies subject to the Delete Act that operated as data brokers in 2024 should take measures to register on the CPPA’s website before the deadline and ensure compliance with their obligations under the Delete Act in order to avoid fines or other sanctions.

Dechert Tidbits

Ireland's DPC Fines Meta €251m Following 2018 Data Breach

The Irish Data Protection Commission concluded two inquiries into Meta following a data breach that affected approximately 29 million Facebook accounts globally. The breach arose from the deployment of a video upload function in July 2017, which allowed unauthorized access to multiple user profiles and was exploited between September 14 and 28, 2018. The final decisions include administrative fines totaling €251 million.

UK Regulator Releases First Online Safety Act Codes of Practice

Ofcom, the authority charged with enforcing the UK’s Online Safety Act, has published its first codes of practice and guidance under the Act. The Online Safety Act introduces new requirements for tech firms to tackle illegal online harms, such as terrorism, hate, fraud, child sexual abuse, and suicide encouragement. In-scope tech firms are required to assess and mitigate risks posed by illegal content from March 2025.

U.S. House Task Force Publishes Bipartisan AI Report

Last month, the Bipartisan House Task Force on Artificial Intelligence published a report with guiding principles, recommendations, and policy proposals designed to ensure America continues to lead the world in responsible AI development and innovation. The Task Force’s report offers insight into Congressional thinking about issues such as government use of AI, preemption of state AI laws via federal legislation, the role of certain industries in the AI race, data privacy, national security, and civil rights and civil liberties.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Dechert LLP
