Dechert Cyber Bits - Issue 33

Articles in this issue

  • Montana and Tennessee on Verge of Joining Other States in Passing Comprehensive Privacy Laws
  • EU Responds to Cyber Threats with Creation of Joint Cyber Shield
  • Four U.S. Federal Agencies Issue Joint Statement Signaling Intent to Police Fraudulent and Discriminatory Uses of AI
  • Kochava Sidesteps (For Now) The FTC’s Allegation That Its Sale of Geolocation Data Violated Section 5 of the FTC Act

Montana and Tennessee on Verge of Joining Other States in Passing Comprehensive Privacy Laws

On April 21, 2023, the state legislatures of Montana and Tennessee passed comprehensive privacy bills. To become law, the bills must be signed by those states’ governors. If enacted (as we expect they will be), Montana’s privacy law would take effect on October 1, 2024, and Tennessee’s on July 1, 2025.

The Montana Consumer Data Privacy Act, modeled after the Connecticut Data Privacy Act (which takes effect July 1, 2023), applies to companies doing business in Montana or that target products or services to Montana consumers, and that either: (a) control or process personal data of at least 50,000 Montana consumers; or (b) control or process personal data of at least 25,000 Montana consumers and derive over 25% of gross revenue from selling that data. Montana’s Attorney General has the exclusive right to enforce the law, which expressly bars a private right of action. The law does not apply to personal information already covered by various federal statutes, including Gramm-Leach-Bliley and HIPAA.

The Tennessee Information Privacy Act, modeled after the Utah Consumer Privacy Act (which takes effect December 31, 2023), is structurally like the Montana bill but, notably, applies only to companies with revenues exceeding $25 million. The bill applies to companies doing business in Tennessee or that target products or services to Tennessee consumers, and that control or process personal information of: (a) at least 175,000 Tennessee consumers; or (b) at least 25,000 Tennessee consumers, if the company derives more than 50% of gross revenue from selling that data. As in the Montana bill, Tennessee’s Attorney General has the exclusive right to enforce the law, and the law expressly bars a private right of action. The Act would also exclude personal information covered by various federal laws, again including Gramm-Leach-Bliley and HIPAA.

At a high level, both the Tennessee and Montana laws will provide their respective states’ consumers with the now-familiar rights of access, deletion, correction, and portability. Both proposed laws will also grant those consumers the right to opt out of the sale of personal information and of the processing of personal information for targeted advertising and profiling purposes. Further, both laws will require covered entities to conduct data protection assessments for processing activities that could present risks to consumers.

Both proposed laws include bespoke requirements that further complicate compliance efforts for companies. For example, the Tennessee law will require covered entities to “create, maintain, and comply with a written privacy program that reasonably conforms to the [NIST] privacy framework entitled ‘A Tool for Improving Privacy through Enterprise Risk Management Version 1.0’.” Benchmarking to NIST is a requirement that has not appeared in any other state privacy law to date. The Montana law, like certain other state privacy laws, will require covered entities to honor opt-out preference signals.

Takeaway: Montana and Tennessee will likely become the eighth and ninth states to enact comprehensive privacy laws. Companies operating in, or targeting consumers in, Montana and Tennessee should consider taking steps now to closely review and update their data practices, compliance programs, and website privacy policies before the new laws take effect. Given the decades-long wait for a comprehensive federal privacy law, expect states to continue raising the bar, one statute at a time.

EU Responds to Cyber Threats with Creation of Joint Cyber Shield

On April 18, 2023, the European Commission proposed the EU Cyber Solidarity Act. The aim of this proposed Act is to enhance the EU's capacity to detect, prepare for, and respond to significant or large-scale cybersecurity incidents. The proposal has two main components: the European Cybersecurity Shield and the Cybersecurity Emergency Mechanism.

The European Cybersecurity Shield would comprise Security Operations Centres (“SOCs”) located across the EU that use advanced technologies, such as artificial intelligence and data analytics, to detect cybersecurity threats and share warnings with authorities across borders.

The Cybersecurity Emergency Mechanism is intended to improve preparedness for, and responses to, cybersecurity incidents. This would be achieved by, for example, testing entities in critical sectors such as healthcare, transport, and energy for potential vulnerabilities to attack, and by creating an EU Cybersecurity Reserve to provide mutual assistance to a Member State affected by a cybersecurity incident.

The proposed legislation would also establish the Cybersecurity Incident Review Mechanism to assess and review specific cybersecurity incidents. At the request of the Commission or national authorities, the European Union Agency for Cybersecurity (“ENISA”) would be responsible for reviewing specific significant or large-scale cybersecurity incidents and delivering a report on lessons learned and, where appropriate, recommendations to improve the Union’s cyber response.

Takeaway: This draft legislation has a complex path to navigate before it passes into law, not least securing buy-in from EU Member States. The size and severity of attacks have greatly increased over the last couple of years, and the ability of individual nations to deal with such threats on their own is open to question. The pooling of resources may well lead to greater resilience in infrastructure and better protection for small and medium-sized enterprises (“SMEs”).

Four U.S. Federal Agencies Issue Joint Statement Signaling Intent to Police Fraudulent and Discriminatory Uses of AI

On April 25, 2023, the U.S. Consumer Financial Protection Bureau (“CFPB”), the Department of Justice’s Civil Rights Division (“DOJ”), the Equal Employment Opportunity Commission (“EEOC”), and the Federal Trade Commission (“FTC”) (together, “the Agencies”) released a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. In the Joint Statement, the Agencies focus on the risks of (1) bias or discrimination and (2) deceptive or fraudulent practices, and commit to “vigorously enforce their collective authorities and to monitor the development and use of automated systems.”

The Joint Statement contends that automated systems have a high potential for discrimination. The issues identified in the Joint Statement include concerns:

  • that data sets used to train AI systems may consist of “unrepresentative or imbalanced datasets,” or may include “historical bias” or other systemic errors, which could produce correlations in the data that lead to discriminatory results;
  • about “model opacity and access,” meaning that the internal processes of automated systems may not be transparent, and that this opacity can make them hard to review for fairness; and
  • that users may deploy AI models in discriminatory ways.

With respect to fraud and deceptive practices, the FTC and other regulators have previously expressed concern about companies making false claims about their AI technology. The Agencies also are concerned about bad actors using AI tools (such as deepfakes or chatbots) to carry out scams. According to the Joint Statement: “[w]e already see how AI tools can turbocharge fraud.” Although these are evolving challenges, the Agencies highlighted that existing legal provisions already apply to these areas of concern and that individual agencies, in their respective areas of responsibility, are already taking action against violations of the law involving automated systems.

Takeaway: The Joint Statement itself has no direct legal effect, but it signals federal regulators’ intent to monitor and investigate the use of AI, and the potential for enforcement actions against companies they believe are using AI in deceptive, biased, or discriminatory ways. Although regulation of automated systems is still in its early stages, companies may want to carefully consider whether their use of AI is attuned to the risk areas identified in the Joint Statement and whether any marketing of that use is truthful and accurate.

Kochava Sidesteps (For Now) The FTC’s Allegation That Its Sale of Geolocation Data Violated Section 5 of the FTC Act

Idaho’s federal district court dismissed a claim brought by the Federal Trade Commission (“FTC”) against data broker Kochava, Inc. (“Kochava”). In the wake of Dobbs v. Jackson Women’s Health Organization, the FTC alleged that Kochava had engaged in an “unfair” practice within the meaning of Section 5 of the FTC Act. Geolocation data sold by Kochava, the FTC alleged, could allow third parties to track the movements of individual cell phone users to and from sensitive locations, such as places of worship or doctors’ offices. As a result of these sales, the FTC alleged that consumers were subjected to “stigma, discrimination, physical violence, and emotional distress.”

Kochava moved to dismiss the FTC’s claim, arguing, among other theories, that the FTC failed to allege sufficient facts to establish that Kochava’s practices either caused or were likely to cause “substantial injury to consumers,” as required to establish that a practice is “unfair” under the FTC Act. Before the FTC’s suit was filed, Kochava had also filed a declaratory judgment action against the FTC, seeking, among other declarations, a ruling that its data collection and sales practices did not violate the law.

The court granted Kochava’s motion to dismiss, holding that the FTC failed to allege the “substantial injury” necessary to sustain the action. It concluded that the FTC’s allegations were legally insufficient in part because they did not show that Kochava’s practices either had caused, or were likely to cause, what the court characterized as the “secondary harms” (e.g., stigma and discrimination) alleged in the complaint. In the court’s view, while the FTC’s theory regarding the potential for these kinds of injuries from Kochava’s sales was plausible, its complaint had done “no more than claim that such secondary harms are theoretically possible,” and was essentially asking the court “to simply infer that consumer injury is probable from its assertion that Kochava is disclosing ‘sensitive information’ about device users.”

The court also rejected a second potential basis for establishing the requisite injury: that the disclosure of users’ “sensitive” data to third parties invaded user privacy. The court acknowledged that invasions of privacy are recognized as harmful under numerous legal provisions, and that a practice that violated these kinds of standards and caused injury to a large enough number of people could support a finding that the FTC Act’s unfairness provision had been violated. However, the court concluded that the Commission’s allegations failed to demonstrate that Kochava’s practices caused the level of “injury” to cell phone users’ privacy interests needed to meet the Act’s requirements.

The court granted the FTC leave to amend its complaint within thirty days, although it expressed skepticism about whether the claim’s deficiencies could be cured by amendment. It also rejected Kochava’s other objections to the FTC’s complaint and, in a separate opinion, dismissed Kochava’s declaratory judgment action against the FTC with prejudice.

Takeaway: The court’s careful opinion in Kochava provides a valuable lesson on the burden the government must meet under the FTC Act to show that a particular business practice is “unfair.” Further, the FTC’s decision to bring this suit, especially when viewed together with other recent policy initiatives, demonstrates the Commission’s continued interest in exploring the boundaries of its authority to challenge data collection practices and uses that it considers potentially harmful to citizens’ privacy interests. With this decision, the court has provided an important “check” on the FTC’s authority. These initiatives should be followed closely.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dechert LLP | Attorney Advertising

Written by:

Dechert LLP