Dechert Cyber Bits - Issue 2

Dechert LLP

We are delighted to welcome you to the second issue of Dechert Cyber Bits, brought to you by members of our top-ranked, global Privacy & Cybersecurity practice. This issue discusses key developments from around the globe and provides practical takeaways designed to reduce risk for your organization.

Our world-class global team covers all aspects of privacy and cybersecurity, including class action litigation defense, breach response/ransom negotiation, strategic privacy counseling, transactional diligence and the defense of regulatory actions. Our deep bench spans Dechert’s 22 global offices, and our partners are top-ranked thought leaders and pioneers in the space who bring decades of experience. We have litigated some of the earliest landmark privacy matters, handled over a thousand data breach investigations and defended hundreds of regulatory enforcement actions brought by U.S. and global regulators. Our lawyers routinely advise the world’s top companies on cutting-edge and sensitive matters of strategic importance. Many are high profile, but our best work often involves matters that no one ever hears about: the regulatory inquiry that quietly goes away, the creative strategic advice that solves a thorny, cross-border data transfer issue, or the use of innovative technology to leverage data holdings.

We hope you will find Dechert Cyber Bits useful and informative.


FTC Updates Safeguards Rule and Privacy Rule; Proposes Data Breach Reporting Requirement

Noting the sharp rise in data breaches and cyberattacks in recent years, on October 27 the Federal Trade Commission (“FTC”) announced updates to the Gramm-Leach-Bliley Act’s (“GLBA”) Safeguards Rule. The changes include specific criteria for the safeguards that non-banking financial institutions (e.g., mortgage brokers, motor vehicle dealers) must implement, such as limiting access to, and encrypting, consumer financial information. Additionally, in-scope institutions must explain their information-sharing practices to consumers and designate a single individual to oversee their information security programs; that individual must provide periodic updates to the organization’s board of directors or to the officer in charge of information security.

The FTC also stated that it is seeking comment on a further change to the Safeguards Rule that would require in-scope financial institutions to report to the FTC any security event affecting at least 1,000 consumers in which customer information has been misused or is reasonably likely to have been misused. The FTC is issuing a supplemental notice of proposed rulemaking, which will be published in the Federal Register shortly; the public will have 60 days to comment after publication.

Additionally, the FTC updated its Privacy Rule to align the scope of its authority under that rule with the requirements of the Dodd-Frank Act and to incorporate amendments made to the GLBA by the FAST Act, which permits financial institutions that meet certain conditions to forgo delivering an annual privacy notice to consumers.

Takeaway: In-scope financial institutions are advised to review their current security policies and procedures to ensure they align with the updates set forth in the updated Safeguards Rule. Those financial institutions that may be affected by the proposed breach notification rule should consider submitting comments to the FTC.

U.S. Senators Introduce Bill to Protect Against Misuse of AI Data

On October 21, Senator Gary Peters (D-Michigan), Chairman of the Senate Homeland Security and Governmental Affairs Committee, and Ranking Member Rob Portman (R-Ohio) introduced the Government Ownership and Oversight of Data in Artificial Intelligence Act (“GOOD AI Act”), which seeks to secure data that federal contractors collect while using artificial intelligence (“AI”) technologies.

The GOOD AI Act would require the Director of the Office of Management and Budget to establish and consult with an Artificial Intelligence Hygiene Working Group (“Working Group”) made up of AI experts from across the federal government. The Working Group would be tasked with developing and implementing solutions to ensure that AI technologies and the data they collect are secure. According to a summary provided by the Senate Homeland Security and Governmental Affairs Committee, the Working Group would also make clear that the federal government is the “ultimate owner” of the collected information, so as to prevent third parties from appropriating, selling, or misusing it “in a way that compromises the privacy of Americans.”

Takeaway: With the introduction of the GOOD AI Act, the U.S. government joins a growing global focus, reflected in other pending regulatory initiatives around the world, on regulating the use of AI technologies to ensure that such usage is secure and accounts for civil liberties.

U.S. Agencies Provide Guidance on Responding to BlackMatter Ransomware Attacks

On October 18, the Cybersecurity and Infrastructure Security Agency (“CISA”), the Federal Bureau of Investigation (“FBI”), and the National Security Agency (“NSA”) (collectively, the “Agencies”) issued a joint Cybersecurity Advisory (the “Joint Advisory”): (i) reporting that the BlackMatter ransomware group has targeted multiple U.S. critical infrastructure organizations since July 2021, including those in the agricultural sector; and (ii) encouraging impacted organizations to report ransomware incidents to the Agencies immediately. BlackMatter ransomware-as-a-service activity is currently one of the top ransomware threats.

The Joint Advisory provides an overview of BlackMatter’s tactics and identifies mitigation measures to reduce the risk of BlackMatter ransomware. The recommendations include:

  • Implementing detection signatures published by the Agencies;
  • Using strong, unique passwords that are not re-used across accounts;
  • Placing multi-factor authentication on all accounts;
  • Timely installing security patches and updating systems;
  • Removing unnecessary access across the system and using a host-based firewall;
  • Implementing network segmentation and monitoring for movement across segments;
  • Using admin disabling tools to support identity and privileged access management; and
  • Implementing backup and restoration policies and procedures.

The Joint Advisory also identifies additional technical measures that can be taken by critical infrastructure organizations in an effort to prevent the compromise of credentials.

Takeaway: After a slight pause following the Colonial Pipeline attack, ransomware remains a serious threat to critical infrastructure. In 2021, ransomware attacks have increased in both number and ransom amounts, with demands reaching the tens of millions of dollars (in cryptocurrency, of course). The Joint Advisory provides practical and actionable guidance to reduce the risk of a ransomware attack. See Dechert’s article on reducing ransomware risk in the Harvard Business Review.

Hamburg DPA Fines Vattenfall for Transparency Failures but Reduces Fine Given Company’s Cooperation

On September 24, one of Germany’s data protection authorities, the Hamburg Commissioner for Data Protection and Freedom of Information (“Hamburg Commissioner”), announced that it had fined Vattenfall Europe Sales GmbH (“Vattenfall”) just over €900,000 for violating the transparency obligations of Articles 12 and 13 of the General Data Protection Regulation (“GDPR”). Vattenfall is a subsidiary of a Swedish power company that is wholly owned by the Swedish government. The Hamburg Commissioner found that Vattenfall did not adequately disclose to customers that it was processing personal data from past customer invoices to analyze which existing customers tend to change service providers more often, so that Vattenfall could offer those customers special contractual discounts (rather than extending such terms to all existing customers, which would have been less profitable for the company). According to the Hamburg Commissioner, Vattenfall lawfully maintained the invoices themselves to comply with retention periods under tax and commercial law.

The Hamburg Commissioner stated that around 500,000 data subjects had been affected and explained that its proposed fine was significantly reduced due to Vattenfall’s cooperation and prompt cessation of the undisclosed data processing activities. Further, the Hamburg Commissioner and Vattenfall agreed on a procedure to inform customers in a transparent and comprehensive manner about Vattenfall’s internal evaluation of the customers’ past behavior and its purpose for doing so, and to provide customers the choice to opt in to non-discounted contracts without such an evaluation.

Takeaway: Data protection authorities continue to focus on the GDPR’s transparency requirements and will take action against organizations that fail to provide adequate disclosures to data subjects about how their personal data will be processed. Companies should consider whether their privacy notice disclosures accurately reflect their data processing activities. Further, given the Hamburg Commissioner’s decision to reduce its fine in light of Vattenfall’s cooperation and timely corrective action, it is worth bearing in mind that cooperation with data protection authorities can have a direct impact on the outcome of regulatory investigations, particularly with regard to fines.

FTC Report Finds ISPs Collect More PI Than Expected

According to a Federal Trade Commission (“FTC”) staff report (“Report”) made public on October 21, internet service providers (“ISPs”) collect and share more personal information (“PI”) than consumers may expect, while providing limited choice and transparency regarding how this PI is used and shared. The Report notes that (in addition to internet services) many ISPs offer services that include voice, content, smart devices, advertising, and analytics, all of which increase ISPs’ access to consumer PI. Additionally, in its review of ISPs, the FTC found that several ISPs combine consumer PI that they collect across their product and service offerings to target ads, place consumers into sensitive categories (e.g., by race and sexual orientation), and share real-time geolocation data with third parties.

The Report also notes that, despite privacy disclosures promising not to sell PI, many ISPs failed to disclose that PI would nonetheless be used, transferred, and monetized by unaffiliated third parties. Although many ISPs claim to offer consumers choices about how their PI is used, the FTC found that ISPs often make it difficult for consumers to exercise those choices and may even entice consumers to share more PI on top of that already collected.

Takeaway: Given the current “pro-privacy” makeup of the FTC, the risk of noncompliance with Section 5 of the FTC Act through false, deceptive, or misleading privacy policy disclosures regarding the collection, use, sharing, and retention of PI is heightened. Companies should take note of statements made by the FTC and consider reviewing their data practices to align them with the Commission’s evolving expectations and concerns. Particular attention should be paid to disclosures that promise not to sell PI, and to whether a company’s data sharing practices entail sharing with third parties for purposes unrelated to those for which a consumer provided the PI in the first place. There is a risk that such practices could be deemed unfair or deceptive.

White House Science Advisors Call for an “AI Bill of Rights”

Advisors to President Biden in the White House Office of Science and Technology Policy (“OSTP”) announced that the OSTP is developing a “bill of rights” to govern artificial intelligence (“AI”). OSTP Director Eric Lander and Deputy Director Alondra Nelson discussed the project in a Wired magazine op-ed.

In their Wired article, Lander and Nelson outlined concerns that have been raised about the reliability of some AI programs and about the potential for flawed AI applications to lead to inaccurate or biased decision making. More specifically, among other potential problems, they referenced concerns about training data sets that are not representative of historically marginalized populations, about how such failures might result in bias and disproportionate impacts on marginalized groups, and about data privacy and transparency regarding how particular AI systems and models function. They argued that “[p]owerful technologies should be required to respect our democratic values,” and that “[o]ur country should clarify the rights and freedoms we expect data-driven technologies to respect.” In particular, the article suggested that the following rights should be considered:

  • rights related to transparency regarding AI’s impact on civil liberties;
  • freedom from being subject to AI that is not accurate;
  • freedom from AI trained on data sets that are not sufficiently representative;
  • freedom from “pervasive or discriminatory surveillance and monitoring” whether at home or in the community; and
  • the right to “meaningful recourse” for harms caused by AI.

As part of its development process, OSTP has issued a Request for Information (“RFI”) seeking a wide range of input on public and private uses of biometric technologies. It seeks to understand where such technologies are being used (and by whom), the current principles and policies that apply to their use, and the categories of stakeholders that may be affected by their use or regulation.

The RFI notes that there has been significant attention paid to law enforcement’s use of facial recognition technology, but that OSTP is seeking information more broadly on other AI-driven biometric technologies that involve “identification or inference of emotion, disposition, character, or intent.” The RFI states that these include: (i) facial recognition for access to resources (e.g., housing, schools, workplaces); and (ii) facial or voice analysis in employment, education, and advertising.

The deadline to submit comments in response to the RFI is January 15, 2022.

Takeaway: Regulation of AI continues to be a hot topic across sectors of the U.S. government. The Biden administration’s engagement with the issue and the OSTP’s indication that it will be working with experts in academia, public and private sectors, and within communities across the U.S. show that whether or not legislative initiatives in the U.S. Congress move forward, stakeholders can expect some future regulation of AI at the federal level. Organizations that develop, train, and use AI are well-advised to proactively engage with OSTP and submit comments to the RFI to educate regulators on their use of the relevant biometric AI-enabled technologies and, in particular, how they would be impacted by future regulation.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dechert LLP | Attorney Advertising

Written by:

Dechert LLP
