Dechert Cyber Bits - Issue 22

Dechert LLP

New EU-U.S. Data Privacy Framework for Data Transfers

Last week, the Biden administration issued an Executive Order concluding long-awaited negotiations with the EU to replace the EU-U.S. Privacy Shield framework. For practical insights, a discussion of next steps, and what it means for companies, you can access our recent OnPoint by Karen Neuman and Madeleine White, “The EU-U.S. Data Privacy Framework: Hold the Champagne.”

SEC Chair Gensler Indicates Commission is Looking to Update SEC’s Regulation S-P

On September 28, 2022, Securities and Exchange Commission (“SEC” or the “Commission”) Chairman Gary Gensler appeared via video at the Investment Adviser Association’s Policy and Leadership Forum and indicated that the Commission is “looking to update” the SEC’s Regulation S-P (“Reg. S-P”). Chairman Gensler characterized updates to Reg. S-P as a companion initiative to the SEC’s proposed cybersecurity rule.

By way of background, Title V of the Gramm-Leach-Bliley Act of 1999 (“GLBA”) established privacy notice delivery requirements for financial institutions and created obligations regarding the safeguarding of nonpublic personal information. Pursuant to authority granted under the GLBA, the SEC adopted Reg. S-P, which applies to SEC-registered investment advisers, broker-dealers and registered investment companies.

Chairman Gensler did not give a timeframe for the potential proposed updates to Reg. S-P. If changes are adopted, they would be the first major update to Reg. S-P in over 20 years.

Takeaway: The SEC’s proposed new and amended rules regarding cybersecurity risk management, released in February of this year, would be the most significant update to federal privacy law as applied to registered funds and investment advisers since the GLBA. Chairman Gensler’s recent comments indicate that the SEC may not be finished with new rules regarding cybersecurity for the asset management industry. We expect this will continue to be an area of focus for the SEC and other federal regulators.

Industry Groups Seek More Time to Comment on FTC’s Rulemaking Plans for Commercial Surveillance

On September 24, 2022, a coalition of over a dozen business and advertising groups, including the U.S. Chamber of Commerce and the Association of National Advertisers, asked the FTC for more time to weigh in on the Agency's request for comments regarding its proposed rulemaking on commercial surveillance and lax data security practices. In a public comment to the FTC, the industry groups requested a 60-day extension of the comment deadline on the Advance Notice of Proposed Rulemaking for Trade Regulation Rule on Commercial Surveillance and Data Security (“ANPR”), from October 21, 2022, to December 20, 2022.

The coalition’s comments were submitted following the FTC’s release of the ANPR, and a public forum sponsored by the Commission held on September 8, 2022, where issues related to the proposed rulemaking were discussed.

The ANPR invited public comment on the FTC’s intention to use its Section 18 rulemaking powers to address data security practices. The coalition’s letter to the FTC emphasized that the ANPR seeks information, research and experiential data about almost all aspects of the data-driven economy, through a procedure the FTC has rarely used. The letter argues that the additional time sought would permit organizations to provide the kind of detailed comments and information the FTC is requesting, including information on the economic impacts of potential regulation, alternatives available to the FTC and the significant downstream impacts of the proposed regulations on consumers, businesses and the American economy.

Takeaway: The FTC continues to struggle with crafting an effective national framework for addressing long-standing market and data privacy concerns raised by the collection and use – including downstream and secondary uses – of consumers’ personal information. There is widespread support for some form of federal action, with industry favoring comprehensive federal privacy legislation and consumer advocates urging the FTC to step in absent such legislation. There continues to be disagreement on the details, however, and how and when those differences will be resolved remains unclear. Meanwhile, companies are faced with a growing patchwork of state consumer privacy laws with nuanced differences in compliance obligations that can be quicksand for innovation and data-driven business processes. Newly enacted laws will enter into force in 2023, with rulemaking proceedings well under way in California and Colorado.

UK Data Regulator Reprimands Seven Organizations for Information Access Request Failures

The UK Information Commissioner’s Office (“ICO”) announced on September 28, 2022, that it has taken action against seven organizations for failing to respond to information access requests in a timely manner.

Under Article 15 of the UK GDPR, individuals are entitled to receive a copy of their personal data from any controller. These requests are referred to as Subject Access Requests (“SARs”). Controllers are required to respond to a SAR within one month of receipt. This period can be extended by up to two further months where (i) the request is complex or (ii) the individual has submitted multiple requests.

After investigation, the ICO concluded that six public organizations (Ministry of Defence, Home Office, London Borough of Croydon, Kent Police, London Borough of Hackney, and London Borough of Lambeth) and one private company (Virgin Media) failed to respond to SARs within the required time frame. The ICO issued public reprimands to all seven organizations.

The ICO’s action was precipitated by a series of complaints in relation to the relevant organizations. Over a six-month period in 2021, Virgin Media had failed to reply to SARs within the time limit in 14% of cases. The ICO received 125 complaints against Virgin Media in 2021.

Takeaway: These reprimands signal the ICO’s willingness to act to enforce individuals’ right of access, in particular where organizations have failed to properly respond to SARs on multiple occasions. SARs are often made when the relationship between the data subject and the controller is already strained, such as in the context of employment disputes or customer complaints. Already dissatisfied data subjects may need little encouragement to make a formal complaint to the regulator. Companies subject to the UK GDPR (and/or the EU GDPR) should ensure that their internal processes are set up to handle SARs in an efficient and timely manner. Whilst regulators may be forgiving of minor and isolated delays, companies receiving SARs on a regular basis should be particularly alert to the risk.

Former Uber Security Officer Found Guilty of Covering Up Data Breach

On October 5, 2022, a California federal jury found Uber’s former chief security officer, Joseph Sullivan, guilty of criminal obstruction and deliberate concealment of a felony for failing to report a data breach to authorities. Sullivan’s trial is believed to be the first time an executive has faced criminal charges over a cybersecurity breach.

Sullivan’s case involved a data breach of Uber’s systems that occurred in 2016 in which hackers stole 57 million Uber users’ records and 600,000 drivers’ license numbers. Sullivan arranged for the hackers to be paid $100,000 in exchange for promising to delete the stolen data and signing non-disclosure agreements. The payment was disguised as a “bug bounty,” a reward paid to cybersecurity researchers for discovering cybersecurity vulnerabilities, and the non-disclosure agreements falsely stated that the hackers had not stolen data. The hackers were later identified and pled guilty in connection with the data breach.

At the time of the incident, Sullivan did not report the data breach to authorities, even though the Federal Trade Commission (“FTC”) was then conducting an investigation of Uber following a previous breach that occurred in 2014. While Uber’s security group was investigating the hack, Sullivan allegedly told a subordinate that information about the breach needed to be “tightly controlled” and that the story told to others outside of Uber’s security group was that, “this investigation does not exist.” Sullivan did disclose the breach to Uber’s then-CEO, Travis Kalanick, and to the in-house lawyer who was responsible for determining whether the data breach needed to be reported. But Sullivan never notified Uber’s general counsel or the Uber lawyers assigned to work on the FTC’s investigation.

Sullivan was fired in 2017 by Uber CEO Dara Khosrowshahi, who testified at trial that he fired Sullivan for hiding key details about the hack in an “incomplete or misleading” email. Uber eventually reported the breach to the FTC in 2017 after conducting an independent investigation. In 2018, Uber paid $148 million to settle claims brought by 50 U.S. states and the District of Columbia alleging that it had been slow to disclose the data breach.

In 2020, Sullivan was charged with one count of criminal obstruction and one count of concealment of a felony. A jury found Sullivan guilty on both charges. In a statement following the jury’s verdict, U.S. Attorney for the Northern District of California, Stephanie Hinds, said that “Sullivan affirmatively worked to hide the data breach from the Federal Trade Commission and took steps to prevent the hackers from being caught. We will not tolerate concealment of important information from the public by corporate executives more interested in protecting their reputation and that of their employers than in protecting users.”

Takeaway: While the prosecution and conviction of Uber’s CISO focused on intentionally misleading conduct, the criminal prosecution of a CISO is bound to strike fear into the hearts of security executives and add an extra level of pressure to those on the front lines of dealing with data breaches. It is a stark reminder to companies that communications about data breaches both within and outside the company need to be truthful.

White House Issues Blueprint on Artificial Intelligence (“AI”) Bill of Rights

On October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) released a white paper, “Blueprint for an AI Bill of Rights: A Vision for Protecting Our Civil Rights in the Algorithmic Age.” The Blueprint is a set of non-binding guidelines developed after a year-long process that sought input from experts in academia, the public and private sectors and within communities across the U.S. Information was gathered through panel discussions, a formal request for information (“RFI”) and other means.

The core of the Blueprint is a set of five principles, developed by the OSTP and directed toward citizens potentially affected by the use of AI technology:

  • Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  • Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  • Notice and Explanation: You should know when an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

The Blueprint is intended to support the development of policies and practices that will guide the design, development, and deployment of automated systems, to protect the rights of the public. The OSTP noted that harms caused by flawed AI programs could affect citizens’ ability to exercise lawful rights or to obtain equitable access to important opportunities and critical resources, or could punish or burden them unfairly. The “bill of rights” describes protections that governments should apply with respect to automated systems that have the potential to meaningfully impact individuals’ or communities’ exercise of these rights and opportunities.

Takeaway: This AI “bill of rights” brings into sharp focus that lawmakers and policy makers have their eyes set on regulated industries and sectors where AI outcomes can result in disparate impacts on members of historically marginalized communities. Increased regulatory interest in AI poses potential threats to innovation through burdensome regulation (see our article on this development in Issue 14).

First Bilateral Data Agreement Under U.S. CLOUD Act Goes into Effect

On October 3, 2022, the agreement between the U.S. and UK Governments on Access to Electronic Data for the Purpose of Countering Serious Crime (“Data Access Agreement” or “DAA”) entered into legal force. This is the first bilateral data agreement negotiated after the 2018 enactment of the U.S. Clarifying Lawful Overseas Use of Data (CLOUD) Act and it is expected to be a model for the U.S. and other countries.

The DAA was originally signed on October 3, 2019 (see more here) and required three additional years of negotiations to reach final agreement. And even under the ‘final’ agreement, certain rules, including those relating to UK targeting and minimization requirements, will be addressed in future negotiations. This delay reflects the challenges in negotiating bilateral data agreements even for countries with a “special relationship” and similar legal systems. It will be interesting to see whether the U.S.-UK agreement acts as an accelerant for implementation of the 2021 U.S.-Australia agreement and on-going negotiations between the U.S. and the EU and Canada on similar data access agreements.

This news summary cannot fully address the complex DAA rules and procedures. Here, however, are a few important highlights to note:

  • Data access is permitted only for the detection, investigation and prosecution of serious crime including terrorism, organized crime and child exploitation.
  • U.S. requests must not target persons in the UK and UK requests must not target persons in the U.S.
  • Data requests must comply with the domestic law of the Issuing Party and are subject to review by a court, judge, magistrate or other independent authority.
  • Cloud service providers retain the right to raise legal objections to data requests and the DAA does not create a private right of action.
  • Issuing parties may not submit requests for, or transfer the results of a request to, third-party governments or international organizations.

Takeaways: Companies operating in the U.S. and UK that rely on U.S. cloud service providers should take steps to confirm what company data is hosted where to ensure that continued cloud storage remains consistent with business objectives, including cybersecurity. Moreover, the DAA does not mandate any public reporting of data requests, so companies should know their cloud providers' current policies and monitor their public reporting, understanding the data reported is likely to be high-level and anonymized.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dechert LLP | Attorney Advertising
