Dechert Cyber Bits - Key Developments in Privacy & Cybersecurity - Issue 48

Dechert LLP

Articles in this issue

  • FTC Announces Proposed Settlement with Data Aggregator over its Alleged Selling of Precise Location Data
  • EDPB Focuses on Strengthening the DPO Role
  • UK ICO Launches Consultation Series on Application of Data Protection Law to Generative AI
  • FTC Joins Multinational Privacy, Security Enforcement Effort
  • CISA Releases 2023 Year in Review

FTC Announces Proposed Settlement with Data Aggregator over its Alleged Selling of Precise Location Data

The Federal Trade Commission (“FTC”), on January 18, 2024, announced a proposed settlement with InMarket Media (“InMarket”) to resolve allegations that InMarket had violated Section 5 of the FTC Act in connection with its collection and use of consumer location data via software development kits (“SDKs”) and the purchase of location data from other sources. According to the FTC’s complaint (“Complaint”), InMarket collects precise consumer location information to facilitate targeted advertising for its customers. The Complaint alleges that InMarket: (i) failed to provide notice that the consumer’s location data was used to develop targeted advertising; (ii) failed to take reasonable steps to ensure that consumers of third-party apps that used InMarket’s SDK were notified that their information was being collected to facilitate targeted advertising; (iii) saved sensitive information for longer than reasonably necessary; and (iv) represented that consumers’ location information would be used only to gain points or receive list reminders, when in fact it was used for other purposes. InMarket has not admitted any wrongdoing in connection with the settlement.

Under the FTC’s proposed decision and order (“Proposed Order”), InMarket would be prohibited from: (i) materially misrepresenting the extent to which it collects and deidentifies location data; and (ii) selling or licensing location data or any products or services that target consumers based upon location data. InMarket would also need to comply with other requirements, which would include, among other things: (i) implementing a sensitive location data program; (ii) recording a consumer’s affirmative express consent to collect their location data, and providing consumers with a clear reminder every 6 months regarding the collection of their data; (iii) implementing an assessment program that ensures a consumer’s consent has been provided when their location data is gathered; (iv) providing consumers with a simple way to withdraw their express consent or request deletion; (v) notifying each consumer whose location data was collected without express consent about InMarket’s settlement with the FTC; and (vi) deleting all historic location data unless there is express affirmative consent from the consumer.

Takeaway: The FTC’s settlement with InMarket comes on the heels of this month’s enforcement action against X-Mode, which we discussed here. Taken together, these enforcement actions highlight the FTC’s focus on sensitive location data and the need for companies handling such data to ensure they have obtained appropriate consents and provided transparency to consumers. The Proposed Order imposes obligations that go above and beyond any legal requirement (even under the strictest laws), such as the requirement that the company send a reminder “at least every 6 months” to consumers regarding the collection of their data – something that likely is unwieldy and a real competitive disadvantage. Query whether the FTC should be imposing on companies burdens that otherwise have no basis in law.

EDPB Focuses on Strengthening the DPO Role

The European Data Protection Board (“EDPB”) has released a report highlighting concerns about the adequacy of resources and training provided to Data Protection Officers (“DPOs”). The report, which is based on over 17,000 stakeholder responses, is the result of a coordinated enforcement action launched in March 2023 to assess whether DPOs have the necessary position and resources within their organizations to fulfill their tasks as required by Articles 37 to 39 of the EU GDPR.

While the report is positive overall, the EDPB outlines the following key areas of concern: absence of a designated DPO; insufficient resources, including insufficient guidance from data protection authorities; inadequate training and knowledge building; DPOs not being fully or explicitly entrusted to perform the tasks required by the GDPR; conflicts of interest and lack of independence; and the DPO not reporting to the organization’s highest management.

The report also details enforcement actions taken by ten EU member state supervisory authorities to address noncompliance, including orders to appoint a DPO, reprimands, fines, and guidance for cases of non-compliance with the legal requirements for DPOs.

Takeaway: The EDPB’s report and the related enforcement actions by supervisory authorities demonstrate the importance they attach to the role of the DPO, which they see as bridging the gap between EU data protection law and its practical application. The concerns outlined in the report may be further amplified by new tasks relating to AI and data governance reportedly being allocated to DPOs under new legislation, for example, the AI Act and the Digital Services Act. Organizations will want to review the requirements to appoint a DPO and, if required to have one, review the position and resources allocated to the DPO in light of the concerns raised in the report.

UK ICO Launches Consultation Series on Application of Data Protection Law to Generative AI

On January 15, 2024, the UK Information Commissioner’s Office (“ICO”) launched a consultation series on generative AI. The consultation series’ aim is to gather views and responses from stakeholders in the generative AI sector, including developers, users, legal advisors or other consultants, and other public bodies.

The first consultation, open until March 1, 2024, examines the lawfulness of using personal data scraped from the internet to train generative AI models. The ICO provided a summary of its initial analysis of the issue and made an online survey available to stakeholders, split into four sections of interest: the respondent’s view on the ICO’s proposed regulatory approach; impacts on the respondent’s organization; details on the respondent; and final comments. The ICO’s analysis reviews the potential use of legitimate interests as a lawful basis for training generative AI models on scraped personal data, concluding that it may be feasible to rely on this basis in some circumstances but that clear evidence of the balancing of interests will be needed. Given that this is an ‘invisible processing’ activity (that is, people are not aware that their personal data is being processed in this way) and related to AI, the ICO also notes that these are high-risk activities that require a Data Protection Impact Assessment.

Future consultations, which will run through mid-2024, will examine other related areas such as the accuracy of generative AI outputs.

Takeaway: The consultation analysis represents the ICO’s “emerging thinking,” and the ICO makes clear that the analysis should not be interpreted as confirmation that any data processing practice is legally compliant. Nonetheless, businesses with an interest in generative AI may want to take account of this initial analysis in guiding their position regarding the use of scraped personal data in generative AI and should consider taking the opportunity to weigh in on the consultation before the March 1, 2024 deadline.

FTC Joins Multinational Privacy, Security Enforcement Effort

On January 17, 2024, the Federal Trade Commission (“FTC”) announced that it will be participating in the Global Cooperation Arrangement for Privacy Enforcement (“Global CAPE”). This is a nonbinding agreement that will allow the FTC to cooperate with other members of the Global CAPE regarding privacy and data security enforcement matters without needing to create separate memoranda of understanding with each member. The FTC voted 3-0 to authorize agency staff to participate in the Global CAPE.

In its press release announcing its participation in the Global CAPE, the FTC explained that the agreement allows it to keep up with the “increasingly global nature of commerce.” The Global CAPE was created to allow for cooperation with countries outside of the Asia-Pacific region by supplementing the Asia-Pacific Economic Cooperation (“APEC”) Cross-Border Privacy Rules—an existing agreement that only facilitates cooperation among APEC member economies.

Takeaway: The FTC’s participation in Global CAPE signifies its continued interest in partnering with regulatory authorities in different jurisdictions to enforce privacy, cybersecurity, and consumer protection matters. Multinational corporations should take note of this development. As FTC staff and Global CAPE members are permitted to assist and share information with other relevant privacy/security authorities that participate in the program, a regulatory inquiry that originates in one corner of the world regarding questionable data practices could potentially be shared with the FTC, possibly leading to a regulatory inquiry in the United States.

CISA Releases 2023 Year in Review

The Cybersecurity and Infrastructure Security Agency (“CISA”) released a 2023 Year in Review to outline its efforts to protect critical infrastructure across the United States. According to CISA Director Jen Easterly, “[t]his Year in Review report demonstrates CISA’s exceptional work in 2023 to protect critical infrastructure” and “not only celebrates our progress from the past year but also spotlights groundbreaking milestones and pioneering ‘firsts’ achieved by the agency.” CISA claims to have made advances in a number of areas related to cybersecurity including, among other things:

  • The Secure by Design Campaign: Promoting secure software development by urging the development of secure design products and pushing companies to own customer security outcomes, be transparent and accountable, and lead from the top;
  • A Roadmap for Artificial Intelligence (“AI”): Creating an agency-wide plan for assessing cybersecurity risks of, providing guidance for, and utilizing the benefits of AI;
  • The Pre-Ransomware Notification Initiative: Catching ransomware activity early and providing companies with prompt notice;
  • The Secure Our World Program: Emphasizing the importance of strong password usage, multifactor authentication, catching and reporting phishing, and updating software;
  • The Priority Telecommunications Services Program: Adding subscribers to the program which gives essential personnel the ability to communicate when landlines and networks are down;
  • The “Target Rich” Initiative: Increasing engagement between the government and private sectors and the high-risk sectors of water and wastewater, education, healthcare, and elections to promote cybersecurity services;
  • The State and Local Cybersecurity Grant Program: Implementing a new grant program jointly with the Federal Emergency Management Agency to bolster cybersecurity;
  • Election Security Advisors (“ESAs”): Promoting secure elections by having ESAs ensure that CISA’s services are being used optimally; and
  • The ChemLock Program: Providing facilities that possess dangerous chemicals with free tools to improve cyber and physical security.

The Year in Review also recognizes that 2024 brings continued and evolving threats. These include threats to critical infrastructure from advanced persistent threat actors, as well as from severe weather and other natural hazards. CISA anticipates that developments in artificial intelligence will help combat these ongoing threats, while simultaneously exacerbating harms by creating new opportunities for malicious cyber actors.

Takeaway: CISA had a busy 2023, and its fourth annual Year in Review outlines its accomplishments. These accomplishments – while important – are perhaps generously characterized by CISA compared to what we are seeing on the ground. They nonetheless reflect CISA’s continued importance as the government’s lead agency in protecting the security and resilience of the United States’ critical infrastructure. Companies in industries that CISA deems critical infrastructure, such as healthcare, government, financial services, and information technology, should expect further regulation. They also should take note that yet another regulator is interested in understanding and regulating their use of artificial intelligence.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dechert LLP | Attorney Advertising
