Last week’s Federal Trade Commission (FTC) Open Commission Meeting (Open Meeting) featured a number of agency developments that will drive its approach to privacy, data security, and AI/algorithmic decision-making during Chair Lina Khan’s tenure. These include an enhanced rulemaking petition process and guidance on health breach notification for health apps and connected devices, on top of a streamlined process for data and algorithmic bias investigations announced the day before the Open Meeting. These developments follow the adoption in July of streamlined rulemaking processes that can be used by the agency for implementing privacy or other data-related rules.
All these changes lay the foundation for much greater activity on both the enforcement and rulemaking fronts when it comes to privacy, security, algorithms, and data governance.
At the FTC’s September 15 Open Meeting, the agency approved several changes that will directly impact privacy, data security, and AI. Additionally, the FTC withdrew the 2020 Vertical Merger Guidelines, which will have a significant impact on how the agency reviews market competition and mergers.
- The FTC approved a health app policy statement, stating that it will apply its existing Health Breach Notification Rule to health apps in the marketplace that are not covered by the Health Insurance Portability and Accountability Act (HIPAA). The Rule requires notification to consumers and the FTC upon unauthorized disclosures of covered health data – which the FTC will now interpret to include both data breaches and discovery of certain privacy violations. We summarize the Rule and its potential impact in more detail here. The vote was 3-2.
- The FTC established a process for considering external rulemaking petitions by a 4-1 vote. Under the new process, formal petitions for rulemaking will be published in the Federal Register and opened for a 30-day public comment period, after which the FTC will publicly announce whether it will go forward with a rulemaking. While external petitions can inform the FTC’s work, there previously was no formal process for resolving them and no way for other stakeholders to engage effectively. The procedural change could be significant – it will likely incentivize greater use of rulemaking petitions, since an official response is now guaranteed. In the past, external groups have pushed for rulemaking in areas like AI, and these kinds of petitions will now receive much more attention. Industry stakeholders should expect increased regulatory activity in the areas of privacy, security, algorithms, and data governance more generally.
These developments come just days after the FTC announced the adoption of streamlined investigation and enforcement procedures under new investigation resolutions in certain areas. These include:
- Algorithmic and Biometric Bias: One new resolution allows staff to investigate allegations of bias in algorithms and biometrics. As we have previously discussed, algorithmic bias has been a key area of focus for the FTC, and the agency has previewed enforcement activity in this area.
- Children under 18: Another resolution allows the staff to address harmful conduct directed at children under 18. This resolution is notable because the primary children’s privacy law – the Children’s Online Privacy Protection Act (COPPA) – applies only to children under the age of 13. Some lawmakers have pushed for the expansion of COPPA to older minors, and it remains to be seen whether the FTC will investigate privacy issues involving older minors as well.
- Deceptive and Manipulative Conduct on the Internet: This resolution expands the scope of previous resolutions to include the “manipulation of user interfaces,” including but not limited to “dark patterns,” which were recently the subject of an FTC workshop. Although critics have complained that these “dark patterns” are used to obtain consumer data without consent, the FTC has not defined which user interface designs are sufficiently “manipulative,” and it is not clear what would fall within the scope of deceptive conduct under the FTC Act.
All of these developments follow the FTC’s vote to streamline its rulemaking processes in July. As we previously explained, in July the FTC approved procedural changes to streamline its FTC Act Section 18 rulemaking process related to unfair or deceptive practices – often known as “Magnuson-Moss” rulemaking. The changes mean that this rulemaking process will proceed more quickly, though it still will be more cumbersome than traditional Administrative Procedure Act (APA) rulemaking. The passage and the debate surrounding it are further signals that the current Commission intends to be aggressive on rulemaking, including potentially on privacy and algorithms.
These changes all may appear technical – for now. But they are all aimed at establishing FTC authority to launch sweeping changes in data privacy, data security, and the use of algorithms. As the dissenting Commissioners have previewed, there will be serious questions about the FTC’s authority along the way.