The legal risks associated with cybersecurity continue to grow, as regulators and plaintiffs’ lawyers become increasingly aggressive in bringing cybersecurity claims under existing laws and as legislatures continue to enact new ones. A key element of many cybersecurity claims brought under these laws is a requirement to show that the company in question failed to implement “reasonable” security for personal information. California’s new Consumer Privacy Act (“CCPA”), for instance, allows consumers to sue businesses for statutory damages when specified types of personal information are subject to unauthorized access and exfiltration, theft, or disclosure as a result of a failure to implement and maintain “reasonable” security measures, provided the business has not cured the alleged violation within the CCPA’s pre-suit notice period. Cal. Civ. Code § 1798.150. Even though consumers often suffer no actual injury in a data breach, the CCPA provides for statutory damages of $100–$750 per consumer per incident.
But what, exactly, is the legal test for determining whether a company has implemented “reasonable” cybersecurity? Unfortunately, the answer is not clear. And this is a particularly serious problem given that the consequences of being found not to have “reasonable” security in place can be so severe.
In a new paper just released for public comment, Commentary on a Reasonable Security Test, the Sedona Conference—a renowned research and educational institute dedicated to the advanced study of law—seeks to fill this gap by proposing a test for “reasonable” security. The proposed test is of use not only to adjudicators tasked with applying the nebulous “reasonable security” requirement, but also to businesses and other entities seeking to assess whether they satisfy it.
The Commentary explains that its proposed test is designed to be consistent with models for determining “reasonableness” that have been used in various other contexts by courts, in legislative and regulatory oversight, and in information security control frameworks. In that regard, the Commentary notes that all of these regimes have used a form of risk analysis to balance the costs and benefits of a proposed course of action. Consistent with those approaches, the Commentary posits a cost/benefit test for reasonableness, namely, that an entity’s “information security controls for personal information are not reasonable when implementation of one or more additional or different controls would burden the [entity] and others by less than the implementation of such controls would benefit the claimant and others.”
At the same time, the Commentary also acknowledges that courts have often looked to industry custom to inform a reasonableness analysis. It therefore suggests that noncompliance with custom can establish that an entity’s security was not reasonable, although the entity is free to counter the effect of this evidence with, among other possible arguments, a cost/benefit analysis. The Commentary also notes that, in some instances, legislatures and regulatory agencies have already deemed particular security controls to be worth the cost of implementation and have required them by statute, regulation, or ordinance. The Commentary therefore suggests that evidence of noncompliance with such a law requiring implementation of specific security controls will be sufficient to establish a presumption that an entity’s security measures were not reasonable.
To demonstrate the practical utility of the proposed test, the Commentary includes three illustrations in an Appendix, where it shows how the test could be applied to particular hypothetical facts.
Orrick partner Doug Meal, head of our cyber and privacy litigation practice, and David Cohen, Of Counsel in our cyber and privacy litigation practice, were both heavily involved in the preparation of the Commentary. Doug is the Chair of the Steering Committee for the Sedona Conference’s Working Group 11, which produced the Commentary, and he serves as the Steering Committee Liaison to the team that drafted the paper. That drafting team included David, as well as a diverse group of judges, lawyers, and information security professionals.