Developments in Cybersecurity: Privacy Laws, Hacking Beyond Customer Data, and Communicating with Corporate Boards

Carlton Fields

I. Legal Exposure to Federal and State Privacy Laws 

A. Federal Statutes and Enforcement

1.  Federal Trade Commission Act, 15 U.S.C. §§ 41-58

The Federal Trade Commission (FTC) has emerged as the leading federal regulator for privacy and data security.  The FTC enforces a number of statutes that relate to, or provide the basis for, enforcement proceedings related to privacy and data security.  Those statutes include the Federal Trade Commission Act, 15 U.S.C. §§ 41-58.

Section 5 of the Federal Trade Commission Act grants the FTC authority to prevent unfair or deceptive acts or practices in commerce.  15 U.S.C. § 45.  The FTC is empowered to bring enforcement actions under Section 5 and obtain various forms of relief, including equitable remedies.  Id.  The FTC has commenced nearly 100 such actions against entities for failing to protect their consumers’ privacy and personal information, breaching the entities’ privacy policies or other representations about data privacy, and similar violations.  The FTC’s website (www.ftc.gov) includes a number of resources addressing the agency’s efforts in this area.  

2. Fair Credit Reporting Act, 15 U.S.C. § 1681 et seq.

The Fair Credit Reporting Act (FCRA) requires fair and accurate credit collection and reporting, and it aims to protect consumer privacy.  15 U.S.C. § 1681.  The FCRA, which applies to “consumer reporting agencies” and furnishers of information to such agencies (e.g., credit card issuers, car dealers, etc.), requires “consumer reports” to be accurate and limits their dissemination to certain permissible circumstances, such as where the consumer directs the disclosure or where a third party intends to use the report in connection with a credit transaction with the consumer.  Id. §§ 1681b, 1681e, 1681s-2.   

The FCRA includes a private right of action and provides various remedies for violations, including damages, costs, and attorneys’ fees for a successful action.  Id. §§ 1681n-1681p.  The Act also empowers the FTC and other federal agencies, including the Consumer Financial Protection Bureau (CFPB), to bring enforcement actions against violators.  Id. § 1681s.  States are empowered to maintain enforcement actions under the Act as well.  Id.

3. Gramm-Leach-Bliley Act, 15 U.S.C. §§ 6801-6809

The Financial Services Modernization Act, more commonly referred to as the Gramm-Leach-Bliley Act (GLBA), was enacted in 1999 and generally imposes obligations on financial institutions to safeguard the nonpublic personal information of their customers and consumers.  15 U.S.C. §§ 6801-6809.  GLBA applies to “financial institutions,” a broad term that encompasses companies providing loans, financial or investment advice, insurance, and other financial products or services.  Id. § 6809(3); 12 U.S.C. § 1843(k).  The distinction between “customers” and “consumers” turns on the extent of the relationship with the financial institution, and GLBA imposes different requirements depending on which constituency is at issue.  Id. § 6809(9), (11). 

GLBA and its implementing regulations generally require the protection of “nonpublic personal information” and govern the uses of that information by financial institutions (particularly their ability to share the information with third parties).  The former set of regulations are referred to as the Safeguards Rules, while the latter are referred to as the Privacy Rules.  The Safeguards Rules establish standards for financial institutions to ensure the security and confidentiality of customer records and information.  Id. § 6801(b).  The Privacy Rules address, among other things, the frequency and manner in which a financial institution must provide its customers and consumers with that institution’s privacy policy and how the institution’s customers and consumers may prohibit the sharing of their information with third parties.  Id. § 6803.   

The CFPB bears primary responsibility for promulgating rules and enforcing GLBA, although other federal agencies, such as the FTC and the Securities and Exchange Commission (SEC), possess authority under the GLBA for entities subject to those agencies’ oversight.  Id. § 6805.   

For example, the SEC has issued regulations pursuant to GLBA for registered investment advisers, brokers, dealers, and investment companies.  Those regulations include obligations to establish written policies and procedures reasonably designed to protect customer data.  17 C.F.R. § 248.30(a).  In 2015, the SEC used that regulation to bring an enforcement action against an investment adviser for failing to implement any such policies and procedures before suffering a data breach, which resulted in the exposure of personally identifiable information for the firm’s clients and others.  As part of the enforcement action, the investment adviser agreed to a $75,000 penalty. 

Meanwhile, GLBA permits states to implement similar statutes that are more protective than the Act.  Id. § 6807.  In that way, the GLBA sets a floor, with some states, such as California, availing themselves of the opportunity to pass more stringent protections.  See Cal. Fin. Code §§ 4050-4060.  

4.  Communications Act of 1934, 47 U.S.C. §§ 151 et seq.

The Federal Communications Commission (FCC) enforces privacy and data security provisions contained within the Communications Act of 1934.  47 U.S.C. §§ 151 et seq.  Those provisions require telecommunications carriers to protect their customers’ personal information, also referred to under the Act as “customer proprietary network information” (CPNI).  Id. § 222(a), (c).  CPNI includes information related to a customer’s use of a telecommunications service and information contained in the bills for that service.  Id. § 222(h)(1).  In general, telecommunications carriers may only use or disclose CPNI as necessary to render their services or in response to a written request by the customer.  Id. § 222(c); see also 47 C.F.R. § 64.2005.  The FCC’s regulations require carriers to protect against attempts to gain unauthorized access to CPNI, and those regulations impose mandatory law enforcement notification and response requirements on carriers.  47 C.F.R. §§ 64.2010-.2011.  

The FCC has instituted enforcement actions against carriers that fail to abide by these provisions.  In one case, the FCC obtained a $25 million civil penalty from a carrier that failed to protect the confidentiality of several hundred thousand customers’ personal information, which was compromised in data breaches at call centers outside the United States.

5. HIPAA, Pub. L. No. 104-191, 110 Stat. 1936 (1996), and HITECH Act, incorporated into 45 C.F.R. Parts 160 & 164 

The Health Insurance Portability and Accountability Act (HIPAA), Pub. L. No. 104-191, 110 Stat. 1936 (1996), and the Privacy Rule promulgated pursuant to it, 45 C.F.R. Part 160 and Part 164, Subparts A and E, apply to “covered entities,” which include health care providers, pharmacies, health insurers, HMOs, and health care clearinghouses.  45 C.F.R. § 164.500 et seq.  The Privacy Rule prohibits covered entities from disclosing, without authorization, protected health information (PHI) from which identifying features have not been removed.  The Rule also requires covered entities, in contracting with “business associates,” such as plan administrators, transcription companies, and other service providers, to ensure that PHI provided to those associates is safeguarded (business associates are also subject to regulation in their own right).  Covered entities must also develop written policies and procedures, designate a “privacy official” responsible for them, and establish reasonable and appropriate safeguards to prevent improper uses and disclosures of PHI.  Id. § 164.530. 

The Security Rule, id. § 164.302 et seq., also promulgated pursuant to HIPAA, imposes additional obligations on covered entities, including drafting written policies and procedures for the protection of electronic PHI, training employees, and conducting risk analyses (with implementation of appropriate measures to address the risks identified therein).     

The Health Information Technology for Economic and Clinical Health (HITECH) Act, incorporated into 45 C.F.R. Parts 160 & 164, became law in 2009 and expanded the scope of HIPAA’s privacy and security provisions, while also increasing liability for non-compliance and strengthening enforcement.  For example, the HITECH Act contains a breach notification requirement, whereby covered entities must report breaches affecting 500 or more individuals to those individuals, the Department of Health and Human Services (HHS), and media outlets.  Id. §§ 164.404-164.408.  Breaches involving fewer than 500 individuals must be reported to HHS on an annual basis.  Id. § 164.408. 

Although there is no private right of action under HIPAA or the HITECH Act, HHS’ Office for Civil Rights (OCR) routinely brings enforcement actions for violations of these statutes and their implementing regulations by covered entities and business associates (state attorneys general may also bring actions).  There are tiered penalties assessed per violation, with a maximum of $1.5 million.  Id. § 164.400 et seq.; 42 U.S.C. § 17931 et seq.; 42 U.S.C. § 1320d-5.  Recent investigations into data breaches and other data loss events have resulted in payments into the millions of dollars, including one that involved the theft of a single unencrypted laptop containing electronic PHI. 

6.  Cybersecurity Information Sharing Act of 2015

Although it is not a federal privacy law per se, the Cybersecurity Information Sharing Act of 2015 (CISA), enacted as part of the Consolidated Appropriations Act, Pub. L. No. 114-113, 129 Stat. 2242 (2016), bears mention.  CISA established a mechanism for sharing information related to cyber threats between the private sector and the federal government.  Specifically, CISA called for the Department of Homeland Security (DHS) to create a means by which the private sector could share information about cyber threats and the defensive measures implemented in response to those threats.  On February 16, 2016, DHS issued guidance as to how companies can report that information electronically to the federal government through a dedicated portal and receive the full protections afforded by the Act. 

Those protections include a provision that no cause of action can be brought against private entities that conduct activities authorized by and in accordance with the Act.  One requirement under CISA is that companies remove any known personally identifiable information before they share the cyber threat information with the government.  CISA ensures that applicable legal privileges and protections, such as trade secret protection, will not be waived by sharing information with the federal government.  Further, the Act contains provisions exempting (i) shared information from disclosure under the Freedom of Information Act, and (ii) participating private entities from antitrust laws.  CISA states that cyber threat information shared with the federal government will not be used to regulate lawful activities.  CISA also makes clear that participation by private entities is voluntary. 
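To illustrate CISA’s requirement that known personally identifiable information be removed before sharing, consider the following Python sketch. The field names and record structure here are purely hypothetical, chosen for illustration; the actual DHS submission format is not modeled.

```python
# Hypothetical sketch: stripping known personally identifiable fields from a
# cyber threat indicator before sharing it with the government, as CISA
# requires. Field names are illustrative, not drawn from any DHS schema.
PII_FIELDS = {"name", "email", "ssn", "phone", "home_address"}

def scrub_indicator(indicator: dict) -> dict:
    """Return a copy of the indicator with known PII fields removed."""
    return {k: v for k, v in indicator.items() if k not in PII_FIELDS}

raw = {
    "malware_hash": "d41d8cd98f00b204e9800998ecf8427e",
    "c2_domain": "evil.example.com",
    "email": "victim@example.com",   # PII: must be removed before sharing
}
shared = scrub_indicator(raw)
# 'shared' now contains only the threat data, not the victim's email
```

In practice, of course, PII can hide in free-text fields as well, so a simple field-name filter like this would be only a starting point.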

7. EU-U.S. Safe Harbor and Privacy Shield

The transfer of personal data from foreign jurisdictions, particularly the European Union (EU), to the United States poses significant challenges for companies.  Those challenges include potential liability for noncompliance with foreign privacy laws, which may be more stringent than similar domestic statutes.  In the EU, data can only be transferred to countries with adequate protections for personal data.  Historically, the EU did not consider the United States sufficiently protective of personal data, thereby complicating efforts to transfer such data from the EU to the United States.  From 2000 to 2015, the Safe Harbor Agreement between the EU and United States addressed that problem by permitting companies to transfer personal data from the EU to the United States so long as companies agreed to satisfy a set of privacy principles.  Those principles were enforced by the FTC. 

In October 2015, the European Court of Justice invalidated the Safe Harbor Agreement over concerns about the extent of government surveillance in the United States and what that surveillance meant for the privacy of personal data.  Case C-362/14, Schrems v. Data Protection Comm’r, ECLI:EU:C:2015:650.  Without the Safe Harbor Agreement, the EU and United States returned to the status quo ante, which meant that although data transfers were not automatically enjoined, there was great uncertainty about how to comply with myriad European privacy laws.  In February 2016, however, the EU and United States agreed on a framework to replace the Safe Harbor Agreement, to be known as the Privacy Shield.  It imposes various requirements on American companies to protect the personal data of Europeans.  The Privacy Shield, which remains subject to final approval, also calls for stronger monitoring and enforcement by the Commerce Department and FTC, with a mechanism for EU citizens to lodge complaints and to pursue alternative dispute resolution.  Complaints must be resolved within forty-five days.

B. State Statutes, Enforcement, and Litigation

1. State Privacy and Data Security Statutes

Many states have enacted privacy and data security statutes that supplement the statutes outlined above and are in addition to states’ more general consumer protection statutes.  Some of those statutes pertain to the privacy of financial information.  These state analogs are generally more protective of consumers than GLBA. 

Meanwhile, there are state privacy statutes governing medical records and other health information, which supplement HIPAA and the HITECH Act.  For instance, the California Confidentiality of Medical Information Act (CMIA), Cal. Civil Code § 56 et seq., imposes a number of requirements regarding the disclosure of medical information by health care providers in that state.  Notably, CMIA includes a criminal enforcement provision, a civil enforcement provision, and a private right of action.  

Like California, Texas has a medical records privacy statute.  Tex. Health & Safety Code § 181.001 et seq.  Although the Texas statute provides for civil enforcement – with the possibility for significant fines – it does not include a private right of action.

 2. State Breach Notification Statutes

In addition to privacy statutes and other laws imposing an affirmative obligation to protect personal data, there is a patchwork of notification statutes that could apply in the event of a breach.  Presently, there are breach notification statutes in forty-seven states, as well as in the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands (the three states without such a law are Alabama, New Mexico, and South Dakota).  The residency of the affected individuals is frequently the trigger for whether one or more of these statutes applies, making it important to know generally where one’s consumers and employees are located.  While many of these statutes are similar, there are certain important differences in terms of (i) what is protected information, (ii) the form of protected information, (iii) notification requirements (deadlines, whom to notify, and so forth), and (iv) whether there is a private right of action.  These differences are noted briefly below, but, as a practical matter, in the wake of a breach, a company will comply with the most stringent notification statute triggered by the event.

a. Protected Information

In general, state breach notification statutes apply to a person’s name plus one other identifying feature, such as that person’s Social Security number, driver’s license number, or financial account number with access information.  For example, Florida’s notification statute applies to medical and health information, financial information, military information, insurance information, and online account information.  Notably, Florida’s notification requirements could be triggered where data is compromised that does not include an individual’s name.  So long as the compromised information would enable identity theft, the notification statute applies.  

Florida is not alone in including medical and health information within this definition, as a number of other state breach notification statutes contain similar definitions of protected information.  Also, Florida and at least two other states include, or will include, email addresses with password information among their definition of protected information. 

Other states’ notification statutes include additional items within their definitions of protected information, such as biometric data.  North Dakota, for example, goes so far as to include a person’s digital or electronic signature, employment ID number, birthdate, and parent’s surname before marriage. 
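As a rough model of the common “name plus one identifier” trigger, consider the following Python sketch. The field names are illustrative, and, as the Florida example above shows, real statutes can deviate from this pattern, so this is a simplification rather than a substitute for checking each state’s definition.

```python
# Illustrative model of the generic "name plus one identifier" trigger.
# Field names are hypothetical; actual statutory definitions vary by state.
IDENTIFIERS = {"ssn", "drivers_license", "account_number_with_access_code"}

def notification_triggered(compromised_fields: set) -> bool:
    """True if a name plus at least one statutory identifier was compromised."""
    return "name" in compromised_fields and bool(compromised_fields & IDENTIFIERS)

notification_triggered({"name", "ssn"})        # True
notification_triggered({"name", "zip_code"})   # False: no statutory identifier
```

A real compliance check would need one such definition per applicable state, including the broader triggers (biometric data, email-plus-password, and so on) noted above.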

b. Form of Protected Information

In general, state breach notification statutes apply to compromised personally identifying information in electronic format (if not encrypted), although some laws also apply where that data resides in any form, including hard copy paper records.

c. Notification Requirements

If triggered, the various state statutes impose a range of requirements in terms of who must be notified and the deadlines for doing so.  For notifying the affected individuals, the statutes often direct action without specifying a number of days.  States in this category include Alaska and Oregon, which both require notification “in the most expeditious time possible and without unreasonable delay.”  Others specify a number of days.  Florida allows up to thirty days after discovery of the breach, with an additional fifteen days available for “good cause.”  Ohio and Wisconsin permit forty-five days.   

In addition to notifying the affected individuals, other constituencies must be informed under certain state statutes.  For example, many states require notifying credit agencies under various conditions (e.g., a breach involving more than 1,000 affected persons).  Other state entities, such as the state attorney general and/or the state police, must be notified under various state laws. 

Often, a state’s statute will permit a delay in notification if law enforcement is consulted and deems such a delay to be warranted because of an ongoing investigation. 
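For illustration only, the deadline arithmetic described above can be sketched in Python. The day counts for Florida, Ohio, and Wisconsin come from this discussion; treating “most expeditious time possible” states as having no fixed date, and measuring every period from the date of discovery, are simplifying assumptions rather than statements of law.

```python
from datetime import date, timedelta

# Hypothetical sketch of breach notification deadline math. Day counts for
# FL, OH, and WI are taken from the discussion above; everything else is a
# modeling assumption, not legal advice.
FIXED_DEADLINES = {"FL": 30, "OH": 45, "WI": 45}

def notice_deadline(state: str, discovered: date, fl_good_cause: bool = False):
    """Return the latest notification date, or None where the statute only
    requires notice 'without unreasonable delay' (e.g., AK, OR)."""
    days = FIXED_DEADLINES.get(state)
    if days is None:
        return None
    if state == "FL" and fl_good_cause:
        days += 15  # Florida allows fifteen extra days for good cause
    return discovered + timedelta(days=days)

notice_deadline("FL", date(2016, 3, 1))        # date(2016, 3, 31)
notice_deadline("FL", date(2016, 3, 1), True)  # date(2016, 4, 15)
notice_deadline("AK", date(2016, 3, 1))        # None: no fixed statutory date
```

As the text notes, a company facing a multistate breach will in practice work backward from the most stringent deadline triggered by the event.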

d.  Private Right of Action

Some state notification statutes expressly create a private right of action, while others explicitly bar such an action.  In the middle of that spectrum are the remaining notification statutes, which are silent on the issue.  In those states, the attorneys general typically will enforce the notification statutes, which generally provide for fines or other penalties.   

3. Enforcement Actions and Other Litigation

State attorneys general have become important figures in enforcing the statutes described above.  In the wake of a data breach or similar event, it is not uncommon for the attorneys general in the affected states to launch investigations, possibly culminating in an enforcement action or actions against the company. 

Supplementing those investigations and proceedings are private actions filed by affected individuals.  Those private actions often involve an array of claims, both provided for by statute and under common law.  Typical claims in the former category include claims premised on violations of state privacy and data security statutes, as well as consumer protection and unfair competition statutes.  The claims also may include violations of any applicable notification statutes.  The common law claims frequently include negligence, breach of contract, breach of implied contract, breach of fiduciary duty, and unjust enrichment.  The theories underpinning these various claims are generally that the company (i) failed to adequately protect the personal data that was compromised, and (ii) failed to respond to the breach or other event in an appropriate and timely manner.

Thus far, litigation following a data breach has frequently focused on the issue of standing, especially on whether the plaintiffs can show an injury-in-fact.  This requirement can be challenging for plaintiffs who cannot show any harm caused by a breach (such as identity theft or fraudulent charges on their credit card) but who instead allege a generalized fear of future harm.  Where a risk of future harm is alleged, the Supreme Court has said (in a different context but nonetheless relevant to data breach litigation) that there is no standing unless the plaintiffs can show that the feared future injury was “certainly impending.”  Clapper v. Amnesty Int’l USA, 133 S. Ct. 1138, 1143, 1147-50 (2013).  The Supreme Court also said in Clapper that plaintiffs cannot manufacture standing by incurring costs in anticipation of that feared future injury.  Id. at 1151.  In the data breach context, such costs might be purchasing credit monitoring or taking other prophylactic measures. 

Following Clapper, many federal courts have dismissed, on standing grounds, data breach litigation alleging a fear of future injury.  See, e.g., Whalen v. Michael Stores Inc., 14-CV-7006 (JS)(ARL), 2015 WL 9462108 (E.D.N.Y. Dec. 28, 2015); In re Zappos.com, Inc., 108 F. Supp. 3d 949 (D. Nev. 2015); Peters v. St. Joseph Servs. Corp., 74 F. Supp. 3d 847 (S.D. Tex. 2015).  Other courts, however, have permitted such litigation to proceed where the claimed fear of future injury is reasonable.  See, e.g., Remijas v. Neiman Marcus Grp., LLC, 794 F.3d 688 (7th Cir. 2015); In re Adobe Sys., Inc. Privacy Litig., 66 F. Supp. 3d 1197 (N.D. Cal. 2014). 

Aside from consumer or employee litigation in the wake of a data breach, companies may also face suits from financial institutions that issued credit or debit cards that needed to be replaced as a result of the breach. 

II. Risks of Hacking Beyond Privacy: Vulnerability of Products and Business Models

Under the legal and regulatory framework discussed above, legislatures and regulators have focused on the loss of customer or employee data.  This framework has developed in response to voter and citizen concern about identity theft, and addresses, in large part, massive breaches of personally identifiable information.  These threats have been well known since at least 2007, when TJX Companies Inc. suffered a breach of its point-of-sale payment systems, exposing 45 million credit card numbers. 

Less well known are the cyber risks that do not target customer data.  These risks are typically malicious and criminal in origin, and aim to steal the company’s proprietary data, shut down its communications systems or website, or otherwise damage the company’s reputation.  Related concerns are the use of a company’s systems to commit crimes, and the potential effect on a company’s reputation from cooperating with the government in a criminal case.  A brief overview of these risks follows, and this section concludes with the risks a company faces when its products incorporate connected technology and things go wrong, causing customers damage. 

A. Compromised Trade Secrets and Other Intellectual Property

One of the principal risks to a corporation’s bottom line, apart from theft or loss of customer or employee data, is the threat to corporate trade secrets and other proprietary information.  While consumer data losses tend to garner headlines, this risk may be even greater.  It is difficult to measure because, in general terms, no law requires a company to disclose the theft of intellectual property.  But there is no doubt that it is staggering in the aggregate.  General Keith B. Alexander, then-director of the National Security Agency, famously stated – all the way back in 2012 – that thefts of proprietary data from public and private networks represent “the greatest transfer of wealth in history.”  In that same speech, he estimated the annual loss to the U.S. economy at $250 billion. 

Trade secrets can take many forms, from proprietary manufacturing processes, to in-pipeline code, to customer lists.  A strong pre-breach defense is important, because there are few tools available to a company once such a breach occurs.  There is no federal trade secret statute that would allow a private company to sue for the theft of its data.  Instead, private parties who have had their data stolen rely on a patchwork of state civil law, usually based on the Uniform Trade Secrets Act.  The federal criminal justice system has available the Economic Espionage Act (18 U.S.C. §§ 1831-1839) and the Computer Fraud and Abuse Act (18 U.S.C. § 1030), but that system is often reserved for the most egregious cases, and the government can otherwise be slow to mobilize to help a large corporation in what often appears to be a purely civil dispute between companies.   

Even if there were a private right of action, the theft often occurs remotely, and the criminals are located abroad and sponsored by foreign nations.  The United States government has, over the past year, made serious efforts to combat the growing threat of nation-state hackers seeking to access, steal, and exploit U.S. trade secrets.  On April 1, 2015, the President issued an Executive Order, “Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities,” to help address this threat by arming the government with the authority to impose personal sanctions against individuals working with enemy governments to hack U.S. businesses.  The Order declared a national emergency with respect to the “unusual and extraordinary threat to the national security, foreign policy, and economy of the United States” posed by “the increasing prevalence and severity of malicious cyber-enabled activities originating from, or directed by persons located, in whole or in substantial part, outside the United States.”  A few weeks later, on April 27, 2015, DHS announced that it would open an office in Silicon Valley, in recognition of the significant challenges to technology companies from foreign actors.   

Apart from foreign actors, the risk of hacking from domestic or foreign competitors is also significant.  This has always occurred at multinational companies with the resources to undertake competitive intelligence programs, but the prevalence of technology and the ease of use of some hacking tools have made even small businesses vulnerable.  In December 2015, a New Hampshire linen company – General Linen Services, LLC – pleaded guilty to hacking its local rival’s computer systems to steal “1,100 of their competitor’s invoices for use in sales efforts directed at the competitor’s customers.” 

Many of these foreign actors or competitors gain access through rogue employees.  In conjunction with nation states or competitors, or acting on their own, rogue employees present an acute threat, and a particularly difficult one to uproot because these individuals have legitimate access to the systems.  One of the authors of this article, when he was an Assistant U.S. Attorney, prosecuted a rogue employee of a U.S. defense contractor who had access to the contractor’s library of export-controlled military technical drawings because she was a computer systems analyst.   

There are two main methods of protection of trade secrets and proprietary information: (i) hardening the physical and electronic systems that protect the information, and (ii) actively managing and limiting the amount of confidential information that a company holds. 

As to the first, protection can take many forms.  Beyond passwords, encryption, monitoring, and testing, companies should deploy segmented networks or off-network computers to restrict access to the company’s crown jewels to only those who truly “need to know.”  This segmentation should include restriction from the IT staff themselves.  In so doing, companies should take care to deploy IT and physical security resources wisely so that the state-of-the-art portion of the defense is protecting the proprietary data, and lesser protections are given to general operating data and public information.  Non-tech protections are important as well, particularly including (i) rigorous badging programs that control physical access to portions of the facility, and (ii) a renewed focus on the hiring/on-boarding and firing/off-boarding of technical, sales, and IT staff. 

Second, recognizing that there will be breaches if the information is important enough, companies must manage their proprietary processes and data in such a way as to limit their existence.  Companies must regularly audit the location of their crown jewels to purge excess copies.  Systemic deletion is a big part of reducing a secret’s footprint and lowering risk.  As Benjamin Franklin wrote: “Three can keep a secret, if two of them are dead.”   

Restricting third-party vendor access to trade secrets is also key.  Non-disclosure agreements are often not worth the paper they are written on, particularly if a vendor has already disseminated the secret, negligently or maliciously, and is undercapitalized such that it cannot make the victim company whole.  Dissemination to a vendor also means trusting that the vendor’s systems are protected at least as well as the owner’s, a point often overlooked in the hiring process or addressed with indemnities and insurance rather than inspection or auditing.   

Finally, proprietary processes or products that are patentable should be patented as soon as possible, so that they are not lost during the runway phase before robust legal protection attaches.  

B. Business Interruption and Destructive Malware

Some of the earliest threats to corporate computer systems were brute force attacks that were no different from vandalism.  These attacks would shut down the corporate website or change its front page to something embarrassing.  Recent versions of this, however, are far more aggressive, more planned, and more sinister in nature than vandalism. 

The overall threat is described as a Denial of Service (“DoS”) attack when it originates from a single Internet connection, and a Distributed DoS (“DDoS”) attack when it is launched from multiple Internet connections.  These can include attacks that shut down websites, prevent employees from logging into their work networks, or disrupt cell phone or point-to-point communications.  Such attacks are common.  In early 2016, for example, a hacking group took down both the BBC’s website and Donald Trump’s campaign website using a form of DDoS attack.   

The costs to a company from losing its ability to do business in this way can be significant, as can the cost of reconstructing destroyed or locked records if the damage is permanent.  Besides a good initial defense to prevent such attacks, a company’s resilience depends on its contingency planning, the robustness of its disaster recovery and data backup, and the existence of redundant systems for real-time communication or customer interfaces. 

A common attack, with a profit motive, is “ransomware.”  In a ransomware attack, the criminal penetrates a company’s systems and encrypts its data, withholding the decryption key unless and until the company or individual makes a payment.  The amount sought is usually relatively low, and embarrassed or desperate companies often pay.  For example, in February 2016 a hospital in Los Angeles paid $17,000 in Bitcoin digital currency to unlock its systems after a ransomware attack. 

Some attacks have been linked to nation-state actors, and no ransom is offered before systems are destroyed.  The November 2014 attack by North Korea on the computer systems of Sony Pictures included hard drive-erasing malware.   

The federal government has made efforts to fight these attacks, but it has relied in large part on private industry; as an analogy, there are far fewer police officers than there are residential homes to defend from burglary.  In March 2015, the Federal Financial Institutions Examination Council, the federal interagency body composed of five banking regulators, including the CFPB, released a Joint Statement on Destructive Malware.  The statement urged banks to prepare for destructive malware attacks as they would for a natural disaster, and then some, warning that such a cyberattack could reach both primary systems and “backup data centers (e.g., mirrored sites)” located elsewhere.
