Recent high-profile data breaches have placed the security of personally identifiable information (PII) at the forefront of many organizations’ concerns. Protecting PII and other private data can be a significant undertaking, and legal and IT departments must work together to accomplish it. Identifying the various sources of potential breaches and the places where protected data is stored can be daunting, as companies increasingly adopt new technologies and tools that spread PII across the organization and beyond. Recently, the Electronic Discovery Reference Model (EDRM) released a model for systematically identifying and reducing the risks associated with moving protected data.
The EDRM is widely viewed by the e-discovery industry as one of the leading developers of standards for various e-discovery issues. As such, the EDRM’s data group created the Privacy and Security Risk Reduction Model (PSRRM) to provide organizations with a process for mitigating the risks associated with producing or exporting private and protected information.
While the PSRRM mainly focuses on the production and export of private data, the process nonetheless may serve as a useful exercise for all organizations that store this type of data.
The process consists of six steps: Identify Risk, Identify Available Data, Create Filters, Execute, Evaluate, and Quarantine or Disposition.
Identify Risk

According to the PSRRM, risk is “initially identified by an organization by stakeholders who can quantify the specific risks a particular class or type of data may pose.” Different types of private or protected data carry different risks. For example, an organization may have social security numbers, names, addresses, and dates of birth, or it may have credit card and other financial information. Depending on the type of data maintained, the potential risks could range from financial fraud (including account takeover) to insurance and/or medical fraud.
Identify Available Data
While it may sound obvious, organizations must be able to account for all sources of protected data within their enterprise. Scoping such a process can be challenging at first, but organizations should consider the following as available sources of protected data: email repositories, file shares, workstations and laptops, “cloud” sources such as Google email and Google Drive, databases, and even smartphones and tablets.
Create Filters

Once the various holding places have been identified, the PSRRM suggests creating filters to “catch” private data. Filters can include keywords, date ranges, file types, and even more granular criteria such as social security numbers.
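To make the filtering step concrete, the sketch below shows one way such a filter might be assembled. The keywords, file types, and pattern are purely illustrative assumptions, not part of the PSRRM itself; a real deployment would tune these criteria with legal and IT stakeholders.

```python
import re

# Illustrative PII filter in the spirit of the PSRRM's "create filters" step.
# The pattern, keywords, and file types below are hypothetical examples only.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # social security numbers
KEYWORDS = {"confidential", "date of birth", "ssn"}  # illustrative keywords
FILE_TYPES = {".docx", ".xlsx", ".csv", ".txt"}      # illustrative file types

def matches_filter(filename: str, text: str) -> bool:
    """Return True if a document is 'caught' by any filter criterion."""
    # Only scan file types the organization has chosen to review.
    if not any(filename.lower().endswith(ext) for ext in FILE_TYPES):
        return False
    lowered = text.lower()
    # Keyword hit or an SSN-like pattern flags the document.
    if any(keyword in lowered for keyword in KEYWORDS):
        return True
    return bool(SSN_PATTERN.search(text))
```

A document is flagged if it is of an in-scope file type and contains either a keyword or an SSN-like number; more granular criteria (date ranges, custodians) would slot into the same function.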
Execute

It may seem obvious that the next step is to execute the filters to identify and catch protected data, but the important point here is to evaluate the results for accuracy and completeness. Not all data is created equal; a filter that works against a database will most likely not produce the same results against an email repository. This step requires legal teams to work in concert with IT personnel to understand and evaluate the accuracy of the results.
Evaluate

As noted above, the results of the filters should be compared against the anticipated results. If, for example, an organization knows it maintains social security numbers for customers but the results do not account for that data, additional filters are necessary. Moreover, as the PSRRM notes, the output of the filters may identify additional risky data or data sources, in which case the new data should itself be subjected to the risk-reduction process.
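The evaluation step described above can be sketched as a simple gap check: compare what the filters caught against records the organization already knows it holds. The function name and inputs are hypothetical, chosen only to illustrate the comparison.

```python
# Hypothetical evaluation check: which known sensitive records did the
# filters fail to catch? A non-empty result signals that additional or
# refined filters are needed, per the PSRRM's evaluate step.
def find_gaps(expected_records: set[str], filter_hits: set[str]) -> set[str]:
    """Return known sensitive records that the filters did not identify."""
    return expected_records - filter_hits
```

If `find_gaps` returns anything, the filters are incomplete for that data source and should be revised before moving on to quarantine.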
Quarantine or Disposition
Protecting PII and private data necessarily includes segregating sensitive data from other, less-protected types of information. There are several options for quarantining private data, including migration to more secure servers, extraction, and, in some cases, even deletion. We often counsel clients on the risks associated with trivial, redundant, and obsolete data; to the extent that private data falls within one of these areas, deletion may be the appropriate route. The goal, however, is the same: segregate private data from less-secure data within the organization.
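The quarantine option might look like the following sketch, which simply moves flagged documents into a restricted directory and records what was segregated. The directory layout and logging are assumptions for illustration; a production process would also address access controls, chain of custody, and defensible deletion.

```python
import shutil
from pathlib import Path

# Hypothetical quarantine step: move documents flagged by the filters into a
# restricted directory, returning a simple record of what was segregated.
def quarantine(flagged: list[Path], quarantine_dir: Path) -> list[str]:
    quarantine_dir.mkdir(parents=True, exist_ok=True)
    moved = []
    for path in flagged:
        dest = quarantine_dir / path.name
        shutil.move(str(path), str(dest))  # segregate from less-secure data
        moved.append(path.name)
    return moved
```

Whether the right disposition is migration, extraction, or deletion, the mechanical step is the same: the flagged data leaves its original, less-secure location.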
The goal of this exercise, as the model illustrates, is to reduce the risk of exposing protected data. Moreover, the process is iterative rather than linear, making constant refinement and improvement of procedures essential aspects of the model.