Last month, the White House released a comprehensive report on the use of “big data” in the public and private sectors. Employers should pay particular attention to one of its central forecasts: the EEOC and other federal antidiscrimination agencies may begin scrutinizing how employers collect and use big data in managing their workforces.
The concept of “big data” is difficult to define. The report observed that big data generally “reflects the growing technological ability to capture, aggregate, and process an ever-greater volume, velocity, and variety of data.” In practice, “big data” describes the process by which an entity gathers massive amounts of information from social media, the internet, and other (typically electronic) sources. Websites use big data to deliver user-specific advertisements. Medical researchers and healthcare providers use it to develop targeted disease prevention methods. Financial institutions use it to better detect cyber fraud. The CIA even used big data to track down Osama Bin Laden.
For employers, big data collection can have major benefits. It allows employers to assess the characteristics of their workforces in unprecedented detail and detect trends that, until very recently, were analytically invisible. The insights an employer gleans from that analysis may challenge long-held assumptions about the best ways to hire, promote, and fire. For example, which applicants help decrease employee attrition? What perks attract the best talent? What personality traits jibe best with my organization’s culture? Big data can offer incisive—and often surprising—answers, which many employers are beginning to use to recalibrate their approach to human resources.
But big data’s benefits may come with a risk. While acknowledging its benefits, the White House report concluded that “big data technologies can [also] cause societal harms . . . such as discrimination against individuals and groups.” Such discrimination, the report concluded, “can be the inadvertent outcome of the way big data technologies are structured and used” or, in some cases, “the result of intent to prey on vulnerable classes.”
The report gave a telling example for employers of the potential for big data to cause inadvertently discriminatory outcomes: databases designed to verify an individual’s identity. “People who have multiple surnames and women who change their names when they marry,”
the report concluded, “typically encounter higher rates of error” when an employer checks their identities. Although it declined to elaborate on precisely which protected classes or characteristics might be implicated, the report identified that disparity as a case of “potential discrimination.” Ultimately, the report recommended that the Department of Justice, the EEOC, and other federal antidiscrimination agencies “expand their technical expertise to be able to identify practices and outcomes facilitated by big data analytics that have a discriminatory impact on protected classes, and develop a plan for investigating and resolving violations of law in such cases.”
Will big data collection lead to unlawful discrimination claims? Maybe. Claims based on a theory of disparate impact—i.e., a neutral employment practice that has an unjustifiable impact on members of a particular protected class—may apply to data-driven employment practices. But there are limits to the EEOC’s reach. The Sixth Circuit recently threw out a case in which the EEOC challenged a major employer’s use of credit checks on a theory of disparate impact. The Sixth Circuit concluded that the EEOC’s theory was based on “a homemade methodology, crafted by a witness with no particular expertise to craft it, administered by persons with no particular expertise to administer it, tested by no one, and accepted only by the witness himself.”
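For readers who want a concrete sense of how a disparate-impact screen works in practice, the sketch below applies the EEOC's well-known "four-fifths" guideline—flagging any group whose selection rate falls below 80% of the highest group's rate. The function names and the applicant counts are invented for illustration; this is a simplified screen, not a substitute for a validated adverse-impact analysis.

```python
# Hypothetical illustration of the EEOC "four-fifths" guideline for
# screening a data-driven selection criterion. All names and numbers
# are invented for the example.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}

# Invented applicant/selection counts for two groups.
rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(30, 75),   # 0.40
}
flags = four_fifths_check(rates)
# group_b's rate (0.40) is below 0.8 x 0.60 = 0.48, so it is flagged
# for closer review; group_a, with the highest rate, is not.
```

A flag under this screen is only a starting point for review—courts and agencies also look at statistical significance and the business justification behind the criterion.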
As employers begin to incorporate big data into their human resources functions, they should keep several key concepts in mind:
Back up new insights with business justifications. The central benefit of big data is its potential to reveal powerful new criteria with which employers can hire, promote, and fire more effectively. By virtue of their newness, though, many of those criteria will be legally untested in the context of state and federal discrimination law; some might arguably implicate protected classes or characteristics. When an employer discovers a new, data-driven hiring criterion, it should carefully assess and document the neutral business justification underlying the criterion.
Where is my employee data coming from? Garbage in, garbage out: if an employer derives its data from a potentially discriminatory source, its use may lead to discriminatory outcomes. The White House report gave a good example of that pitfall. The city of Boston developed a smartphone app that would detect potholes whenever a smartphone-carrying driver drove over one. The app sought to improve the delivery of public works services; the more Boston knows about where its (many, many) potholes are located, the more quickly it can send a truck to repair them. However, as the app’s developers realized, the poor and elderly are comparatively less likely to carry smartphones. Thus, if the city relied solely on personal smartphone data, its improvements in service delivery would be concentrated in richer, younger neighborhoods. (The city eventually deployed the app only to public street inspectors who examine each part of the city equally.) Similarly, when incorporating big data into their human resources functions, employers should carefully consider the source of the data and assess whether it might lead to a discriminatory outcome.
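The sampling problem behind the pothole example can be made concrete with a toy simulation. Assuming (hypothetically) that two neighborhoods have identical road conditions but different smartphone ownership rates, the reported counts end up tracking ownership rather than reality. All numbers below are invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the toy example is reproducible

# Toy simulation of the report's pothole-app example. Both neighborhoods
# have the same true number of potholes; each pothole is reported only if
# a smartphone-carrying driver happens to hit it. Ownership rates are
# assumptions invented for this sketch.
TRUE_POTHOLES = 100
ownership = {"affluent": 0.9, "lower_income": 0.4}

reports = {
    hood: sum(random.random() < rate for _ in range(TRUE_POTHOLES))
    for hood, rate in ownership.items()
}
# The reported counts diverge even though road conditions are identical,
# so repair crews guided by this data would favor the affluent area.
```

The same logic applies to workforce data: if the collection channel under-represents a group, analyses built on that data will quietly inherit the skew.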
Keep the “human” in human resources. For all its potential, big data will never displace the value of a face-to-face interview. To mitigate the risk of a discrimination claim, consider providing parallel, non-data-driven opportunities to applicants and employees who might lack the qualifications or traits that big data deems desirable. In other words, let big data inform your decision-making, but be cautious about letting it dictate a particular outcome.