Much attention has been paid, and rightfully so, to the broad-based risks associated with using artificial intelligence (AI) in the hiring and screening process.
But what about smaller-scale AI use?
A new study by Resume Builder that surveyed more than 1,300 U.S. managers with direct reports revealed striking information about managers' use of AI in connection with individual performance management and personnel decisions. As reported by the study:
- 6 in 10 managers rely on AI to make decisions about their direct reports.
- Of those managers, most use AI to help determine raises (78%), promotions (77%), layoffs (66%), and even terminations (64%).
- More than 1 in 5 frequently let AI make final decisions without human input.
- Two-thirds of managers using AI to help manage employees haven’t received any formal AI training.
That doesn’t even include managerial use of AI to build employee development plans (94%, according to the same study), assess employee performance (91%), and draft performance improvement plans (88%).
That AI is already playing such a major role at so many junctures and decision points in the employment lifecycle is striking in itself. Couple that with the fact that some managers are relying on AI completely, and that the majority of them haven't been trained in how to properly use AI, and the risks are apparent.
Confidentiality and data privacy concerns aside, a manager’s use of AI in making personnel decisions increases a company’s exposure to employee lawsuits challenging those decisions.
At a fundamental level, employers are obligated to protect their employees from discrimination at work by a variety of federal, state, and local laws. If an employee challenges an adverse personnel decision in court (termination, denial of promotion, etc.), employers must be able to articulate the legitimate business reason(s) why the decision in question was made. From there, evidence is gathered during the discovery process to determine the actual reason for the decision—i.e., whether it was motivated by discrimination or the legitimate explanation provided by the employer.
Therein lies the problem for employers. The "black box" nature of many AI systems makes it, at best, difficult for an employer to fully explain how a particular decision was made (or why a particular performance review was written the way it was). In a discrimination lawsuit, this may leave an employer unable to explain or defend how AI generated, or influenced, a particular outcome, making it more difficult to demonstrate that the decision was based on lawful criteria. Because they are less transparent, decisions influenced by AI may be more vulnerable to attack.
The compounding concern is that, as a general matter, AI systems can learn from the data they are trained on, and if that data contains historical or systemic biases, there is a risk that the AI will replicate or amplify them. Employers do not want their managers relying on tools that could be compromised by bias.
So, what should employers do? Establishing a policy with clear guardrails around the use of AI in performance management is critical. At a minimum, that policy should:
- Identify AI platforms that are approved for use. Make sure those platforms are appropriately vetted beforehand.
- Require managers to participate in training on the ethical and legal considerations of using AI. With the rapid expansion of AI capabilities and use, this will become increasingly important over time.
- Prohibit any confidential information from being entered into AI platforms without explicit approval.
- Require that AI users independently verify the output of an AI system before relying on it.
- Require managers to notify HR, and obtain its approval, whenever AI is used in connection with personnel decisions or performance management steps.
Beyond these fundamental considerations, laws and regulatory guidance (at the federal, state, and local levels) will only continue to proliferate, with each law posing its own unique compliance requirements. Employers should consult with their attorneys to develop policies, practices, and agreements that protect their interests.