Artificial intelligence tools are fundamentally changing how people work. Tasks that once were painstaking and time-consuming can now be completed in real time with the assistance of AI.
Many organizations have sought to leverage AI in a variety of ways. An organization can, for instance, use AI to screen resumes and identify which candidates are likely to be the most qualified, or to predict which employees are likely to leave so that retention efforts can be targeted accordingly.
One AI use that is quickly gaining popularity is employee performance management. An organization could use AI to summarize internal data and feedback on employees to create performance summaries for managers to review. By continuously collecting this data, the AI tool can help ensure that work achievements or issues are captured in real time and presented effectively on demand. This can also facilitate more frequent touchpoints for employee feedback, with less administrative burden, so that organizations can focus on having meaningful conversations with employees about the feedback they receive and recommended areas of improvement.
While the benefits of using AI have been well publicized, its potential pitfalls have drawn just as much attention. The use of AI tools in performance management can expose organizations to significant privacy and security risks, which need to be managed through comprehensive policies and procedures.
Potential Risks
- Accuracy of information. AI tools have been known to produce outputs that are nonsensical or simply inaccurate, commonly referred to as “AI hallucinations.” Rather than relying solely on an AI tool’s outputs, an organization should independently verify their accuracy. Inaccurate statements in an employee’s performance evaluation, for instance, could expose the organization to significant liability.
- Bias and discrimination. AI tools are trained on historical data from various sources, which can inadvertently perpetuate biases present in that data. In a joint statement, several federal agencies highlighted that the datasets used to train AI tools may be unrepresentative, incorporate historical bias, or correlate data with protected classes, any of which could lead to discriminatory outcomes. A recent experiment conducted with ChatGPT illustrated how these embedded biases can manifest in the performance management context.
- Compliance with legal obligations. In recent years, legislatures at the federal, state, and local levels have prioritized AI regulation to protect individuals’ privacy and data security. Last year, New York City’s AI law took effect, requiring employers to conduct bias audits before using AI tools in employment decisions (the sketch following this list illustrates the kind of calculation such an audit involves). Other jurisdictions, including California, New Jersey, New York, and Washington, D.C., have proposed similar bias audit legislation. In addition, Vermont introduced legislation that would prohibit employers from relying solely on information from AI tools when making employment-related decisions. As more jurisdictions become active in AI regulation, organizations should remain mindful of their obligations under applicable laws.
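To make the bias audit concept concrete, the sketch below computes selection rates and impact ratios from hypothetical screening outcomes, in the spirit of the four-fifths rule of thumb used in adverse impact analysis. The data, group labels, and 0.8 threshold are illustrative assumptions only; an actual audit under a law such as New York City’s must follow the methodology and independence requirements that the law and its regulations prescribe.

```python
from collections import defaultdict

def impact_ratios(decisions):
    """Compute selection rates and impact ratios per group.

    `decisions` is a list of (group, selected) pairs, where `selected`
    is True if the AI tool recommended or advanced the candidate.
    Each group's impact ratio is its selection rate divided by the
    highest selection rate observed across all groups.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1

    rates = {g: sel / total for g, (sel, total) in counts.items()}
    top_rate = max(rates.values())
    return {g: (rate, rate / top_rate) for g, rate in rates.items()}


# Hypothetical outcomes from an AI screening tool (illustrative only).
sample = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
       + [("group_b", True)] * 25 + [("group_b", False)] * 75

for group, (rate, ratio) in impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In this illustration, group_b’s impact ratio falls below 0.8, which would prompt further review; a low ratio is a signal for scrutiny, not by itself a legal conclusion.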
Mitigation Strategies
- Conduct employee training. Organizations should ensure that all employees are trained on the use of AI tools in accordance with organization policy. This training should cover the potential benefits and risks of AI tools, the organization’s policies governing their use, and the operation of approved tools.
- Examine issues related to bias. To help minimize risks related to bias in AI tools, organizations should carefully review the data and algorithms used in their performance management platforms. Organizations should also explore what steps, if any, the AI tool’s vendor has taken to reduce bias in employment decisions.
- Develop policies and procedures to govern AI use. To comply with applicable data privacy and security laws, an organization should have policies and procedures in place that address how AI is used in the organization, who has access to the outputs, with whom the outputs are shared, where the outputs are stored, and how long the outputs are kept (a simplified illustration follows this list). Each of these considerations will vary across organizations, so it is critical that the organization develop a deeper understanding of the AI tools it seeks to implement.
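As a purely illustrative sketch, the governance parameters above can be expressed as a simple structure that an internal tool could enforce. All field names, roles, and the retention period below are hypothetical placeholders rather than recommendations; appropriate values depend on applicable law and each organization’s own policies.

```python
from dataclasses import dataclass, field

@dataclass
class AIOutputPolicy:
    """Hypothetical governance parameters for AI-generated performance summaries."""
    permitted_uses: list = field(default_factory=lambda: [
        "drafting performance summaries for manager review",
    ])
    access_roles: list = field(default_factory=lambda: [
        "direct manager",
        "HR business partner",
    ])
    sharing_rule: str = "no disclosure outside HR without legal review"
    storage_location: str = "access-controlled HR system of record"
    retention_days: int = 365  # placeholder; align with applicable law and policy


def can_access(policy: AIOutputPolicy, role: str) -> bool:
    """Return True only if the role appears on the policy's access list."""
    return role in policy.access_roles


policy = AIOutputPolicy()
print(can_access(policy, "direct manager"))   # True
print(can_access(policy, "external vendor"))  # False
```

Writing the policy down in a machine-checkable form like this makes it easier to audit who actually accessed AI outputs and whether retention limits were honored.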
For organizations seeking to use AI for employee performance management, it is important to be mindful of the risks associated with its use. Most of these risks can be mitigated, but doing so requires organizations to be proactive in managing data privacy and security.