Artificial Intelligence in Recruitment: It’s Algorithmic, But It May Not Be Private
Monday, August 5, 2024

The past few years have seen a sharp increase in the use of artificial intelligence (AI) across a variety of industries and workplaces. Many businesses have implemented AI to streamline recruitment and hiring in the hopes of making the process fairer and more efficient. But while AI has not eradicated bias in hiring as some had hoped, employers may have given even less thought to how AI tools manage their employee and applicant data. A number of data privacy and security issues can arise from AI recruitment tools if employers are not vigilant about complying with applicable data privacy laws.

The very nature of AI is that it “learns” by ingesting and processing large amounts of data, which means it must collect and store a great deal of sensitive employee and candidate information. Every time someone inputs data into a publicly accessible AI system, there is a risk that confidential information will be shared. With the AI environment growing and changing so rapidly, employers are also left to make sense of a patchwork of data privacy laws dictating how they may use and store this data. Some state privacy laws, such as the Virginia Consumer Data Protection Act (VCDPA), exempt employee data from their regulatory requirements, while states like California, through the California Privacy Rights Act (CPRA), require businesses to notify candidates of their data-handling practices. If the American Privacy Rights Act, now pending in Congress, becomes law, it would unify this patchwork of state laws by requiring employers to notify applicants and employees that AI is being used and to give them the opportunity to opt out. In the meantime, while this data is essential for recruitment and hiring, employers need to guard against security breaches and remain compliant with applicable laws.

Navigating Privacy Concerns

Before implementing AI as a tool for candidate assessment, employers need to understand what kind of information the tool will collect. When using outside vendors, inquire what anti-bias and privacy safeguards, if any, are in place. For example, the Americans with Disabilities Act (ADA) generally prohibits employers from inquiring about physical or mental disabilities and using that information as a factor in hiring; if they are not careful, employers could find themselves on the receiving end of a lawsuit because their AI software was collecting such information from candidates and using it to decide whether to advance them. Employers should therefore make sure that the AI tools they use collect only essential data, and should not share with those tools any information they would not want disclosed to a third party.
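
For employers that build or configure their own intake pipelines, data minimization can be enforced in code before anything reaches an AI vendor. The Python sketch below illustrates the idea; the field names and the commented-out vendor call are hypothetical assumptions for illustration, not any real vendor's API.

    # A minimal sketch of data minimization, assuming hypothetical field names.
    # Only job-relevant fields are forwarded; everything else is dropped.
    ALLOWED_FIELDS = {"years_experience", "skills", "certifications", "work_history"}

    # Fields that laws such as the ADA counsel against collecting pre-offer.
    PROHIBITED_FIELDS = {"disability_status", "medical_history", "date_of_birth"}

    def minimize(candidate: dict) -> dict:
        """Return a copy of the record containing only allowlisted fields."""
        filtered = {k: v for k, v in candidate.items() if k in ALLOWED_FIELDS}
        leaked = PROHIBITED_FIELDS & candidate.keys()
        if leaked:
            # Log field names only (never values) so the intake form can be fixed.
            print(f"warning: dropped prohibited fields: {sorted(leaked)}")
        return filtered

    candidate = {
        "years_experience": 7,
        "skills": ["python", "sql"],
        "disability_status": "prefer not to say",  # must never reach the vendor
    }
    safe_record = minimize(candidate)
    # vendor.screen_candidate(safe_record)  # hypothetical vendor call

The design choice here is an allowlist rather than a blocklist: any new, unreviewed field is excluded by default until someone affirmatively decides it is essential.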

As AI technology continues to evolve rapidly, companies would do well to avoid AI systems that infer “proxy” variables for private or personal attributes in the name of increased accuracy, even though there is no comprehensive guidance on the practice yet. Some AI vendors claim their software can discern a candidate’s sexual orientation through facial recognition, or scour a candidate’s social media to infer race or political affiliation. While most state legislatures and Congress have not yet enacted laws governing the analysis and use of such information by advancing AI technology, employers should err on the side of caution to avoid exposure to liability for discriminatory hiring practices.
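
One practical safeguard is a periodic proxy audit: checking whether any model input correlates strongly with a protected attribute in a voluntarily self-reported audit sample. The Python sketch below shows the idea using only the standard library; the 0.4 threshold and the toy data are illustrative assumptions, not legal standards.

    # A minimal sketch of a proxy audit over a small, consented audit sample.
    # Requires Python 3.10+ for statistics.correlation (Pearson r).
    from statistics import correlation

    def flag_proxies(features: dict[str, list[float]],
                     protected: list[float],
                     threshold: float = 0.4) -> list[str]:
        """Return feature names whose |Pearson r| with the protected
        attribute exceeds the threshold, suggesting a possible proxy."""
        flagged = []
        for name, values in features.items():
            if abs(correlation(values, protected)) >= threshold:
                flagged.append(name)
        return flagged

    # Toy audit sample: 0/1 encodes a self-reported protected attribute.
    audit_features = {
        "zip_code_income_index": [0.9, 0.8, 0.2, 0.1, 0.85, 0.15],
        "typing_speed": [0.5, 0.6, 0.62, 0.48, 0.55, 0.58],
    }
    protected_attr = [1, 1, 0, 0, 1, 0]
    print(flag_proxies(audit_features, protected_attr))
    # ['zip_code_income_index'] -- a candidate for removal or further review

A flagged feature is not automatically unlawful, but it tells the employer exactly where to ask a vendor harder questions before the model is used on real candidates.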

The Need for Robust Data Protection 

Companies should implement robust data encryption to safeguard sensitive information and prevent breaches that could put them on the hook for identity theft or financial losses. AI-powered security tools can complement traditional encryption by automatically identifying suspicious data-access patterns and tightening security in response. Data protection also requires human oversight to keep candidate and employee data secure. Companies should develop and implement clear policies defining who may routinely access employee and candidate data, health histories, and biometric data. This includes providing training on how to use these AI systems and performing routine audits to confirm the tools remain fair.
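
As a concrete illustration of both points, the Python sketch below encrypts a candidate record at rest and flags an account whose daily access volume departs sharply from its own baseline. It assumes the third-party “cryptography” package; the key handling, thresholds, and field names are simplified assumptions, not a production security design.

    # A minimal sketch, assuming the third-party "cryptography" package
    # (pip install cryptography).
    import json
    from statistics import mean, stdev
    from cryptography.fernet import Fernet

    # 1. Encrypt sensitive candidate data before storing it.
    key = Fernet.generate_key()          # in practice, load from a key vault
    fernet = Fernet(key)
    record = {"name": "A. Candidate", "resume_text": "..."}
    ciphertext = fernet.encrypt(json.dumps(record).encode())
    # Decrypt only when an authorized workflow needs the data.
    plaintext = json.loads(fernet.decrypt(ciphertext))

    # 2. Flag suspicious access patterns: a day whose record-access count
    # sits far outside the account's own historical baseline.
    def is_suspicious(daily_counts: list[int], today: int,
                      z_cutoff: float = 3.0) -> bool:
        """True if today's count is more than z_cutoff standard deviations
        above this account's historical mean."""
        mu, sigma = mean(daily_counts), stdev(daily_counts)
        return sigma > 0 and (today - mu) / sigma > z_cutoff

    history = [12, 9, 14, 11, 10, 13, 12]     # typical recruiter activity
    print(is_suspicious(history, today=220))  # True: likely bulk export

The second check is deliberately simple; its value is that an anomaly triggers a human review of the account, which is exactly the oversight the policies above should require.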

Takeaways

As with any technology, employers should take comprehensive steps to prevent the disclosure of employee and candidate data when using AI systems. These steps can include discussing software options with vendors, limiting the data collected, developing clear policies on how AI should be used, training HR and recruitment teams to use these tools compliantly, and ensuring robust encryption and other data protection safeguards. Finally, because AI technology is evolving so rapidly, companies should watch for the inevitable legislation governing privacy and data use and stay current with compliance requirements.

Special thanks to Meredith McDuffie, a summer associate in Foley’s Chicago office, for her contributions to this article.
