Employment Issues in Generative AI
Friday, June 30, 2023

The second webinar in our series, “Employment Issues in Generative AI,” explored the evolving impact of generative AI (or “GAI”) on the workplace, how employers can ensure the ethical and responsible use of AI applications, and how they can recognize and navigate potential legal issues under existing anti-discrimination and other laws and regulations. The presenters also offered a list of do’s and don’ts, outlined how employers should develop an AI strategy and policy, and explained how to avoid common pitfalls in AI implementation.

In the speakers’ views, companies should not look at GAI as a threat but rather as an opportunity that must be managed carefully and leveraged to benefit the organization and its employees. GAI tools can boost productivity and improve employee training and career development, but those benefits must be balanced against the risks. As the speakers noted, GAI is a new technology that may not be wholly understood by users and currently has some inherent weaknesses related to transparency (i.e., users do not have complete knowledge of the datasets and user inputs used to train a particular GAI system) and accountability (e.g., the known problem of current GAI tools producing erroneous results, or so-called “hallucinations”).

Overall, the speakers stressed that AI should assist, not replace, human decision-making and that organizations should not rush implementation but instead carefully identify specific tasks or processes that would benefit from automation and augmentation.  They also reiterated that organizations need to continually monitor and evaluate the performance of GAI tools, address bias and fairness concerns, and actively work to minimize the potential for biased decision-making by GAI tools.  As with any emerging technology, keeping track of the evolving legal landscape is essential.

While the use of AI and generative AI applications in the employment arena is at a nascent stage, the technologies are increasingly being used for a wide array of functions. Examples include drafting and reviewing job postings (and even extending the reach of job posts to a broader, more diverse applicant pool), screening résumés, conducting or scoring applicant interviews, predicting staff attrition, managing employees, facilitating the onboarding process, and creating various administrative efficiencies for routine office document review and production.

According to the presenters, the time for employers to plan for GAI usage is now, since job applicants and employees already use it and organizations must maintain a competitive edge. Among other things, the presenters suggested that organizations appoint a “GAI Lead” and related steering committee to take charge of AI integration and identify which tasks are suitable for automation and which are not. The risk-benefit analysis weighs the costs of AI implementation, the legal risks, and the unknowns against the benefits of better productivity, reduced costs, and employee satisfaction.

Algorithmic Bias

One of the major concerns for employers using AI tools is algorithmic bias, which can occur when AI tools unintentionally produce unequal or prejudiced outcomes due to their reliance on biased, inaccurate, or discriminatory datasets, thus perpetuating existing disparities and compromising fairness. Understanding the potential for algorithmic bias is vital to robust compliance with existing anti-discrimination laws like Title VII of the Civil Rights Act and the Americans with Disabilities Act (ADA). In these areas, the presenters stressed that employers should consider whether their AI-aided employment decisions are infected by algorithmic bias, which may turn on the extent to which an organization relies on GAI tools to make employment decisions and the extent to which those tools have been vetted and tested by the vendor and the employer to minimize the risk of algorithmic bias. Employers must also focus on what accommodations may be needed for certain applicants, to ensure AI tools are not screening out job applicants who score lower on a given analysis because of a disability or another factor that limits their ability to perform the AI tool’s “test” (similar considerations apply to potential age discrimination claims).

The presenters also highlighted the potential for violation of other laws like the Fair Credit Reporting Act (FCRA), which requires employers to obtain specific consent before running background checks for employment purposes using a third-party “consumer reporting agency.” While the presenters suggested there were many arguments against a finding that a GAI tool in and of itself would constitute a “consumer reporting agency,” the answer could vary depending on the facts and circumstances, the nature of the tool, and the purpose for which it was used, among other things.

Beyond existing federal law, the presenters highlighted a “groundbreaking” local New York City automated employment decision tool (AEDT) law that will become effective on July 5, 2023. The new law regulates the use of certain AI tools that substantially assist in making employment-related decisions in hiring and promotion. One primary requirement of the law is that an AEDT that fits the statutory definition (and accompanying regulations) must be audited before being used, with such results being made public (and the AEDT undergoing annual audits after that).
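The webinar did not detail what such an audit must compute, but bias audits of hiring tools commonly compare selection rates across demographic categories and report impact ratios relative to the most-selected category. The sketch below is illustrative only, using hypothetical group names and data, and is not a substitute for an audit conducted under the law and its accompanying regulations:

```python
# Illustrative only: a minimal sketch of the selection-rate and impact-ratio
# calculation a bias audit of a hiring tool typically involves.
# The group names and records below are hypothetical.
from collections import defaultdict

# Hypothetical records: (demographic category, selected by the tool?)
records = [
    ("Group A", True), ("Group A", False), ("Group A", True),
    ("Group B", True), ("Group B", False), ("Group B", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for category, was_selected in records:
    total[category] += 1
    selected[category] += was_selected  # True/False counts as 1/0

# Selection rate per category, then impact ratio relative to the
# category with the highest selection rate.
rates = {c: selected[c] / total[c] for c in total}
best_rate = max(rates.values())
impact_ratios = {c: rate / best_rate for c, rate in rates.items()}

for category, ratio in impact_ratios.items():
    print(f"{category}: selection rate {rates[category]:.2f}, impact ratio {ratio:.2f}")
```

An impact ratio well below 1.0 for a given category would flag the tool for closer scrutiny; the thresholds that matter and the results that must be publicly disclosed depend on the law’s accompanying regulations.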

Final Takeaways

The presenters highlighted the importance of developing a formal AI policy, as merely delegating employment decisions to AI will not insulate an employer from potential liability under anti-discrimination and other laws. A comprehensive policy concerning employee use of AI in the workplace should outline company expectations, best practices, and limitations on AI usage, and should attempt to:

  • Manage job applicant use of GAI by addressing GAI usage in job application materials and establishing policies for GAI usage by applicants.

  • Define employee training obligations that highlight GAI risks (erroneous output and confidentiality considerations) and include a list of approved workplace tools and uses.

  • Develop a system for labeling content created by GAI.

  • Establish an effective oversight and reporting mechanism.

  • Make a clear statement of the potential consequences for policy violations.

  • Develop a regular policy review schedule and amend the policy as needed based on new legal and regulatory developments.


Webinar: A Series Focused on Legal Issues Associated with Emerging Generative Artificial Intelligence
