This month, the New Jersey Attorney General’s office (NJAG) joined nationwide efforts to regulate artificial intelligence technologies, or at least to clarify how existing law applies to them, in this case the New Jersey Law Against Discrimination, N.J.S.A. § 10:5-1 et seq. (LAD). In short, the NJAG’s guidance states:
the LAD applies to algorithmic discrimination in the same way it has long applied to other discriminatory conduct.
If you are not familiar with it, the LAD generally applies to employers, housing providers, places of public accommodation, and certain other entities. The law prohibits discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. According to the NJAG’s guidance, the LAD protections extend to algorithmic discrimination (discrimination that results from the use of automated decision-making tools) in employment, housing, places of public accommodation, credit, and contracting.
Citing a recent Rutgers survey, the NJAG pointed to high levels of adoption of AI tools by New Jersey employers. According to the survey, 63% of NJ employers use one or more such tools to recruit job applicants and/or make hiring decisions. The guidance defines these AI tools broadly to include:
any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process…such as generative AI, machine-learning models, traditional statistical tools, and decision trees.
The NJAG guidance examines several ways that AI tools may contribute to discriminatory outcomes:
- Design. Here, the choices a developer makes in designing an AI tool could, purposefully or inadvertently, result in unlawful discrimination. A tool’s results can be influenced by the outputs it is designed to provide, the model or algorithms it uses, and the inputs it assesses, any of which can introduce bias into the automated decision-making process.
- Training. Because AI tools must be trained to learn the intended correlations or rules relating to their objectives, the datasets used for training may embed biases or reflect institutional and systemic inequities that affect the outcome. The datasets used in training can thus drive unlawful discrimination (see the illustrative sketch following this list).
- Deployment. The NJAG also observed that AI tools could be deployed to purposely discriminate, or used to make decisions for which the tool was not designed. These and other deployment issues could lead to bias and unlawful discrimination.
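To make the training concern concrete, consider the toy sketch below. Even when a protected characteristic is excluded from a dataset, a correlated “neutral” field can stand in for it, and a tool trained to reproduce historical decisions can reproduce the bias embedded in them. Every field name and value here is a hypothetical assumption for illustration, not something drawn from the NJAG guidance.

```python
# Illustrative sketch only: a toy example of how a facially neutral input
# can act as a proxy for a protected characteristic in training data.
# All field names and values are hypothetical, not from the NJAG guidance.

# Historical hiring records used to train a screening tool. The protected
# characteristic is never an input, but "zip_code" is correlated with it
# in the underlying population.
training_records = [
    {"zip_code": "07001", "years_exp": 5, "hired": 1},
    {"zip_code": "07001", "years_exp": 4, "hired": 1},
    {"zip_code": "07102", "years_exp": 6, "hired": 0},
    {"zip_code": "07102", "years_exp": 5, "hired": 0},
]

# A model fit to these records will learn to key on zip_code, because it
# perfectly separates the historical outcomes, reproducing past bias even
# though no protected characteristic appears anywhere in the data.
def learned_rule(record: dict) -> int:
    return 1 if record["zip_code"] == "07001" else 0

for r in training_records:
    assert learned_rule(r) == r["hired"]  # the proxy alone explains every outcome
print("zip_code alone reproduces every historical hiring decision")
```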
The NJAG notes that its guidance does not impose any new or additional requirements beyond those in the LAD, nor does it establish any rights or obligations for any person beyond what exists under the LAD. However, the guidance makes clear that covered entities can violate the LAD even if they have no intent to discriminate (or do not understand the inner workings of the tool) and, just as the EEOC noted in guidance it issued under Title VII, even if a third party was responsible for developing the AI tool. Importantly, under New Jersey law, this includes liability for disparate treatment and disparate impact that may result from the design or use of AI tools.
As we have noted, it is critical for organizations to assess, test, and regularly evaluate the AI tools they seek to deploy, for many reasons, including to avoid unlawful discrimination. These measures should include working closely with developers to vet the design and testing of automated decision-making tools before they are deployed. In fact, the NJAG specifically cited many of these steps as ways organizations may decrease the risk of liability under the LAD. Maintaining a well-thought-out governance strategy for managing this technology can go a long way toward minimizing legal risk, particularly as the law develops in this area.
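By way of illustration only, one common starting point for the kind of testing described above is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a selection rate for one group below 80% of the rate for the highest-selected group is a conventional red flag for adverse impact. The sketch below assumes hypothetical selection counts and is not a substitute for a statistically rigorous audit designed with counsel.

```python
# A minimal sketch of one widely used screening check: the "four-fifths
# rule" from the EEOC's Uniform Guidelines on Employee Selection
# Procedures. The counts below are hypothetical, and this check alone is
# not a substitute for a full audit designed with counsel.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants selected by the tool."""
    return selected / applicants

# Hypothetical (selected, total applicants) counts by group, as reported
# out of an automated resume-screening tool.
outcomes = {"Group A": (48, 100), "Group B": (30, 100)}

rates = {group: selection_rate(s, n) for group, (s, n) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # impact ratio vs. the highest-selected group
    flag = "review for adverse impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```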