The New Jersey Attorney General and the Division on Civil Rights have issued new guidance on algorithmic discrimination explaining how AI tools might be used in ways that violate the New Jersey Law Against Discrimination (NJLAD). The NJLAD applies to employers in New Jersey, and some of its requirements overlap with those of the new state "comprehensive" privacy laws, particularly their provisions on automated decisionmaking. Those privacy laws, however, typically do not apply in the employment context (with the exception of California's). This New Jersey guidance, which mirrors what we are seeing in other states, is a reminder that privacy practitioners should keep AI discrimination in mind beyond the consumer context.
The division released the guidance last month (as reported in our sister blog) to assist businesses as they vet automated decision-making tools, and in particular to help them avoid unfair bias based on protected characteristics like sex, race, religion, and military service. The guidance clarifies that the law prohibits "algorithmic discrimination," which occurs when artificial intelligence (or an "automated decision-making tool") creates biased outcomes based on protected characteristics. Key takeaways about the division's position, as articulated in the guidance, are listed below and can be added to practitioners' growing rubric of requirements under the patchwork of privacy laws:
- The design, training, or deployment of AI tools can lead to discriminatory outcomes. For example, the design of an AI tool may skew its decisions, or its decisions may be based on biased inputs. Similarly, data used to train a tool may incorporate the developers' biases, which are then reflected in the tool's outcomes. And when a business deploys a tool incorrectly, whether intentionally or unintentionally, the resulting outcomes can compound bias over time.
- The mechanism and type of discrimination do not matter for liability purposes. According to the guidance, whether discrimination occurs through a human being or through an automated tool is immaterial: if a covered entity discriminates, it has violated the NJLAD. Likewise, it does not matter whether the discrimination is intentional (disparate treatment) or the result of disparate impact. Importantly, an employer that uses an AI tool that disproportionately impacts a protected group could be liable.
- AI tools might not account for reasonable accommodations and thus could produce discriminatory outcomes. The guidance points to specific scenarios that could affect employers and employees. For example, an AI tool that measures productivity may flag for discipline an individual who has a timing accommodation due to a disability or who needs time to express breast milk. If the tool does not take these accommodations into account, the result could be discriminatory.
- Businesses are liable for algorithmic discrimination even if they did not develop the tool or do not understand how it works. Given this position, employers and other covered entities need to understand the AI tools and automated decision-making processes they use and regularly assess their outcomes after deployment.
- Businesses and employers can take steps to mitigate risk. The guidance recommends that quality control measures be in place for the design, training, and deployment of any AI tools. Businesses should also conduct impact assessments and regular bias audits, both pre- and post-deployment (a minimal sketch of one such audit metric appears below). Employers and covered entities should also provide notice about their use of automated decision-making tools.
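For practitioners asked to operationalize the bias-audit recommendation, one common starting point (not prescribed by the New Jersey guidance) is a disparate impact check such as the EEOC's "four-fifths" rule of thumb, which compares selection rates across groups. The sketch below is illustrative only: the data are hypothetical and the helper function is our own naming, not part of any mandated methodology.

```python
# Illustrative only: a minimal disparate impact check on hypothetical
# hiring-tool outcomes. The 0.8 threshold reflects the EEOC "four-fifths"
# rule of thumb; the New Jersey guidance does not prescribe this metric.
from collections import defaultdict

def adverse_impact_ratios(records, group_key="group", selected_key="selected"):
    """Return each group's selection rate divided by the highest group's rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        selected[r[group_key]] += 1 if r[selected_key] else 0

    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening-tool outcomes
outcomes = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": True}, {"group": "A", "selected": False},
    {"group": "B", "selected": True}, {"group": "B", "selected": False},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

A check like this is only one piece of a bias audit; counsel and technical teams should pair any statistical screen with the impact assessments and documentation the guidance recommends.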
Putting it into Practice: This new guidance may foreshadow increased attention by the New Jersey division to employers' use of AI tools. New Jersey is not the only state to contemplate AI use in the employment context; Illinois amended its employment law last year to address algorithmic bias in employment decisions. Privacy practitioners should keep these employment laws in mind when developing their privacy requirements rubrics.
James O'Reilly also contributed to this article.