As we enter 2025, the rapid growth of artificial intelligence (AI) presents both transformative opportunities and pressing legal challenges, particularly in the workplace.
Employers must navigate an increasingly complex regulatory landscape to ensure compliance and avoid liability. With several states proposing AI regulations that would impact hiring practices and other employment decisions, it is critical for employers to stay ahead of these developments.
New York
New York’s proposed legislation, which if passed would become effective January 1, 2027, would provide guardrails for New York employers that implement AI to assist in hiring, promoting, or making other decisions pertaining to employment opportunities. Unlike New York City Local Law 144, which covers only certain employment decisions, the New York Artificial Intelligence Consumer Protection Act (“NY AICPA”), A 768, takes a risk-based approach to AI regulation, much like that of Colorado’s SB 24-205. The NY AICPA would specifically regulate all “consequential decisions” made by AI, including those having a “material legal or similarly significant effect” on any “employment or employment opportunity.” The bill imposes compliance obligations on “developers” and “deployers” of high-risk AI decision systems.
If passed, NY AICPA would require developers to:
- Use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. This would include undertaking bias and governance audits by an independent third-party auditor qualified by the State’s Attorney General.
- Make available to deployers and other relevant developers documentation describing the intended uses and benefits, the known harmful or inappropriate uses, the governance parameters, the training data, and the expected outputs of the AI system.
- Publish a statement summarizing the types of high-risk AI decision systems it has developed or modified, currently makes available to others, and how it manages risks of algorithmic discrimination.
NY AICPA imposes similar requirements on deployers (which would include employers using AI systems to aid in employment decision-making). Additionally, deployers must:
- Implement a risk management policy and program to govern the use of high-risk AI decision systems, which would be evaluated based on NIST’s current AI Risk Management Framework or a similar risk management framework; the size and complexity of the deployer; the nature and scope of the AI deployed; and the sensitivity and volume of data processed in connection with the AI system.
- Complete an impact assessment, at least annually and within 90 days after an intentional and substantial modification, of the AI system.
- Publish on its website a statement summarizing the types of high-risk AI decision systems used by the deployer; how the deployer manages risks of algorithmic discrimination; and the nature, source, and extent of the information collected and used by the deployer.
- When using the AI system to make, or be a substantial factor in making, a consequential decision concerning an individual, (i) notify the consumer of the use of the AI system; (ii) provide the consumer with a statement disclosing the purpose of the AI system and nature of the consequential decision, contact information for the deployer, a plain-language description of the AI system, and where to access the website statement summarizing its AI use.
- If the deployer uses the AI system to make an adverse consequential decision, disclose to the consumer the principal reason for reaching that decision, and provide an opportunity to correct any “incorrect personal data” that the AI system processed in making the decision and an opportunity to appeal the decision.
Deployers/employers, however, can contract with developers to bear many of these compliance obligations if certain conditions are met.
The impact assessments required by NY AICPA would analyze the reasonably foreseeable risks of algorithmic discrimination and identify steps to mitigate those risks. These audits would specifically evaluate whether the AI system disproportionately affects certain groups based on protected characteristics. If the audit identifies bias in the AI, the employer would have to take corrective action, including training the system to recognize and avoid discriminatory patterns. If the AI system plays a significant role in making an employment decision, such as hiring or firing, employers must be prepared to justify the decision and offer the employee an opportunity to appeal it, among other things.
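To make the disparate-impact portion of such an assessment concrete, the sketch below applies the EEOC’s familiar four-fifths rule to hypothetical selection rates from an AI-assisted hiring tool. The bill does not prescribe any particular statistical test; the function names, threshold, and sample figures are illustrative assumptions only.

```python
# Minimal sketch of a disparate-impact screen an impact assessment might include.
# The four-fifths rule, group labels, and sample data below are illustrative
# assumptions; NY AICPA does not mandate a specific statistical methodology.

def selection_rates(outcomes):
    """Compute the favorable-outcome (selection) rate for each group.

    outcomes: dict mapping group name -> (selected_count, total_applicants)
    """
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the EEOC four-fifths heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best
            for group, rate in rates.items()
            if rate / best < threshold}

if __name__ == "__main__":
    # Hypothetical screening results from an AI-assisted hiring tool.
    results = {
        "group_a": (48, 100),  # 48% advanced to interview
        "group_b": (30, 100),  # 30% advanced to interview
    }
    flagged = four_fifths_check(results)
    print(flagged or "No groups fall below the four-fifths threshold.")
    # Output here: {'group_b': 0.625} -> a ratio below 0.8 would warrant
    # closer review and possible corrective action under the employer's
    # risk management program.
```

A ratio below 0.8 for any group is a screening signal that calls for further review and mitigation, not a legal conclusion, but documenting checks like this is the kind of evidence an impact assessment under these bills would likely need to capture.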
The MIT Technology Review also reports that New York Assemblymember Alex Bores has drafted a yet-to-be-released Responsible AI Safety and Education Act (“RAISE Act”), inspired by an unsuccessful California bill (SB 1047), that would require developers to establish safety plans and assessment models for AI systems. From an employment perspective, the RAISE Act would shield whistleblowers at AI companies who share information about problematic AI models from retaliation. If it follows SB 1047’s approach, the RAISE Act may require covered entities to submit a statement of compliance to the state’s Attorney General within 30 days of use of relevant AI systems.
Also pending in New York State are Senate Bill S7623A and Assembly Bill A9315. Both bills would require employers to conduct impact assessments and provide written notice to employees when AI systems are used. If passed, both laws would specifically limit how employers may use, and the consequences they may impose based on, employee data collected via AI systems and monitoring.
Massachusetts
If passed, Massachusetts’ proposed Artificial Intelligence Accountability and Consumer Protection Act (“MA AIACPA”), HD 396, also would regulate high-risk AI systems. MA AIACPA imposes similar obligations on developers and deployers, including the requirements of maintaining risk management programs and conducting impact assessments.
Deployers, including employers, would be required to notify consumers when an AI system materially influences a consequential decision. As part of this notification, employers must provide a statement on the purpose of the AI system, an explanation of how AI influenced the decision, and a process to appeal the decision.
Any corporation operating in the state that uses AI to target specific consumer groups or influence behavior must disclose the methods, purposes, and context in which the AI is used, the ways in which the AI systems are designed to influence consumer behavior, and the details of any third-party entities involved. This public corporate disclosure statement must be available on the website and included in any terms and conditions provided to consumers prior to significant interaction with an AI system. Specifically, corporations must notify individuals when AI targets or materially influences their decisions, and when using algorithms to determine pricing, eligibility, or access to services.
New Mexico
New Mexico’s proposed Artificial Intelligence Act, HB 60, also takes a risk-based approach to AI regulation. Like the bills in New York and Massachusetts, the New Mexico Artificial Intelligence Act contains requirements for both developers and deployers, including maintaining a risk management policy addressing the known or reasonably foreseeable risks of algorithmic discrimination, conducting impact assessments at regular intervals, and publishing a notice on their websites summarizing the AI systems used. If it passes, the Artificial Intelligence Act would become effective July 1, 2026.
Virginia
The Virginia High Risk Artificial Intelligence Developer and Deployer Act, HB 2094, would create operating standards for developers and deployers of high-risk AI systems. Designed to protect against the risks of algorithmic discrimination, these operating standards largely track the proposed legislation in other states. If passed, the act would go into effect on July 1, 2026.
Texas
In the final days of 2024, Texas introduced the Texas Responsible AI Governance Act (TRAIGA), which, like the bills in other states, would regulate the use of AI by requiring: (1) the creation of a risk identification and management policy; (2) semi-annual impact assessments; (3) disclosure and analysis of risk; (4) a description of transparency measures; and (5) human oversight in certain instances. If passed, the Texas bill would take effect on September 1, 2025.
Connecticut
Connecticut’s S.B. 2, while currently stalled, is expected to be re-introduced in 2025. If passed into law, Connecticut employers would need to implement protocols to protect against algorithmic discrimination, conduct impact assessments, and notify employees. Employers that use off-the-shelf AI would not have to ensure the AI product is non-discriminatory, as long as the product is used as intended.
What Employers Should Do Now
As can be seen from the similarities in proposed legislation across the country, a common theme has developed with respect to AI regulation: developers and deployers must implement an AI governance plan aimed at identifying and reducing the risk of algorithmic discrimination and ensuring ongoing monitoring of AI systems and tools. Although these bills are still pending, employers should begin developing comprehensive AI governance strategies now. This proactive approach not only ensures regulatory readiness but also demonstrates an organization’s commitment to ethical and responsible AI use, an important consideration for stakeholders and enforcement agencies alike.