Q1 2025 New York Artificial Intelligence Developments: What Employers Should Know About Proposed and Passed Artificial Intelligence Legislation
Tuesday, April 22, 2025

In the first part of 2025, New York joined other states, such as Colorado, Connecticut, New Jersey, and Texas, in seeking to regulate artificial intelligence (AI) at the state level. Specifically, on 8 January 2025, bills focused on the use of AI decision-making tools were introduced in both the New York Senate and State Assembly. As discussed further below, the New York AI Act, Bill S01169 (the NY AI Act), focuses on addressing algorithmic discrimination by regulating and restricting the use of certain AI systems, including in employment. The NY AI Act would create a private right of action, empowering citizens to bring lawsuits against technology companies. Additionally, the New York AI Consumer Protection Act, Bill A00768 (the Protection Act), would amend the general business law to prevent the use of AI algorithms to discriminate against protected classes, including in employment.

This alert discusses these two pieces of legislation and provides recommendations for employers as they navigate the patchwork of proposed and enacted AI legislation and federal guidance.

Senate Bill 1169

On 8 January 2025, New York State Senator Kristen Gonzalez introduced the NY AI Act because “[a] growing body of research shows that AI systems that are deployed without adequate testing, sufficient oversight, and robust guardrails can harm consumers and deny historically disadvantaged groups the full measure of their civil rights and liberties, thereby further entrenching inequalities.” The NY AI Act would cover all “consumers,” defined as any New York state resident, including residents who are employees and employers. The NY AI Act states that “[t]he legislature must act to ensure that all uses of AI, especially those that affect important life chances, are free from harmful biases, protect our privacy, and work for the public good.” It further asserts that, as the “home to thousands of technology start-ups,” including those that experiment with AI, New York must prioritize safe innovation in the AI sector by providing clear guidance for AI development, testing, and validation both before a product is launched and throughout the product’s life.

Setting the NY AI Act apart from other proposed and enacted state AI laws, the NY AI Act includes a private right of action allowing New York state residents to file claims against technology companies for violations. The NY AI Act also provides for enforcement by the state’s attorney general. In addition, under the proposed law, consumers have the right to opt out of automated decision-making or appeal its results.

The NY AI Act defines “algorithmic discrimination” as any condition in which the use of an AI system contributes to unjustified differential treatment or impacts, disfavoring people based on their actual or perceived age, race, ethnicity, creed, religion, color, national origin, citizenship or immigration status, sexual orientation, gender identity, gender expression, military status, sex, disability, predisposing genetic characteristics, familial status, marital status, pregnancy, pregnancy outcomes, height, weight, reproductive health care or autonomy, status as a victim of domestic violence, or other classification protected under state or federal laws.

The NY AI Act requires that “deployers” using a high-risk AI system for a consequential decision comply with certain obligations. “Deployers” is defined as “any person doing business in [New York] state that deploys a high-risk artificial intelligence decision system.” This includes New York employers. For instance, deployers must disclose to the end user, in clear, conspicuous, and consumer-friendly terms, that they are using an AI system that makes consequential decisions at least five business days prior to the use of such system. The deployer must allow sufficient time and opportunity, in a clear, conspicuous, and consumer-friendly manner, for the consumer to opt out of the automated process and have a human representative make the decision. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system, and the deployer must render a decision to the consumer within 45 days.

Further, any deployer that employs a high-risk AI system for a consequential decision must inform the end user within five days, in a clear, conspicuous, and consumer-friendly manner, if a consequential decision has been made entirely by, or with the assistance of, an automated system. The deployer must then provide and explain a process for the end user to appeal the decision, which must, at minimum, allow the end user to (a) formally contest the decision, (b) provide information to support their position, and (c) obtain meaningful human review of the decision.

Additionally, deployers must complete an audit before using a high-risk AI system, again six months after deployment, and at least every 18 months thereafter for each calendar year the system remains in use. Regardless of the findings, deployers must deliver all completed audits to the attorney general.

As mentioned above, the NY AI Act may be enforced by the attorney general or through a private right of action brought by consumers. If a violation occurs, the attorney general may seek an injunction to enjoin and restrain its continuance. If the court determines that a violation occurred, it may impose a civil penalty of not more than US$20,000 for each violation. Further, any person harmed by a violation of the NY AI Act may bring a private right of action, and the court shall award compensatory damages and legal fees to the prevailing party.

The NY AI Act also offers whistleblower protections, prohibits social scoring AI systems, and prohibits waiving legal rights.

Assembly Bill 768

Also on 8 January 2025, New York State Assembly Member Alex Bores introduced the Protection Act. Like the NY AI Act, the Protection Act seeks to prevent the use of AI algorithms to discriminate against protected classes. 

The Protection Act defines “algorithmic discrimination” as any condition in which the use of an AI decision system results in any unlawful differential treatment or impact that disfavors any individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, English language proficiency, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected pursuant to state or federal law.

The Protection Act requires a “bias and governance audit” consisting of an impartial evaluation by an independent auditor, which shall include, at a minimum, the testing of an AI decision system to assess such system’s disparate impact on employees because of such employee’s age, race, creed, color, ethnicity, national origin, disability, citizenship or immigration status, marital or familial status, military status, religion, or sex, including sexual orientation, gender identity, gender expression, pregnancy, pregnancy outcomes, and reproductive healthcare choices.

If enacted, beginning 1 January 2027, the Protection Act would require each deployer of a high-risk AI decision system to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. Specifically, deployers would be required to implement and maintain a risk management policy and program that is regularly reviewed and updated. The Protection Act references external sources employers can look to for guidance and compliance, such as the “AI Risk Management Framework” published by the National Institute of Standards and Technology and ISO/IEC 42001 published by the International Organization for Standardization.

Also beginning 1 January 2027, employers deploying a high-risk AI decision system that makes, or is a substantial factor in making, a consequential decision concerning a consumer would have to:

  • Notify the consumer that the deployer has used a high-risk AI decision system to make, or be a substantial factor in making, a consequential decision.
  • Provide to the consumer a statement disclosing: (i) the purpose of the high-risk AI decision system; and (ii) the nature of the consequential decision.
  • Make available a statement summarizing the types of high-risk AI decision systems that are currently used by the deployer.
  • Explain how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination.
  • Notify the consumer of the nature, source, and extent of the information collected and used by the deployer.

New York City Council Local Law Int. No. 1894-A

While the NY AI Act and Protection Act are not yet enacted, New York City employers should ensure they are following Local Law Int. No. 1894-A (the NYC AI Law), which became effective on 5 July 2023. The NYC AI Law aims to protect job candidates and employees from unlawful discriminatory bias based on race, ethnicity, or sex when employers and employment agencies use automated employment decision-making tools (AEDTs) as part of employment decisions.

Compared to the proposed state laws, the NYC AI Law applies narrowly to employers and employment agencies in New York City that use AEDTs to screen candidates or employees for positions located in the city. Like the proposed state legislation, it requires bias audits and notice whenever an AEDT is used: candidates and employees must be notified of the use of an AEDT at least 10 business days in advance. Under the NYC AI Law, an AEDT is:

[A]ny computational process, derived from machine learning, statistical modeling, data analytics, or [AI], that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.

The NYC AI Law requires audits to be completed by an independent auditor who details the sources of data (testing or historical) used in the audit. The results of the bias audit must be published on the employer’s or employment agency’s website, or an active hyperlink to a website with this information must be provided, for at least six months after the latest use of the AEDT for an employment decision. The summary of results must include (i) the date of the most recent bias audit of the AEDT; (ii) the source and explanation of the data; (iii) the number of individuals the AEDT assessed that fall within an unknown category; and (iv) the number of applicants or candidates, the selection or scoring rates, as applicable, and the impact ratios for all categories. Penalties for noncompliance range from US$500 to US$1,500 per violation, with no cap on total civil penalties. Further, the NYC AI Law authorizes a private right of action, in court or through administrative agencies, for aggrieved candidates and employees.
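To illustrate the arithmetic behind the selection rates and impact ratios reported in a bias audit summary, the following sketch computes them for made-up applicant counts. The function names and example figures are hypothetical, not from the NYC AI Law or DCWP rules; it assumes the commonly described approach of dividing each category’s selection rate by the highest category selection rate.

```python
# Hypothetical sketch of bias-audit arithmetic (illustrative names and numbers,
# not an official DCWP calculation).

def selection_rates(selected: dict[str, int], assessed: dict[str, int]) -> dict[str, float]:
    """Selection rate per category: number selected divided by number assessed."""
    return {cat: selected[cat] / assessed[cat] for cat in assessed}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio per category: its selection rate divided by the highest rate."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Made-up example: 200 applicants assessed in each category; 80 and 60 selected.
rates = selection_rates({"male": 80, "female": 60}, {"male": 200, "female": 200})
ratios = impact_ratios(rates)
print(ratios)  # male: 1.0, female: 0.75
```

An impact ratio well below 1.0 for a category is the kind of disparity an independent auditor would flag in the published summary.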

Takeaways for Employers 

Employers should work to be in compliance with the existing NYC AI Law and prepare for future state legislation.

Employers should: 

  • Assess AI Systems: Identify any AI systems your company develops or deploys, particularly those used in consequential decisions related to employment.
  • Review Data Management Policies: Ensure your data management policies comply with data security protection standards.
  • Prepare for Audits: Familiarize yourself with the audit requirements and begin preparing for potential audits of high-risk AI systems.
  • Develop Internal Processes: Establish internal processes for employee disclosures related to AI system violations.
  • Monitor Legislation: Stay informed about proposed bills, such as A.B. 3265 and A.B. 3356, and continually review guidance from federal agencies.

Our Labor, Employment, and Workplace Safety lawyers regularly counsel clients on a wide variety of concerns related to emerging issues in labor, employment, and workplace safety law and are well-positioned to provide guidance and assistance to clients on AI developments.

Footnotes

1. Please see the following alert for more information on the proposed Texas legislation: Kathleen D. Parker, et al., The Texas Responsible AI Governance Act and Its Potential Impact on Employers, K&L GATES HUB (Jan. 13, 2025), https://www.klgates.com/The-Texas-Responsible-AI-Governance-Act-and-Its-Potential-Impact-on-Employers-1-13-2025.

2. S. 1169, 2025-2026 Gen. Assemb., Reg. Sess., § 85 (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/S1169.

3. A.B. 768, 2025-2026 Gen. Assemb., Reg. Sess., § 1550 (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A768.

4. S. 1169, supra note 2, § 85.

5. Id. § 2(b).

6. Id. § 2(c).

7. Please see the following alert for more information on state AI laws: Michael J. Stortz, et al., Litigation Minute: State Generative AI Statutes and the Private Right of Action, K&L GATES HUB (Jun. 17, 2024), https://www.klgates.com/Litigation-Minute-State-Statutes-and-the-Private-Right-of-Action-6-17-2024.

8. S. 1169, supra note 2, § 85(1).

9. Id. § 85(12). “High-Risk AI System” means any AI system that, when deployed: (A) is a substantial factor in making a consequential decision; or (B) will have a material impact on the statutory or constitutional rights, civil liberties, safety, or welfare of an individual in the state.

10. Id. § 85(4). “Consequential Decision” means a decision or judgment that has a material, legal or similarly significant effect on an individual’s life relating to the impact of, access to, or the cost, terms, or availability of, any of the following: (A) employment, workers’ management, or self-employment, including, but not limited to, all of the following: (i) pay or promotion; (ii) hiring or termination; and (iii) automated task allocation. (B) education and vocational training, including, but not limited to, all of the following: (i) assessment or grading, including, but not limited to, detecting student cheating or plagiarism; (ii) accreditation; (iii) certification; (iv) admissions; and (v) financial aid or scholarships. (C) housing or lodging, including rental or short-term housing or lodging. (D) essential utilities, including electricity, heat, water, internet or telecommunications access, or transportation. (E) family planning, including adoption services or reproductive services, as well as assessments related to child protective services. (F) health care or health insurance, including mental health care, dental, or vision. (G) financial services, including a financial service provided by a mortgage company, mortgage broker, or creditor. (H) law enforcement activities, including the allocation of law enforcement personnel or assets, the enforcement of laws, maintaining public order or managing public safety. (I) government services. (J) legal services.

11. A.B. 768, supra note 3, § 1550(7).

12. S. 1169, supra note 2, § 86(a).

13. Id. § 86(2).

14. Id. § 89(b)(1).

15. Id. § 89(b)(2).

16. Id. §§ 86(b), 89(a), 86(4).

17. A.B. 768, supra note 3, § 1550(1).

18. Id. § 1550(3).

19. Id. § 1552(1)(a).

20. Id. § 1552(2)(a).

21. Id. § 1552(5)(a).

22. Id. § 1552(6)(a).

23. N.Y.C. Dep’t of Consumer & Worker Prot., Automated Employment Decision Tools (AEDT) – Frequently Asked Questions, https://www.nyc.gov/assets/dca/downloads/pdf/about/DCWP-AEDT-FAQ.pdf.

24. Please see the following alert for more information: Maria Caceres-Boneau, et al., New York Proposal to Protect Workers Displaced by Artificial Intelligence, K&L GATES HUB (Feb. 20, 2025), https://www.klgates.com/New-York-Proposal-to-Protect-Workers-Displaced-by-Artificial-Intelligence-2-18-2025.

25. A.B. 3265, 2025-2026 Gen. Assemb., Reg. Sess. (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A3265.

26. A.B. 3356, 2025-2026 Gen. Assemb., Reg. Sess. (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A3356.
