Connecticut Pending AI Legislation: Comprehensive AI Legislation to Impact All Businesses Utilizing Artificial Intelligence
Wednesday, May 29, 2024

Connecticut is set to pass SB 2, titled “An Act Concerning Artificial Intelligence,” which presents a comprehensive legislative framework for regulating AI systems within the state. This act addresses various aspects of AI deployment and development, focusing on mitigating algorithmic discrimination, ensuring transparency, and promoting accountability among AI developers and deployers.

If you do any sort of business utilizing AI in the state of Connecticut, this will probably affect you!

Here are the highlights:

Section 1: Definitions (I’ll leave these here for reference. You’ll need them later.)

To start, “algorithmic discrimination” refers to any condition in which an AI system causes unjustified differential treatment based on characteristics such as age, color, disability, ethnicity, and other protected classes. It excludes self-testing by developers and efforts to increase diversity.

An “artificial intelligence system” is defined as a machine-based system that generates outputs, such as content, decisions, predictions, or recommendations, based on received inputs, influencing physical or virtual environments.

The term “consequential decision” refers to decisions that significantly affect a consumer’s access to essential services, including criminal justice, education, employment, financial services, healthcare, housing, insurance, or legal services.

A “consumer” is any individual residing in Connecticut.

“Deploy” means using a generative or high-risk AI system, and a “deployer” is any person or entity deploying these systems within the state.

A “developer” refers to any person or entity developing or significantly modifying AI systems in Connecticut.

“Generative artificial intelligence system” encompasses AI systems capable of producing or manipulating synthetic digital content.

“High-risk artificial intelligence system” refers to AI systems designed to make or influence consequential decisions.

“Intentional and substantial modification” involves deliberate changes to AI systems that affect compliance or purpose, potentially increasing discrimination risks.

“Synthetic digital content” includes any AI-generated or manipulated digital content (e.g., audio, images, text, or video).

Section 2: Developer Responsibilities

From July 1, 2025, developers must use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination. Compliance with the act creates a rebuttable presumption of reasonable care. Developers offering high-risk AI systems must provide deployers with detailed documentation, including system limitations, training data, and measures to mitigate discrimination risks. This documentation helps deployers understand and manage these systems. Developers must also publicly disclose the types of high-risk AI systems they develop and how they manage discrimination risks, ensuring transparency and accountability.

Section 3: Deployer Responsibilities

Starting July 1, 2025, deployers of high-risk AI systems must implement a risk management policy and program to identify and mitigate discrimination risks. This policy should align with recognized risk management frameworks. Deployers or third-party contractors must complete impact assessments before deploying high-risk AI systems and after any significant modifications. These assessments analyze the potential for algorithmic discrimination and document steps taken to address such risks. Deployers must notify consumers when high-risk AI systems make consequential decisions affecting them and review AI deployments annually to ensure they do not cause discrimination.

Section 4: General-Purpose AI Model Requirements

Developers of general-purpose AI models must create and maintain technical documentation, including details on training and testing processes and evaluation results. This documentation helps deployers understand the AI model’s capabilities and limitations. Developers must also provide documentation enabling deployers to comply with their obligations under the act. Additionally, developers must establish a policy to respect federal and state copyright laws and make a summary of the content used to train AI models publicly available. These requirements promote transparency and accountability in AI development.

Section 5: Disclosure of AI Interaction

AI systems interacting with consumers must disclose their AI nature unless it is reasonably obvious or the system is not directly consumer-facing. This transparency ensures consumers are aware when they are interacting with AI.

Sound familiar? See Utah.

Section 6: Marking Synthetic Digital Content

Developers of AI systems generating synthetic digital content must ensure such content is clearly marked as synthetic in a machine-readable format. This marking must be detectable by consumers at the first interaction, ensuring they are aware the content is AI-generated. Exemptions apply to systems that perform assistive editing, make only minor alterations, or are used for crime prevention.

Section 7: Consumer Disclosure

Deployers must disclose to consumers that content has been artificially generated or manipulated at the first interaction. Exceptions include artistic, satirical, or fictional works, and text that has undergone human review.

Section 8: Exemptions and Protections

Developers and deployers are not required to disclose trade secrets and must comply with all relevant federal, state, and municipal laws. The act also emphasizes cooperation with law enforcement and compliance with subpoenas.

Section 9: Enforcement Authority

The Attorney General and the Commissioner of Consumer Protection have exclusive authority to enforce SB 2’s provisions. They must issue a notice of violation before taking enforcement action, providing a 60-day period for developers and deployers to cure violations. Developers and deployers can defend against enforcement actions by demonstrating compliance with recognized AI risk management frameworks.

Section 11: CHRO Enforcement of Algorithmic Discrimination

This section mandates that the Commission on Human Rights and Opportunities (“CHRO”) enforce provisions related to algorithmic discrimination – specifically ensuring deployers of high-risk AI systems use reasonable care to protect consumers from algorithmic discrimination risks.

Section 12: CHRO Powers and Duties Update

The powers and duties of the CHRO are updated to include enforcing compliance with AI impact assessments and managing risks related to algorithmic discrimination.

Section 16: Artificial Intelligence Advisory Council

The establishment of an Artificial Intelligence Advisory Council is outlined to engage stakeholders, study AI laws from other states, and recommend legislation. This council includes representatives from various sectors.

Section 18: Applicability of Election Laws to AI

This section clarifies that existing election laws apply to AI-generated content.

Section 19: Prohibition on Deceptive Media in Elections

This section prohibits the distribution of deceptive media generated by AI during election periods. Exceptions include media with clear disclaimers and parodies. Violations can lead to criminal charges and civil actions.

Again, just the highlights…

Here is the whole thing: 2024SB-00002-R03-SB.PDF (ct.gov)
