Massachusetts AG Says Consumer Protection, Civil Rights, and Data Privacy Laws Apply to Artificial Intelligence
Wednesday, April 17, 2024

Massachusetts Attorney General Andrea Campbell issued an advisory (“Advisory”) warning developers, suppliers, and users of artificial intelligence and algorithmic decision-making systems (collectively, “AI”) about their respective obligations under Massachusetts’ Consumer Protection Act, Anti-Discrimination Law, Data Security Law, and related regulations. There is not much surprising here, as the Advisory addresses many of the same issues raised in the White House Executive Order and Federal Trade Commission (FTC) guidance. It is helpful, however, in clarifying for consumers, developers, suppliers, and users of AI systems which specific aspects of existing state laws and regulations apply to AI, and in confirming that these laws and regulations apply to AI to the same extent that they apply to any other product or application in the stream of commerce.

The Advisory recognizes the benefits of AI but flags risks to consumers, including bias, lack of transparency or explainability, and implications for data privacy, among others. It also notes deceptive uses, including chatbots used to perpetrate scams or to surreptitiously collect sensitive personal data from consumers, and deepfakes and voice cloning used to deceive or mislead a listener about the speaker’s true identity. The Advisory seeks to address, and ultimately mitigate, these risks by clarifying for consumers, developers, suppliers, and users of AI systems that existing state laws and regulations apply to AI. Among other things, the Advisory provides the following non-exhaustive list of examples.

It is unfair or deceptive to:

  • Falsely advertise the quality, value, or usability of AI systems. 940 Code Mass. Regs. 3.02(2). An example of false advertising is where a supplier claims that an AI system has functionality that it does not possess;
  • Supply an AI system that is defective, unusable, or impractical for the purpose advertised. 940 Code Mass. Regs. 3.02(4)(d). Suppliers have an obligation to ensure that an AI system performs as intended. American Shooting Sports Council, Inc. v. Attorney Gen., 429 Mass. 871, 877 (1999) (the failure to meet fundamental performance standards is particularly “unfair” or “deceptive” where “harmful or unexpected risks or dangers inherent in the product, or latent performance inadequacies, cannot be detected by the average user or cannot be avoided by adequate disclosures or warnings.”);
  • Misrepresent the reliability, manner of performance, safety, or condition of an AI system. 940 Code Mass. Regs. 3.05(1). Examples of misrepresentation include claims or representations that an AI system is fully automated when its functions are performed in whole or in part by humans, as well as untested and unverified claims that an AI system performs functions with equal accuracy to a human, is more capable than a human at performing a given function, is superior to non-AI products, is free from bias, is not susceptible to malicious use by a bad actor, or is compliant with state and federal law;
  • Offer for sale or use an AI system in breach of warranty, in that the system is not fit for the ordinary purposes for which such systems are used or is unfit for the specific purpose for which it is sold where the supplier knows of that purpose. 940 Code Mass. Regs. 3.01; 940 Code Mass. Regs. 3.08(2); Maillet v. ATF-Davidson Co., 407 Mass. 185, 193 (1990) (“Generally, a breach of warranty constitutes a violation of G.L. c. 93A, § 2.”). For example, offering for sale or use an AI system that is not robust enough to perform appropriately in a real-world environment, as compared to a testing environment, is unfair and deceptive;
  • Misrepresent audio or video content of a person for the purpose of deceiving another into engaging in a business transaction or supplying personal information as if to a trusted business partner, as in the case of deepfakes, voice cloning, or chatbots used to engage in fraud. 940 Code Mass. Regs. 3.05(1); and
  • Fail to comply with Massachusetts “statutes, rules, regulations or laws, meant for the protection of the public’s health, safety or welfare.” 940 Code Mass. Regs. 3.16(3).

The Advisory further warns:

  • Selling or using an AI system in a manner that violates federal consumer protection statutes, including the Federal Trade Commission Act, may be a violation of Chapter 93A, § 2. See 940 Code Mass. Regs. 3.16(4). The Advisory notes that the FTC has taken the position that deceptive or misleading claims about the capabilities of an AI system, and the sale or use of AI systems that cause harm to consumers, violate the Federal Trade Commission Act. This includes any AI that impersonates governments, businesses, or their officials.
  • AI systems must also comply with the Commonwealth’s Standards for the Protection of Personal Information of Residents of the Commonwealth, promulgated under Chapter 93H. This means that AI developers, suppliers, and users must take the necessary and appropriate steps to safeguard personal information used by those systems, see 201 Code Mass. Regs. 17.03 & 17.04, and are expected to comply with the breach notification requirements set forth in the statute. Violations of Chapter 93H are expressly subject to enforcement under Chapter 93A. G.L. c. 93H, § 6.
  • The Commonwealth’s Anti-Discrimination Law, G.L. c. 151B, § 4, prohibits developers, suppliers, and users of AI systems from deploying technology that discriminates against residents based on a legally protected characteristic. This includes algorithmic decision-making that relies on or uses discriminatory inputs and that produces discriminatory results, such as those that have the purpose or effect of disfavoring or disadvantaging a person or group of people based on a legally protected characteristic. Violations of Chapter 151B may constitute an unfair and deceptive act or practice, and thus may give rise to liability under Chapter 93A. See 940 Code Mass. Regs. 3.16(3).
  • State attorneys general are empowered to enforce certain federal consumer protection, anti-discrimination, and other laws applicable to AI. See 12 U.S.C. §§ 5481, 5552. For example, the adverse action notification requirements under the federal Equal Credit Opportunity Act (ECOA), the primary federal law prohibiting discrimination in credit, apply to AI models. This means that covered creditors must provide accurate and specific reasons to consumers explaining why their loan applications were denied, including in circumstances where the creditor uses AI models.

This Advisory is another example of the wave of guidance from states on how they believe existing laws apply to AI, even as some states also craft new, AI-specific laws to cover activities that may not fit neatly into existing law.
