Massachusetts Attorney General Clarifies Position on Artificial Intelligence
Wednesday, June 26, 2024

The Massachusetts Attorney General’s Office (AGO) recently issued an advisory clarifying that existing Massachusetts law applies to artificial intelligence (AI) to the same extent as any other product in the stream of commerce.

Massachusetts Attorney General Andrea Campbell became the first attorney general in the country to issue such guidance on AI. The advisory opens by acknowledging AI’s potential societal benefits and noting the Commonwealth’s special position in guiding the technology’s development.

However, the advisory’s central purpose is to warn AI developers, suppliers, and users that Massachusetts law, including the Massachusetts Consumer Protection Act (Chapter 93A), applies to AI. The Act makes it unlawful to engage in unfair or deceptive business acts or practices in Massachusetts.

The AGO shared the following non-exhaustive list of unfair or deceptive AI business acts:

  • Falsely advertising the quality, value, or usability of AI systems.
  • Supplying an AI system that is defective, unusable, or impractical for the purpose advertised.
  • Misrepresenting the reliability, manner of performance, safety, or conditions of an AI system, including statements that the system is free from bias.
  • Offering for sale an AI system in breach of warranty because the system is not fit for the ordinary purposes for which such systems are used, or is unfit for a specific purpose for which it is sold where the supplier knows of that purpose.
  • Misrepresenting audio or video of a person for the purpose of deceiving another into engaging in a business transaction or supplying personal information as if to a trusted business partner, as in the case of deepfakes, voice cloning, or chatbots used to engage in fraud.
  • Failing to comply with Massachusetts statutes, rules, regulations, or laws meant for the protection of the public’s health, safety, or welfare.

The advisory also includes an important reminder that AI systems must comply with privacy, anti-discrimination, and federal consumer protection laws.

AI Regulation Will Continue to Increase

You can reasonably expect that AI will increasingly be the subject of new regulation and litigation at the state and federal levels. At the national level, the Biden administration issued an Executive Order in October 2023 directing various federal agencies to adjust to the increasing utility and risks of artificial intelligence. In the wake of that Executive Order, the Federal Trade Commission has already taken its first steps toward AI regulation with a proposed rule prohibiting the use of AI to impersonate human beings. The Department of Labor has announced principles that will apply to the development and deployment of AI systems in the workplace, and other federal agencies have also taken action.

In 2024, Colorado and Utah state lawmakers passed their own AI laws that will likely serve as models for other states considering AI regulations. Both the Colorado Artificial Intelligence Act and Utah’s Artificial Intelligence Policy Act bring AI use within the scope of existing state consumer protection laws. Consistent with the AGO’s warning, plaintiffs have already begun asserting privacy and consumer protection claims based on AI technology used on business websites.

At the international level, the EU Artificial Intelligence Act of March 13, 2024, is an extensive AI regulation that separates AI applications into different risk levels and regulates them accordingly. Unacceptable-risk applications are banned, while high-risk applications are subject to extensive precautionary measures and oversight. AI developers and suppliers doing business in Europe should consider whether they are subject to the EU AI Act and ensure their products comply.

Preparing for AI Compliance, Enforcement, and Litigation Risks

There is considerable uncertainty about how AI will be deployed in the future and how legislators, regulators, and courts will apply new and existing laws to the technology.

However, it is likely that compliance obligations and enforcement and litigation risks will continue to increase in the coming years. Businesses should therefore consult with experienced counsel before deploying or contracting to use new AI tools to ensure they are taking effective steps to mitigate these risks. Organizations should consider the following non-exhaustive list of measures:

  • Developing an internal AI policy governing the organization’s and its employees’ use of AI in the workplace.
  • Developing and/or updating due diligence practices to ensure that the organization is aware of how third-party vendors are using, or plan to use, AI, including due diligence concerning what data is collected, transmitted, stored, and used when training AI tools with machine learning.
  • Actively monitoring state and federal laws for new legal developments affecting the organization’s compliance obligations.
  • Ensuring that the organization and its third-party vendors have appropriate and ongoing governance processes in place, including continuous monitoring and testing for AI quality and the absence of impermissible bias (one narrow illustration of such a check follows this list).
  • Providing clear disclosure language concerning AI tools, functions, and features, including specific notifications when a customer engages with an AI assistant or tool.
  • Modifying privacy policies and terms and conditions to explain the use of AI technology and what opt-out or dispute resolution terms are available to customers.
  • Reviewing and updating existing third-party contracts for AI-related terms, disclosure obligations concerning AI and risk, and liability allocation related to AI.
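
By way of illustration only, the following minimal Python sketch shows one narrow form that the bias monitoring mentioned above might take: a periodic check that a model’s favorable-outcome rates do not drift too far apart across groups. The metric (demographic parity difference), the 0.2 threshold, and the sample data are all assumptions chosen for demonstration; they are not drawn from the AGO advisory, and a real testing program should be designed with counsel and domain experts.

def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest favorable-outcome rate
    (1 = favorable, 0 = not) across the supplied groups."""
    counts = {}  # group -> (total seen, favorable outcomes)
    for outcome, group in zip(outcomes, groups):
        total, favorable = counts.get(group, (0, 0))
        counts[group] = (total + 1, favorable + outcome)
    rates = {g: fav / tot for g, (tot, fav) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions (e.g., approvals) and applicant groups.
outcomes = [1, 1, 1, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]

THRESHOLD = 0.2  # assumed internal tolerance, set by policy rather than statute
gap = demographic_parity_difference(outcomes, groups)
if gap > THRESHOLD:
    print(f"Flag for review: parity gap {gap:.2f} exceeds tolerance {THRESHOLD}")
else:
    print(f"Within tolerance: parity gap {gap:.2f}")

In practice, a governance process would run checks like this on real outcome data on a schedule, log the results, and escalate any flags to the oversight function described in the list above.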