The Office of the Attorney General of Texas (“OAG”) announced a “first-of-its-kind healthcare generative AI” settlement with Pieces Technology, Inc. (“Pieces”). The settlement resolved the Texas OAG’s allegations that Pieces’ advertising and marketing claims about the accuracy of its generative artificial intelligence (GenAI) products violated the Texas Deceptive Trade Practices – Consumer Protection Act (“DTPA”), Tex. Bus. & Com. Code Ann. § 17.58. The Texas OAG states in its press release that the Pieces investigation is a “First-of-its-Kind Healthcare Generative AI Investigation.”
Pieces’ AI products are offered to hospitals and other inpatient healthcare facilities to, among other things, assist providers with “summariz[ing], chart[ing], and draft[ing] clinical notes . . . in the Electronic Health Record[(“EHR”)] [system].”
The OAG alleged that Pieces advertised and marketed the accuracy of its AI technology with metrics of a “critical hallucination rate” and a “severe hallucination rate” of less than 0.001%. For GenAI, a “hallucination” is confidently stated but erroneous or false content. Misrepresenting the rate at which a GenAI system produces hallucinations is significant because that rate directly impacts the reliability and safety of GenAI outputs that relate to diagnoses, treatments and other decisions regarding patient care, as well as the associated health recordkeeping.
Although the DTPA allows for civil penalties of up to $10,000 per violation, injunctive relief and other remedies, the assurance of voluntary compliance (“Assurance”) entered into by the OAG and Pieces does not include any civil penalties. It does, however, require that, for five years, Pieces:
- Provide clear and conspicuous disclosure, in advertising and marketing materials, of its output metrics, benchmarks and other analytical measurements that are “consistent with, and substantiated by, the [findings of an] independent, third-party auditor.”
- Refrain from false, misleading, or otherwise unsubstantiated representations regarding the accuracy, reliability, or efficacy of its products and services.
- Provide clear and conspicuous disclosure to current and future customers of “any known or reasonably knowable harmful or potentially harmful uses or misuses of its products and services.” The required disclosures must include certain enumerated items such as, for example, the type of data/models used to train the AI model; any known or reasonably knowable limitations and risks to patients and healthcare providers of the products and services; and all other documentation reasonably necessary for a user to understand the nature and purpose of an output generated by a product or service.
- Respond to requests for information from the OAG regarding Pieces’ compliance with the Assurance within 30 days of receipt of a written request.
These remedial commitments also can serve as compliance guideposts for other businesses.
The Texas OAG’s action came a few days after the federal Government Accountability Office (GAO) published its report on the benefits and risks of GenAI use in healthcare, and a few days before the Federal Trade Commission (FTC) announced several enforcement actions aimed at cracking down on the use of AI (more on the FTC actions in a future post on Privacy World). Together with the GAO report, the OAG’s settlement with Pieces underscores that regulators at all levels of government are closely scrutinizing the use of GenAI in healthcare delivery to ensure claims are supported by reliable data, testing and analysis.