Texas Attorney General Ken Paxton announced what he calls a “first-of-its-kind settlement” with Pieces Technologies, a Dallas-based artificial intelligence healthcare technology company. The assurance of voluntary compliance (the “Assurance”) resolves allegations that the company deployed its products at several Texas hospitals after making a series of false and misleading statements about the accuracy and safety of those products.
The Allegations
Pieces Technologies offers a product that analyzes hospitals’ real-time patient healthcare data, leveraging generative AI to “summarize” patients’ conditions and treatment for hospital staff.
In advertising and marketing its products, Pieces Technologies made representations about their accuracy, claiming an error rate, or “severe hallucination rate,” of “<1 per 100,000.” Attorney General Paxton’s investigation found these metrics were “likely inaccurate and may have deceived hospitals about the accuracy and safety of the company’s products.” As a result of the investigation, the Attorney General alleged that Pieces Technologies violated the Texas Deceptive Trade Practices-Consumer Protection Act (DTPA).

Pieces Technologies, however, told Law360 on Wednesday that the announcement is “wholly inconsistent” with the assurance of voluntary compliance the company entered into, stating that “the [assurance of voluntary compliance] makes no mention of the safety of Pieces products, nor is there evidence indicating that the public interest has ever been at risk.”
The Settlement
As part of the Assurance, for five years after its effective date, Pieces Technologies must:
- Provide Clear and Conspicuous Disclosures: Clearly and conspicuously disclose (1) the meaning or definition of any metric, benchmark, or similar measurement used in its marketing or advertising; and (2) the method, procedure, or other process used to calculate that metric, benchmark, or measurement. Alternatively, the company may retain an independent third-party auditor to assess and substantiate any such metrics.
- Avoid Misleading Claims: Avoid any false, misleading, or unsubstantiated representations regarding any feature, characteristic, function, testing, or appropriate use of any of its products.
- Disclose Financial Ties: Reveal any financial relationships with individuals involved in marketing or advertising.
- Inform Customers of Risks: Provide all its current and future customers, in connection with any of its products or services, documentation that clearly and conspicuously discloses any known or reasonably knowable harmful or potentially harmful uses or misuses of its products or services. This must include, at a minimum:
  - the type of data and/or models used to train its products and services;
  - a detailed explanation of the intended purpose and use of its products and services, as well as any training or documentation needed to facilitate proper use of its products and services;
  - any known, or reasonably knowable, limitations of its products or services, including risks to patients and healthcare providers from the use of the product or service, such as the risk of physical or financial injury in connection with a product or service’s inaccurate output;
  - any known, or reasonably knowable, misuses of a product or service that can increase the risk of inaccurate outputs or increase the risk of harm to individuals; and
  - for each product or service, all other documentation reasonably necessary for a user to understand the nature and purpose of an output generated by a product or service, monitor for patterns of inaccuracy, and reasonably avoid misuse of the product or service.
The Impact
As this Assurance demonstrates, the responsible development and marketing of generative AI is drawing regulatory scrutiny, particularly in high-stakes sectors like healthcare. Companies must be transparent about their products’ capabilities and limitations, both in direct communications with customers and in marketing materials.