AI “Hallucinations” Can Inflict Real-World Pain
Tuesday, June 13, 2023

In a lawsuit many have dubbed “the first of its kind,” a radio host in Georgia claims that OpenAI, the company behind the artificial intelligence chat platform “ChatGPT,” is liable for defamation.

The plaintiff, Mark Walters, filed a complaint last week in the Superior Court of Gwinnett County, Georgia, alleging that ChatGPT published “false,” “malicious” and “libelous matter” about Walters to a third-party journalist, Fred Riehl.

According to the complaint, Riehl, a ChatGPT subscriber, was using the service to research a federal lawsuit in Washington brought by the Second Amendment Foundation (SAF) against Washington’s attorney general and an assistant attorney general. Riehl asked the chatbot to summarize the allegations in SAF’s complaint, and the chatbot replied that Walters was a defendant in the case, served as SAF’s treasurer and chief financial officer, and was accused of defrauding and embezzling funds from the organization.

However, every statement that ChatGPT made about Walters was apparently false. Walters notes that he was not a defendant in SAF’s case, he never served as its treasurer or chief financial officer, and he was not accused of defrauding the organization.

Walters also alleges that when Riehl asked ChatGPT to support its statements about Walters by providing both an excerpt of the complaint and the complaint in its entirety, the bot obliged. But the text it produced was also completely false, apparently bearing “no resemblance to the actual complaint” Riehl was writing about.

To his credit, Riehl contacted one of the actual parties to the lawsuit, who confirmed that Walters was not involved in the matter at all. Riehl did not publish any of the allegedly defamatory statements about Walters, and Walters does not name Riehl as a defendant in his case against OpenAI.

However, the facts at issue in the case highlight the potential hazards for users of AI chatbots. While the suit may mark the first time a court considers whether a company providing AI tools can be liable under defamation law, the law seems clear that users of those tools could incur serious risks of their own. For example, under the republication liability rule followed by many states, including Illinois, the republisher of a defamatory statement from another source (even a bot) could be liable for defamation, even if the republisher says “I got this [false information] from ChatGPT,” or from a seemingly legitimate source that a bot actually fabricated. See, e.g., Brennan v. Kadner, 351 Ill. App. 3d 963, 970, 814 N.E.2d 951, 959 (2004) (discussing the republication rule in Illinois). Of course, users who are sued for republishing such information may be able to rely on existing defenses under the First Amendment or Section 230 of the Communications Decency Act, but the potential risks of relying on chatbots are concerning.

And this is not the first time AI programs have reportedly shared false information with users. In April, a mayor in Australia publicly mulled the idea of suing OpenAI if the company did not address ChatGPT’s inaccurate claims implicating him in a bribery scandal. Around the same time, a law professor at The George Washington University published an op-ed describing how the service falsely accused him of sexually harassing his students. And just this past week, a personal injury attorney in New York was forced to explain his actions after mistakenly relying on ChatGPT for legal research and citing entirely nonexistent caselaw in a brief.

In response to these concerns, companies behind artificial intelligence chatbots have acknowledged that their services sometimes respond to user prompts with “hallucinations,” the industry term for fabricated output. OpenAI, for example, said in a post published last month that it would train its models to better detect hallucinations going forward.

It will be interesting to see how OpenAI responds to Walters’s lawsuit and whether ChatGPT, and chatbots like it, will become more factually accurate as the technology develops. In the interim, however, users should be especially cautious about chatbot “hallucinations” resulting in real-world liability.
