
Can AI Defame? We May Know Sooner Than You Think.

Client Alert | 4 min read | 09.21.23

Defamation law carries significant civil liability risk.  Just ask the folks at Fox News, who paid $787.5 million to settle a defamation suit over election-related falsehoods about Dominion Voting Systems, or Alex Jones, whom a Connecticut jury recently found liable for $965 million in defamation damages to Sandy Hook victims’ families.

Proving a defamation claim requires a plaintiff to show that a defendant made a damaging false statement of fact to a third party – either with “actual malice” or with negligence or recklessness as to the truth of the statement.  Generative AI tools powered by large language models (“LLMs”), which draw on structures and patterns in language datasets to generate narrative text in response to a user’s request, raise particularly interesting – and, so far, unresolved – questions about how these elements apply to emerging AI technologies.

As generative AI has gained broader popular use over the past year, prominent examples of so-called “hallucinations” have emerged – that is, instances in which an LLM-backed AI system generates information that, although it sounds factual, has no basis in reality.  For example, earlier in 2023, the legal press widely reported the case of two attorneys who used generative AI to prepare a series of legal briefs, only to discover – after filing the papers with the court – that the AI-generated caselaw citations were entirely fabricated.

But how might “hallucinations” raise potential defamation liability?  This is no longer a hypothetical: the question is being actively litigated in a first-in-the-nation case before a federal court in the Northern District of Georgia.  In June 2023, radio host Mark Walters sued AI powerhouse OpenAI for defamation under Georgia law.  See Walters v. OpenAI, LLC, No. 1:23-cv-03122 (N.D. Ga.).  Mr. Walters alleges that OpenAI’s ChatGPT “hallucinated” facts, including allegedly unfounded statements about past financial improprieties, when a reporter used ChatGPT to research and summarize a legal complaint.  Id. at ECF No. 1.

OpenAI filed its initial motion to dismiss in late July.  Id. at ECF No. 12.  The motion argues that ChatGPT makes users well aware of the inherent risk of occasional misleading information.  In this specific case, OpenAI’s motion notes that when the reporter asked ChatGPT to summarize the complaint, ChatGPT responded several times with disclaimers, including that ChatGPT could not access the underlying document and that the reporter needed to consult a lawyer to receive “accurate and reliable information” about it.  Id.

Beyond these fact-specific defenses, several potential legal defenses to defamation claims based on LLM-generated outputs are available even at the dispositive motion stage.  These include:

  1.  “Hallucinations” are not the result of human choice or agency, and thus cannot meet either the “actual malice” or “reckless disregard” intent requirements for defamation claims. 
  2. By its nature, generative AI is experimental.  Industry-standard guidance on the responsible use of AI emphasizes that human reviewers should verify the accuracy of LLM-generated outputs before relying on them in any way.  Against that backdrop, defendants can credibly argue that no reasonable recipient could understand AI-generated content to be intended as a statement of fact, meaning that, as a matter of law, the output cannot be libelous.
  3. Relatedly, LLM-based technologies often employ disclaimers that make the risk of hallucination clear, further undercutting the argument that LLM-generated outputs can be considered “statements of fact.”  For example, OpenAI warns specifically that “ChatGPT may produce inaccurate information about people, places, or facts.”  And ChatGPT users must affirmatively agree to “take ultimate responsibility for the content being published.”
  4. Generative AI programs do not “publish” statements, but merely create draft content that a user can ultimately choose to publish or not publish.  Just as word processing software or a web-based research service does not “publish” statements, the defense would go, an LLM-powered generative AI program is simply a tool to assist in the drafting process.
  5. LLM-generated outputs are, in whole or in part, the product of the user’s input to the program – or, in Mr. Walters’ case, the user’s repeated inputs to the program – meaning that the user is an active participant in creating any alleged statement of fact.
  6. Any allegedly false information is likely the product of previously published material contained in a generative AI model’s training dataset.  This would not be a total defense, as many state laws impose defamation liability even for “reposting” defamatory material, but the argument could be a solid tool in a mitigation or comparative-fault analysis.

OpenAI raised several of these defenses in its motion.  Id. at ECF No. 12.  In response to OpenAI’s motion, Mr. Walters amended his complaint (id. at ECF No. 30), to which OpenAI will respond by October 13 (id. at ECF No. 32).  Watch this space.
