
Federal Court Rules Some AI Chats Are Not Protected by Legal Privilege: What It Means For You

Client Alert | 4 min read | 02.18.26

Artificial intelligence tools have already transformed the practice of law, but they come with serious legal risks that are now taking shape. A recent ruling by a federal judge in the U.S. District Court for the Southern District of New York highlights one such risk: certain inputs and outputs from commercial AI models may not be considered privileged attorney-client communications or protected by the work-product doctrine.

On February 10, 2026, Southern District of New York Judge Jed Rakoff ruled orally in a criminal case that prompts and outputs created by a criminal defendant using a public AI tool were neither attorney-client privileged nor protected by the work-product doctrine, even though the defendant argued that he prepared reports synthesizing defense strategy after speaking with his attorneys in anticipation of a potential indictment. Judge Rakoff followed up his bench ruling with a first-of-its-kind written opinion on February 17, 2026.[1]

The defendant, Bradley Heppner, used Anthropic’s AI tool Claude to run queries after he received a grand jury subpoena and it became clear that he was a target of the government’s investigation. Heppner claimed he spoke with his counsel and then generated approximately 30 documents containing his prompts and Claude’s responses (the AI Documents), which he later shared with his defense attorneys. The government seized the AI Documents, which were stored on an electronic device, while executing a search warrant at Heppner’s home at the time of his arrest.

The Defense’s Position

As defense counsel represented to the government, Heppner created the AI Documents “in order to provide response to attorneys for legal advice” and for the “purpose of discussing the issues” with his counsel. Further, the defense argued that, because the documents contained information that Heppner learned directly from counsel, they should be covered by both attorney-client privilege and work-product protection.

Heppner also argued that if he relied on the AI Documents at trial, it could create a “witness-advocate conflict” because the documents contained information from the attorneys, potentially making them witnesses.

The Government’s Position

In opposing the defendant’s claims of attorney-client privilege and work-product protection, the government argued that the AI Documents should not be considered privileged or protected for the following reasons:

Attorney–Client Privilege

The AI Documents “fail each element of the attorney-client privilege.” First, the AI model is “obviously not an attorney.” Second, the AI Documents were not created for the purpose of obtaining legal advice. Third, the AI Documents are not confidential because they were created with a “publicly accessible” AI tool operated by a third party. Indeed, Anthropic’s Privacy Policy advises users that prompts and outputs may be used to train AI tools and may be disclosed to regulatory authorities and “third parties.” Finally, preexisting, non-privileged materials do not become privileged simply because they are later shared with counsel.

Work-Product Protection

The defendant created the AI Documents on his own initiative and not at the direction of counsel, which is a requirement of work-product protection.

Judge Rakoff’s Ruling

At a pretrial conference, Judge Rakoff ruled from the bench and agreed with the government that the AI Documents were not privileged.

In his follow-on written opinion, Judge Rakoff noted that the AI Documents were not privileged for three reasons: (1) the AI Documents are not communications between the defendant and his counsel because Claude is not an attorney; (2) the communications between Heppner and Claude were not confidential because Claude explicitly disclaims confidentiality and users agree that information shared with and generated by Claude can be disclosed to third parties; and (3) Heppner did not communicate with Claude to obtain legal advice from Claude, and sharing documents with counsel does not make those documents privileged.

Judge Rakoff also held that the AI Documents did not enjoy work-product protection because Heppner did not prepare them “at the behest of counsel,” and they did not disclose counsel’s strategy.

Judge Rakoff found Heppner’s point regarding the potential witness-advocate conflict noteworthy and suggested it could theoretically cause a mistrial, but he did not base his ruling on that issue.

Key Takeaways

This ruling underscores the risks of using commercial (i.e., public) AI platforms in connection with sensitive legal information or discussions.  However, it leaves the door open to the possibility that AI-generated documents may be protected under certain circumstances. Companies and individuals planning to use AI tools in connection with legal, compliance, or sensitive matters should consider the following best practices to establish privilege and work-product protection:

  • To avoid a potential waiver argument, use only AI tools that are non-public and “closed,” meaning the prompts and outputs are not subject to training, not exposed to regulators or third parties, and not subject to privacy policies or terms of use that disclaim the confidentiality of inputs and outputs. As Judge Rakoff warned, quoting a January 2026 Southern District of New York ruling in In re OpenAI, Inc. Copyright Infringement Litigation, No. 25 MD 3143, ECF No. 1021[2] at 3 (Jan. 5, 2026): “AI users do not have substantial privacy interests in their ‘conversations with [another publicly accessible AI platform] which users voluntarily disclose[]’ to the platform and which the platform ‘retains in the normal course of its business.’”
  • Clients should discuss intentions to use AI tools with counsel prior to creating documents related to a legal or compliance matter and establish specific AI-related guidelines.
  • To preserve a claim of work-product protection, clients should create documents using AI tools only at the explicit direction of counsel and, when in doubt, refrain from doing so.
  • Stay informed of the risks that these tools present, including the ethical guidelines and case law on this issue.

[1] Available at https://storage.courtlistener.com/recap/gov.uscourts.nysd.652137/gov.uscourts.nysd.652137.27.0.pdf.

[2] In re OpenAI, Inc., Copyright Infringement Litig., No. 25 MD 3143, ECF No. 1021 at 3 (Jan. 5, 2026).
