In February, Judge Jed Rakoff of a U.S. federal court ruled that Bradley Heppner’s conversations with Anthropic’s Claude about his legal situation were protected by neither attorney-client privilege nor the work-product doctrine. The decision, the first of its kind in the U.S., has prompted more than a dozen major law firms to warn clients about the risks of using public AI chatbots for legal matters.
In the case, United States v. Heppner, Judge Rakoff of the Southern District of New York held that a defendant’s private conversations with Claude lacked legal protection. The ruling signals that AI platforms carry none of the confidentiality obligations a lawyer does, so discussions held on such platforms can be used in court as evidence against their users.
Heppner, former chairman of GWG Holdings and founder of Beneficent, was charged with securities and wire fraud in November 2025. Before formally retaining defense counsel, he sought legal strategy advice from Claude, and the conversations, later seized by the FBI, became part of the legal proceedings. Heppner argued the documents should remain privileged; Rakoff disagreed, holding that an AI platform owes no duty of loyalty, cannot form a privileged relationship, and offers no reasonable expectation of confidentiality.
Anthropic’s terms of service permit data collection and sharing with third parties, which undercuts any claim of privilege or protection. The rejection was reinforced by the fact that Heppner was not acting under an attorney’s guidance when he used the AI.
Meanwhile, a federal magistrate in Michigan reached a contrasting decision in Warner v. Gilbarco, Inc., holding that a pro se plaintiff’s ChatGPT conversations were protected as work product. Similarly, in Morgan v. V2X, a pro se litigant received a protective ruling. Both cases involved self-represented individuals operating under distinct civil procedure rules.
In the wake of Rakoff’s ruling, law firms are cautioning clients against using AI platforms for legal matters; New York firm Sher Tremonte has gone so far as to revise its client contracts to address AI-related confidentiality issues. Firms such as Orrick and Crowell & Moring now advise treating public AI platforms as non-confidential and avoiding them for legal concerns unless directed by an attorney, favoring instead private AI deployments that safeguard input privacy.
