
Your AI Conversations Are Not Confidential — And a Federal Court Just Said So

On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled from the bench in United States v. Heppner that documents a criminal defendant generated using the consumer version of Anthropic's Claude were protected by neither the attorney-client privilege nor the work product doctrine. A week later, he issued a written opinion describing the question as one of first impression "nationwide."

I think parts of the court's reasoning are wrong — or at least underdeveloped — in ways that matter. But the opinion landed on a real problem. Lawyers, clients, and judges are making consequential decisions about AI tools without fully understanding how those tools handle data. Heppner is worth examining less for the doctrine it announces than for the knowledge gap it reveals.

This post lays out what happened in Heppner, explains what I think the opinion gets right and wrong, and then walks through what Anthropic's data-handling policies actually say across Claude's consumer and commercial tiers — the very policies the court relied on but did not examine closely. The same structural divide exists across every major LLM provider, and the legal implications extend well beyond this one case.

What Heppner held

Bradley Heppner, a former executive facing federal securities fraud charges, used the consumer version of Claude to analyze his legal exposure and develop defense theories. Federal agents seized the AI-generated documents from Heppner's devices. The government sought their production; Heppner resisted, invoking attorney-client privilege and the work product doctrine.

Judge Rakoff rejected both claims on multiple grounds. On privilege, the court articulated three independent reasons for denial:

First, Claude is not an attorney. It has no law license, owes no fiduciary duties, and cannot form an attorney-client relationship. Privilege requires a "trusting human relationship" with "a licensed professional" — and an AI tool is not one.

Second, Heppner had no reasonable expectation of confidentiality. The court pointed to Anthropic's privacy policy, which disclosed that user inputs and outputs could be used for model training and shared with third parties, including government authorities.

Third, Heppner was not communicating with Claude at the direction of counsel to obtain legal advice. Claude's own disclaimer states it cannot provide legal advice. This distinguished the case from the Kovel doctrine, which protects communications with non-lawyer professionals hired by attorneys.

On work product, defense counsel conceded that Heppner created the documents "of his own volition" and that the legal team "did not direct" him to use Claude. Without attorney direction, work product protection does not attach.

Where I think the reasoning falters

The first and third grounds — no attorney-client relationship, no direction by counsel — are each independently sufficient to resolve the case. An AI tool is not a lawyer, and Heppner was not acting at his lawyer's direction. Those two findings doom both the privilege claim and the work product claim. Full stop.

The confidentiality analysis in the second ground is where things get shaky, and it is the part of the opinion that has generated the most commentary — and the most anxiety.

Judge Rakoff treated Anthropic's consumer privacy policy as establishing that Heppner could have "no reasonable expectation of confidentiality" in his AI conversations. But the court's analysis has a significant gap: it never inquired into whether Heppner had opted out of model training (a setting available in Claude's consumer interface) or whether he was using a paid tier. The opinion treats the default consumer terms as conclusive without examining what the user actually agreed to or configured.

This matters because the confidentiality holding — which was not necessary to the result — is the part of the opinion most likely to be cited broadly. And it rests on an incomplete factual record. As the policy comparison below demonstrates, Anthropic's consumer terms create meaningfully different data-handling regimes depending on whether a user has opted in or out of model training. The court did not grapple with that distinction.

There is also a subtler problem. The opinion conflates a platform's contractual permission to use data with the practical likelihood that any human will ever see it. Consumer AI privacy policies reserve broad rights, but the actual probability of a specific conversation being reviewed by a person — absent a safety flag or legal process — is vanishingly low. Whether that distinction should matter for privilege purposes is a genuinely hard question. Heppner does not engage with it.

None of this means the opinion is unimportant. It is the first federal decision to address AI and privilege head-on, and it will shape how courts and litigants think about these issues going forward. But its broadest holding — that consumer AI use necessarily destroys confidentiality — rests on reasoning that future courts should scrutinize carefully.

What the case gets right: a knowledge problem

Where Heppner is most valuable is as a signal. Whatever one thinks of the doctrinal analysis, the case exposes a widespread failure to understand how consumer AI tools handle data. Heppner apparently did not know — or did not care — that his AI conversations were governed by terms that reserved broad data-use rights for the platform provider. His lawyers did not anticipate that their client's independent AI use would create a discovery problem. And the court itself did not dig into the specific settings or tier the defendant used.

This is not an isolated failure. Most lawyers I talk to cannot articulate the difference between a consumer and enterprise AI deployment. Most clients do not read privacy policies. And most courts have not yet had to think carefully about how AI data handling intersects with privilege doctrine.

Heppner should change that — not because its reasoning is airtight, but because it demonstrates what happens when no one in the room understands the technology well enough to ask the right questions.

What Anthropic's policies actually say

Since Heppner turned on Anthropic's terms, this is the right place to start. I went through Anthropic's published policies — the Consumer Terms of Service, the Commercial Terms of Service, the Privacy Policy, and the Privacy Center — to compare what Claude's consumer and commercial tiers actually promise. What follows is a synthesis of that research.

The core divide: consumer terms vs. commercial terms

Anthropic's policies split along two fundamental lines: Consumer Terms (Free, Pro, Max) and Commercial Terms (Team, Enterprise, API, Education, Government). This distinction — not the price paid — determines virtually every data right the user holds. The Commercial Terms state explicitly: "Services under these Terms are not for consumer use. Our consumer offerings (e.g., Claude.ai) are governed by our Consumer Terms of Service instead."

This means a Pro or Max subscriber paying $20 or $100 per month operates under the same legal framework as a free user. Paying more buys additional model access and features, but it does not change how Anthropic treats your data.

Model training: the sharpest divide

For Free, Pro, and Max users, Anthropic may use conversations to train its models — and since September 28, 2025, this is the default. Users who did not affirmatively opt out before the deadline were enrolled in training. Opting out remains available through Claude's settings, but the burden is on the user to act.

For Team, Enterprise, API, and Education/Government users, Anthropic contractually prohibits itself from training on customer content. The Commercial Terms are unambiguous: "Anthropic may not train models on Customer Content from Services" — with no exceptions and no reliance on user-level toggles.

Data retention: a 60× gap

Retention periods are directly tied to training status for consumer plans, creating a striking disparity:

Consumer users who have opted in to training (or failed to opt out) face retention of up to five years for de-identified conversation data. Consumer users who have opted out see their conversations retained for 30 days before deletion. In either case, content flagged for safety or policy violations can be retained for up to seven years, regardless of the user's training preference.

On the commercial side, API input and output logs are retained for seven days. Enterprise accounts default to 30 days, with the option to negotiate Zero Data Retention — under which inputs and outputs are processed in real time and not stored at all. No consumer plan, regardless of price, offers true zero retention.
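The "60×" figure in the heading is just the ratio of the two consumer retention periods above. A back-of-envelope check (the 365-day year is my assumption for the arithmetic; the five-year and 30-day periods come from the policy comparison):

```python
# Rough arithmetic behind the consumer retention gap:
# up to five years for training-enrolled data vs. 30 days after opting out.
DAYS_PER_YEAR = 365  # assumption for back-of-envelope math

opted_in_days = 5 * DAYS_PER_YEAR   # training-enrolled consumer retention
opted_out_days = 30                 # post-opt-out deletion window

ratio = opted_in_days / opted_out_days
print(f"{opted_in_days} days vs {opted_out_days} days: about {ratio:.0f}x longer")
```

That works out to roughly a sixtyfold difference, before even counting the seven-year window for safety-flagged content.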

Data ownership and IP

The Commercial Terms contain an unusually strong ownership clause absent from the consumer terms. They provide that the customer "retains all rights to its Inputs, and owns its Outputs," that "Anthropic disclaims any rights it receives to the Customer Content under these Terms," and that Anthropic "hereby assigns to Customer its right, title and interest (if any) in and to Outputs."

Consumer users have no equivalent contractual assignment. Under the consumer framework, Anthropic holds a license to use inputs and outputs for model improvement unless the user opts out.

Data controller vs. data processor

This distinction carries significant weight under GDPR and analogous privacy regimes. For consumer plans, Anthropic acts as the data controller — it determines the purposes and means of processing user data. For Enterprise and API accounts, Anthropic functions as a data processor operating under a Data Processing Addendum, with the commercial customer serving as the controller.

The practical consequence: a consumer user's data is governed by Anthropic's privacy choices. An enterprise customer's data is governed by the customer's own policies, with Anthropic acting under instruction.

Employee access and confidentiality

For consumer plans, Anthropic employees may access conversations only if the user explicitly consents via feedback, or if access is required for Usage Policy enforcement — in which case only the Trust & Safety team may view content on a need-to-know basis.

For commercial plans, customer content is contractually designated as Confidential Information under the Commercial Terms. Anthropic may use it only to exercise its rights under the contract and must protect it with at least the same care it applies to its own confidential information.

Two further protections — Zero Data Retention and HIPAA Business Associate Agreements — are available exclusively on commercial tiers. Under ZDR, inputs and outputs are not stored; the sole exception is User Safety classifier results retained for Usage Policy enforcement. A BAA imposes specific configuration requirements and excludes certain features (web search, for instance, falls outside BAA coverage). Neither protection is available on any consumer plan at any price point.

The comparison distills to a structural reality: consumer Claude users — whether free or paying $100 per month — operate under terms that allow Anthropic to train on their data by default, retain it for up to five years, and act as the data controller with broad discretion. Commercial Claude users operate under a contractual regime that prohibits model training, treats their content as confidential information, assigns them ownership of outputs, and offers zero-retention options.

The pattern holds across providers

Anthropic's tiered structure is not an outlier. OpenAI's ChatGPT follows the same pattern. On Free and Plus plans, OpenAI's Data Usage for Consumer Services FAQ states that it "may use" consumer content to improve its models unless the user disables training — while retaining the right to log interactions for safety and abuse monitoring regardless. On Edu and Enterprise plans, OpenAI commits not to train on business data, provides admin-controlled retention windows, and offers Zero Data Retention and configurable data residency.

The structural divide is the same: consumer terms grant the provider broad data-use rights with an opt-out toggle; commercial terms prohibit model training by contract and give the customer control over retention, residency, and access. Google's Gemini, Meta's Llama-based offerings, and other major LLM providers follow similar patterns. The consumer-versus-commercial distinction is an industry-wide architectural choice, not a quirk of any single provider.

This matters for the Heppner analysis because the court's reasoning — resting on the provider's privacy policy and terms of service — would apply with equal force to any consumer LLM deployment, not just Claude.

What this means going forward

Heppner will be cited for the proposition that consumer AI conversations are not confidential. That proposition is probably too broad as stated — it ignores opt-out settings, obscures the fact that the meaningful legal divide runs between consumer and commercial terms rather than between free and paid tiers, conflates contractual permission with practical disclosure risk, and was not necessary to the holding. But it captures something real: consumer AI platforms operate under terms that were not designed with legal privilege in mind, and users who rely on those platforms for sensitive work are taking risks they may not understand.

The practical response is not to avoid AI tools. It is to understand what you are agreeing to when you use them — and to recognize that paying for a subscription does not, by itself, change the legal framework governing your data. For lawyers, that means learning the difference between consumer and commercial deployments and advising clients accordingly. For organizations, it means treating AI procurement as a legal risk question, not just an IT question. And for courts, it means doing the factual work that Heppner did not: examining the specific terms, settings, and tier a user actually employed before concluding that confidentiality has been waived.

The gap between consumer and commercial AI products is wide, it is well-documented, and it is consistent across every major provider. The problem is not that the information is unavailable. The problem is that almost nobody — lawyers, clients, and judges included — reads it.


The Anthropic policy comparison in this post draws on Anthropic's Consumer Terms of Service, Commercial Terms announcement, consumer terms and privacy policy update, and Privacy Center. OpenAI policy references draw on the Data Usage FAQ, platform documentation, and privacy policy.