A recent SDNY bench ruling in *United States v. Heppner* found that generative AI outputs, independently created by a defendant, are not protected by attorney-client privilege or the work product doctrine. This decision underscores the critical importance of attorney direction and confidentiality in AI tool usage for legal analysis. Companies must reconsider their governance frameworks for AI to preserve privilege.
On February 10, 2026, U.S. District Judge Jed Rakoff of the Southern District of New York issued a pivotal bench ruling with significant implications for the intersection of artificial intelligence and legal privilege. The decision, stemming from *United States v. Heppner*, No. 25-cr-00503-JSR (S.D.N.Y. Oct. 28, 2025), established that a defendant's independent use of generative AI to analyze legal exposure does not confer protection under attorney-client privilege or the work product doctrine. This ruling challenges prevailing assumptions regarding the confidentiality of AI-generated legal insights and mandates a re-evaluation of current practices.
The case involved Bradley Heppner, a former financial services executive facing securities fraud charges. Heppner utilized Anthropic’s Claude, a third-party generative AI tool, to input prompts concerning the government’s investigation and his potential legal exposure. These prompts incorporated facts learned from his counsel, and the platform subsequently generated written responses. The core issue arose when approximately thirty-one AI-generated documents, comprising both prompts and outputs, were seized from Heppner's electronic devices during his arrest on November 4, 2025. Defense counsel asserted privilege over these materials, arguing they were prepared for discussions with counsel and later shared with them, despite conceding they were created on the defendant’s own initiative, not at counsel’s direction.
Judicial Rationale: Why AI Outputs Lacked Privilege
Judge Rakoff's ruling decisively rejected the assertion of privilege, outlining several key reasons why the AI documents were not protected. The court emphasized that traditional privilege requirements—confidentiality, attorney involvement, and preparation at counsel’s direction—remain paramount, irrespective of the sophistication of the technology employed.
AI Platforms Are Not Legal Counsel
The attorney-client privilege protects confidential communications between a client and an attorney for the purpose of obtaining legal advice. The court found that the AI documents did not constitute communications with an attorney. Instead, they were the result of research activity. Notably, Anthropic’s Claude provides explicit warnings that users should consult a “qualified attorney” when addressing legal matters, reinforcing that the AI tool itself does not provide legal advice.
This distinction is critical. Independent querying of an AI tool, even for legal analysis, is not equivalent to seeking advice from a licensed legal professional. The AI's outputs are akin to conducting a Google search or consulting library books, which, while potentially informative, do not inherently become privileged simply because the user later discusses them with counsel.
Confidentiality Compromised by Platform Terms
A fundamental requirement for attorney-client privilege is the preservation of confidentiality. The court determined that Heppner lacked a reasonable expectation of privacy in his AI prompts and outputs due to the terms of service of the Claude platform. Publicly accessible AI tools like Claude often reserve rights to retain, train on, and disclose user data.
Specifically, Claude's terms indicated it might disclose data to “governmental regulatory authorities” and “third parties.” This lack of robust confidentiality protections undermined any claim that the communications were made in confidence. The ruling left open the question of whether a “closed” enterprise AI environment, specifically designed to protect confidentiality, might yield a different outcome.
Work Product Requires Attorney Direction
The work product doctrine shields materials prepared by or at the direction of counsel in anticipation of litigation. In Heppner’s case, the defendant acted independently in generating the AI materials. The fact that he later shared these outputs with his counsel did not retroactively confer work product protection.
The government's analogy was illustrative: if Heppner had conducted Google searches or checked out books from a library, those records would not be protected merely because he later discussed their contents with his attorney. The court indicated that the analysis “might be different” if counsel had explicitly directed the defendant to conduct the AI searches, highlighting the necessity of attorney oversight for work product protection.
Practical Implications for Enterprise AI Use
This ruling carries significant implications for executives, compliance leaders, and legal professionals increasingly leveraging generative AI tools for legal and regulatory exposure analysis, fact organization, and strategic decision-making. Without careful structuring, these AI interactions risk generating discoverable material rather than privileged insights.
Three practical considerations emerge:
- Independent AI use creates discoverable material: Unsupervised use of AI to explore legal exposure or regulatory issues, even in preparation for counsel discussions, can produce non-privileged documents.
- Enterprise governance is paramount: If an AI platform's terms permit data retention, training, or disclosure to regulators, privilege claims may fail. Procurement and governance strategies must weigh litigation risk alongside cybersecurity and privacy concerns.
- Structure and process are outcome-determinative: While this decision did not directly address counsel-directed use on secure enterprise platforms with strict confidentiality terms, such distinctions are likely to be critical. Attorney involvement in structuring and supervising prompts in real time, as part of litigation preparation, could be decisive.
Companies should treat AI as a powerful but potentially disclosure-prone tool, not an inherent legal advisor. Thoroughly evaluating an AI tool's confidentiality provisions, whether it is a closed enterprise program or a public retail offering, is essential. Understanding how a vendor trains its models, and whether training draws on a closed set of client-provided documents or on all collected user data, is likewise crucial for preserving privilege.
Key Takeaways
- Independent use of generative AI for legal analysis, without attorney direction, does not confer attorney-client privilege or work product protection.
- Publicly accessible AI platforms with terms allowing data retention, training, or disclosure to third parties undermine the confidentiality essential for privilege claims.
- Attorney direction is a critical factor for establishing work product protection for AI-generated materials.
- Companies must implement robust AI governance frameworks and involve counsel early to ensure privilege preservation when using AI tools for legal matters.
- Careful selection of AI platforms, prioritizing those with strong confidentiality and data isolation features, is vital.
What Comes Next
The *United States v. Heppner* ruling serves as an early, emphatic reminder that courts are unlikely to expand traditional privilege doctrines simply because the technology is new. The core tenets of confidentiality, attorney involvement, and preparation at counsel's direction remain the touchstones for privilege claims. As AI becomes increasingly integrated into corporate governance and compliance functions, the preservation of privilege will depend less on the technology itself and more on how carefully it is deployed and managed. Boards, executives, and compliance leaders should read this decision as a clear directive to approach AI use with the same rigor applied to any other sensitive legal communication, ensuring that deployment aligns with established privilege principles. Although a bench ruling is not binding precedent, the decision will likely inform future judicial treatment of AI's role in legal practice and corporate compliance.