Your attorneys are already using AI. That's the problem.

March 2026 · Phil Komarny

Here's something nobody in your firm wants to say out loud: your people are using ChatGPT. Right now. Today. They're pasting client communications into it to draft responses. They're uploading contracts to get quick summaries. They're feeding it deal terms, case strategy, privileged analysis. They're doing it on their phones, on their personal laptops, during the commute home. And nobody told them to stop because nobody wants to have that conversation.

I get it. The tools are good. Embarrassingly good. An associate who spent three hours summarizing a 200-page contract last year can get 80% of the way there in four minutes now. A paralegal who used to dread conflicts checks can ask a question in English and get something useful back. The productivity gains are real. Nobody's arguing that.

But in February 2026, a federal judge in the Southern District of New York decided something that changes the math for every law firm in the country.

In United States v. Heppner, the court ruled that uploading confidential materials to a consumer AI tool destroys attorney-client privilege.

Not "may compromise." Not "raises questions about." Destroys. The judge ordered OpenAI to produce 20 million conversation logs. Twenty million. That's not a hypothetical risk in a law review article. That's a court order, in a real case, with real consequences, right now.

Think about what that means for your firm. Every time an attorney pastes a privileged memo into ChatGPT, that memo is now potentially discoverable. Every time someone uploads a contract with confidential terms, those terms lose trade secret protection. Every time case strategy gets fed into a consumer AI tool, opposing counsel has a new avenue to explore in discovery. And they will explore it, because they read the same ruling you just read.

The discovery requests are going to change. They're probably already changing. "Identify all instances in which any person used generative AI tools in connection with the subject matter of this litigation." That's a sentence you're going to see in interrogatories. If you're a litigator, you should be putting that sentence in your own discovery requests, too. Because if the other side's people were sloppy with AI, you want to know about it.

Look, I've spent years helping organizations figure out how to use AI without blowing themselves up. And the pattern is always the same. The technology arrives. People start using it before policy catches up. Something bad happens. Then everyone scrambles. I've watched it happen in higher education, in healthcare, in government. Law firms are not special. The same pattern applies. The difference is that the consequences in law are privilege and confidentiality. There's no "oops" that fixes a privilege waiver.

This isn't just a litigation problem. It cuts across everything.

If your firm handles corporate transactions, your associates and clients are probably running due diligence through consumer AI. Financial models, cap tables, deal terms. All of that loses protection the moment it hits a server you don't control. NDAs in most deal rooms prohibit third-party disclosure. Under Heppner, a consumer AI tool is a third party. Full stop.

If you do environmental work, think about what happens when someone uploads a Phase II site assessment to get a quick summary. That data now exists on someone else's infrastructure. Regulators can find it. Opposing parties in toxic tort litigation can request it. The EPA doesn't need your permission to subpoena OpenAI.

If you do insurance coverage work, this is where it gets interesting. Insurers are going to start arguing that policyholders who used consumer AI with confidential business information breached their duty to maintain reasonable security measures. Cyber liability policies routinely condition coverage on adherence to stated security protocols. D&O policies carry similar conditions. Uncontrolled AI use could void coverage. That's a new category of coverage dispute that barely existed a year ago.

And if you do risk management, congratulations. AI data leakage is now a first-order risk that belongs in every client's risk profile, right next to cyber breach and regulatory exposure. It shows up everywhere: employee use of consumer AI with confidential data, vendor AI use with client data, and the fact that most AI providers shifted to opt-out training models in August 2025, meaning your data is training their models unless someone remembered to flip a switch.

The firms that get this right will have a structural advantage. Not a marketing advantage. A real one.

Here's the thing. Most firms are going to respond to Heppner by sending a memo. "Don't use consumer AI with client data." The memo will be ignored within a week because the productivity gains are too real, and nobody wants to go back to summarizing contracts by hand. Prohibition doesn't work when the alternative is ten times faster.

The firms that actually solve this will give their people something better to use. Private AI that runs on the firm's own infrastructure. Tools that connect to iManage, to Clio, to SharePoint, to billing systems, to email. Tools that do everything ChatGPT does, except the data never leaves the building. No third-party servers. No training on your data. No conversation logs for someone to subpoena. Privilege intact. Confidentiality intact. And attorneys who are actually more productive than the ones sneaking around with consumer tools, because the private system can see across all the firm's data at once.
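If you want a concrete picture of what "the data never leaves the building" means, here's a minimal sketch. It is not anyone's actual product code: it assumes an OpenAI-compatible model server (vLLM, for example) running on the firm's own network, and the endpoint, model name, and log path are all hypothetical placeholders.

```python
# A minimal sketch of "the data never leaves the building."
# Everything named here is a hypothetical placeholder: the endpoint
# assumes an OpenAI-compatible server (vLLM, for example) running
# on the firm's own network, and the log path is illustrative.
import json
import datetime
import urllib.request

PRIVATE_ENDPOINT = "http://llm.firm.internal:8000/v1/chat/completions"  # resolves only inside the firm's network
AUDIT_LOG = "/var/log/firm-ai/audit.jsonl"  # append-only log on firm-controlled storage

def ask_private_ai(user_id: str, matter_id: str, prompt: str) -> str:
    payload = json.dumps({
        "model": "firm-private-model",  # weights hosted on firm hardware
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    request = urllib.request.Request(
        PRIVATE_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        answer = json.load(response)["choices"][0]["message"]["content"]

    # The audit trail records who asked, for which matter, and when.
    # Log metadata rather than the prompt itself, so the log is useful
    # in a dispute without becoming a second copy of privileged content.
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user_id,
            "matter": matter_id,
            "prompt_chars": len(prompt),
        }) + "\n")
    return answer
```

The point isn't the thirty lines of Python. It's that the question, the answer, and the record of who asked all stay on hardware the firm controls.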

That's what we built VaultedMinds.ai to do. Not because we predicted Heppner. Because we watched what was happening in every other industry and knew law would be next. The technology was always going to outrun the policy. The question was whether firms would have an answer ready when the court forced the issue.

The court just forced the issue.

Discovery requests are going to include AI usage questions. Insurers are going to deny claims based on AI data leakage. NDAs are going to need AI-specific language. And every firm that doesn't have a private AI strategy is going to spend the next two years playing defense on problems they could have avoided.

Or you can be the firm that gives its people the tools they're already reaching for, on infrastructure you actually control, with audit trails that hold up in court.

That's not a technology decision. It's a risk management decision. And if you're reading this, you already know which way it goes.

If you want to talk about what this looks like for your firm, reach out. Thirty minutes. No slide deck. Just a conversation about what fits.