Can Lawyers Use ChatGPT? The Client Confidentiality Problem
Lawyers are under pressure to work faster, write better, and bill more efficiently. AI tools like ChatGPT and Claude promise exactly that — and most attorneys are quietly using them. But there's a problem almost nobody is talking about: every time you paste a client's name, case details, or sensitive information into one of these tools, you may be violating your professional obligations.
This isn't alarmism. It's a straightforward reading of the rules you already know.
What ABA Model Rule 1.6 Says
ABA Model Rule 1.6 requires lawyers to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client. Most state bars have adopted substantially similar rules.
The question is whether sending client information to ChatGPT or Claude constitutes such a disclosure. The answer depends on two things: what data you're sending, and what those platforms do with it.
The Real-World Risk
Consider a common scenario: you're drafting a letter for a client in a contentious divorce. You paste their name, their spouse's name, details about assets, and the nature of the dispute into ChatGPT to help you draft the letter faster. In seconds, you have a solid first draft.
But you've also just transmitted confidential client information to a third-party AI provider. That information may be stored. It may be reviewed by humans. It may, in some configurations, be used to train future models.
Several state bar ethics opinions have already addressed this. The New York City Bar Association, the Florida Bar, and others have issued guidance warning that lawyers must conduct due diligence on AI tools before using them with client data.
What "Reasonable Efforts" Actually Means
The standard isn't perfection — it's reasonable efforts. Courts and bar associations consider factors including the sensitivity of the information, the likelihood of disclosure, the cost of safeguards, and the difficulty of implementing protective measures.
This means lawyers aren't necessarily prohibited from using AI tools. But they are required to take reasonable precautions. Those precautions might include:
- Using AI platforms that explicitly commit not to train on your data
- Anonymising client information before sending it to AI tools
- Establishing firm-wide policies on AI use
- Reviewing AI output before sending it to clients
The Anonymisation Approach
One practical solution is anonymisation — replacing real client names and sensitive details with placeholders before sending to AI, then restoring them in the response. If the AI never receives real identifying information, you've significantly reduced your exposure.
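To make the idea concrete, here is a minimal sketch of the placeholder approach in Python. This is an illustration of the general technique only, not Snitch's actual implementation; the function names and token format are assumptions for the example.

```python
def anonymise(text, names):
    """Replace each real name with a structured token like [NAME_1].

    Returns the anonymised text plus a mapping so the tokens can be
    restored in the AI's response later.
    """
    mapping = {}
    for i, name in enumerate(names, start=1):
        token = f"[NAME_{i}]"
        mapping[token] = name
        text = text.replace(name, token)
    return text, mapping


def restore(text, mapping):
    """Swap the tokens in the AI's response back to the real names."""
    for token, name in mapping.items():
        text = text.replace(token, name)
    return text


# The AI only ever sees the tokenised prompt, never the real names.
prompt, mapping = anonymise(
    "Draft a letter for Jane Roe regarding her dispute with John Doe.",
    ["Jane Roe", "John Doe"],
)
# prompt: "Draft a letter for [NAME_1] regarding her dispute with [NAME_2]."

# ...the prompt is sent to the AI, which replies using the same tokens...
response = "Dear [NAME_1], I write concerning your matter with [NAME_2]."
print(restore(response, mapping))
# prints: "Dear Jane Roe, I write concerning your matter with John Doe."
```

A production tool would detect names and case references automatically rather than taking a hand-written list, but the round trip, tokenise before sending, restore after receiving, is the core of the approach.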
This is what Snitch does automatically. You type normally, including real client names and case details. Before anything leaves your browser, Snitch replaces identifying information with structured tokens like [NAME_1] and [CASE_REF_1]. The AI works with the anonymised text and returns a response using those tokens. Your browser then restores the real information in the final output.
The result: you get the full benefit of AI assistance without ever transmitting client PII to a third-party server.
What You Should Do Now
- Review your current AI usage. Are you pasting client names or case details into AI tools? If so, you need a policy.
- Check your state bar's guidance. Many bars have issued ethics opinions on AI. Find yours and read it.
- Implement technical safeguards. That might mean an enterprise AI agreement with data-processing terms, or a tool like Snitch that anonymises client data before it is sent.
- Document your approach. Being able to demonstrate reasonable efforts matters if you ever face a complaint.
The Bottom Line
AI is going to reshape legal practice. Lawyers who use it effectively will have a significant competitive advantage. But the profession's ethical obligations don't disappear because a new technology is convenient.
The good news: using AI compliantly doesn't have to mean using it slowly or awkwardly. With the right approach, you can have both the productivity gains and the professional protection.
Use AI without the liability.
Snitch anonymises your client data before it reaches Claude — so you get all the productivity with none of the exposure.
Start your free trial →