Healthcare

Is ChatGPT HIPAA Compliant? The Honest Answer

March 19, 2026 · 4 min read

This is one of the most searched questions in healthcare right now — and the answer most people find online is frustratingly vague. So let's be direct.

The verdict
Standard ChatGPT (free and Plus plans) is not HIPAA compliant and cannot be used with patient data. ChatGPT Enterprise may be used with a BAA in place. The safest approach for most healthcare professionals is to anonymise patient data before it enters any AI tool.

Why Standard ChatGPT Isn't HIPAA Compliant

HIPAA compliance for a business that handles Protected Health Information (PHI) on your behalf requires a Business Associate Agreement (BAA). A BAA is a legal contract that obligates the vendor to handle PHI appropriately, implement security safeguards, and report breaches.

OpenAI does not offer BAAs for standard ChatGPT accounts — free or Plus. This means that using ChatGPT with patient data on these plans is a HIPAA violation, full stop, no matter how carefully you handle the data otherwise.

What About ChatGPT Enterprise?

OpenAI offers BAAs for ChatGPT Enterprise customers. Enterprise plans also provide stronger data protection guarantees — OpenAI commits not to use Enterprise conversations for model training, and data is encrypted and isolated. If your organisation signs a BAA with OpenAI for an Enterprise account, you can potentially use ChatGPT with PHI.

However, "potentially" is doing a lot of work in that sentence. A BAA is necessary but not sufficient for HIPAA compliance. You also need to ensure appropriate access controls, audit logging, breach notification procedures, and workforce training. HIPAA compliance is a programme, not a checkbox.

What About Claude?

Anthropic offers enterprise agreements with data protection commitments. For individual healthcare professionals, Claude's standard consumer plans have similar limitations to ChatGPT — no BAA is available, and conversations may be used to improve models.

The Practical Alternative: De-identification

For most healthcare professionals — solo practitioners, small practices, individual clinicians — enterprise AI plans are cost-prohibitive or simply unavailable. The practical alternative is de-identification.

HIPAA's de-identification standard is well-defined: under the Safe Harbor method, remove all 18 categories of identifiers (names, dates, contact details, record numbers, and so on), and the resulting information falls outside HIPAA's scope entirely. De-identified data is not PHI. De-identified data can be processed by any AI tool without HIPAA concerns.

This is the approach Snitch takes. Patient identifiers are automatically removed and replaced with tokens before your prompt reaches Claude. The AI never sees PHI. HIPAA's restrictions on third-party processing don't apply.
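To make the token-replacement idea concrete, here is a toy Python sketch — purely illustrative, not Snitch's actual implementation. It covers only three identifier types with simple regexes; real Safe Harbor de-identification must handle all 18 categories and is a job for purpose-built tooling. The point is the shape of the approach: identifiers are swapped for tokens before any text leaves the machine, and the mapping stays local.

```python
import re

# Toy patterns for three identifier types. A real de-identifier must
# cover all 18 Safe Harbor categories (names, addresses, dates, MRNs,
# phone numbers, emails, and more) with far more robust matching.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d+\b"),
}

def deidentify(text: str) -> tuple[str, dict[str, str]]:
    """Replace matched identifiers with numbered tokens.

    Returns the cleaned text plus a token-to-original mapping that
    never leaves the local machine, so responses can be re-identified
    after they come back from the AI.
    """
    mapping: dict[str, str] = {}
    counter = 0

    def substitute(kind: str):
        def repl(match: re.Match) -> str:
            nonlocal counter
            counter += 1
            token = f"[{kind}_{counter}]"
            mapping[token] = match.group(0)
            return token
        return repl

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(substitute(kind), text)
    return text, mapping

note = "Pt seen 03/14/2026, MRN: 48213, callback 555-867-5309."
clean, mapping = deidentify(note)
print(clean)  # identifiers replaced by tokens; originals stay in `mapping`
```

Only the tokenised text would be sent to the AI; the mapping lets the local tool restore the originals in the response.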

De-identification is not a workaround — it's a HIPAA-sanctioned approach to working with health information without triggering PHI protections. The HHS Office for Civil Rights (OCR) explicitly recognises de-identified data as outside HIPAA's scope.

The Bottom Line

For most healthcare professionals, the practical answer to "is ChatGPT HIPAA compliant?" is: not in a way that's accessible to you. Enterprise plans with BAAs exist but are expensive and complex. The realistic, compliant path for individual practitioners and small practices is de-identification before using any AI tool.

HIPAA-safe AI without enterprise pricing.

Snitch automatically de-identifies patient data before it reaches Claude. No BAA needed. Starts at $35/month.

Start your free trial →