HIPAA and AI: Can Healthcare Professionals Use ChatGPT?
Healthcare professionals are increasingly turning to AI tools to help with clinical documentation, patient communication, research summaries, and administrative tasks. The efficiency gains are real and significant. But so is the HIPAA risk — and most healthcare workers don't fully understand where the line is.
The short answer: using the standard consumer versions of ChatGPT or Claude with patient information is almost certainly a HIPAA violation. Here's why, and what you can do about it.
What Makes PHI So Sensitive Under HIPAA
Protected Health Information (PHI) is defined broadly under HIPAA: any information relating to a patient's health condition, treatment, or payment, combined with anything that could identify the individual. This includes obvious identifiers like names, dates of birth, and Social Security numbers, but also less obvious ones like geographic data, dates of service, and medical record numbers.
HIPAA's Privacy Rule restricts how PHI can be used and disclosed. Sharing PHI with a third party — including an AI company — generally requires either patient authorisation or a Business Associate Agreement (BAA).
The BAA Requirement
A Business Associate Agreement is a contract between a covered entity and a business associate that handles PHI on their behalf. If an AI tool processes patient data as part of your workflow, the AI company is arguably a business associate — and you need a BAA with them.
What Healthcare Professionals Are Actually Doing
In practice, many healthcare professionals are using AI tools in ways that clearly involve PHI — dictating clinical notes that include patient names, asking AI to draft letters that reference diagnoses, using AI to summarise medical records. Most are doing this without awareness of the HIPAA implications.
This is creating significant compliance exposure. OCR (the HHS Office for Civil Rights, which enforces HIPAA) has been stepping up enforcement around data security and third-party data sharing, and AI tool use falls squarely within its area of focus.
The De-identification Solution
HIPAA offers a clear path: de-identified data is not PHI and is not subject to HIPAA's restrictions. Under the Safe Harbor method, if you remove all 18 categories of HIPAA identifiers from patient information before it enters an AI tool, and have no actual knowledge that the remaining information could identify the patient, the data falls outside HIPAA's scope.
The 18 identifiers include names, geographic subdivisions smaller than a state, dates (other than year), telephone numbers, email addresses, Social Security numbers, medical record numbers, health plan beneficiary numbers, and others. Removing all of these from a prompt before it reaches an AI tool is what takes the transmission outside HIPAA's restrictions.
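To make this concrete, here is a minimal sketch of pattern-based scrubbing in TypeScript. It is a hypothetical illustration, not Snitch's implementation: it covers only a few of the easier categories, and the `deidentify` function, token format, and patterns are assumptions made for the example.

```typescript
// Hypothetical, simplified sketch: pattern-based scrubbing of a few
// identifier categories. Real Safe Harbor de-identification must cover
// all 18 categories, and names in free text generally need trained
// entity recognition rather than a fixed pattern list.

type TokenMap = Map<string, string>;

// Illustrative patterns only. SSNs, phones, emails, dates, and MRNs
// are pattern-friendly; many of the 18 identifiers are not.
const PATTERNS: { label: string; regex: RegExp }[] = [
  { label: "SSN",   regex: /\b\d{3}-\d{2}-\d{4}\b/g },
  { label: "PHONE", regex: /\b\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b/g },
  { label: "EMAIL", regex: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g },
  { label: "DATE",  regex: /\b\d{1,2}\/\d{1,2}\/\d{2,4}\b/g },
  { label: "MRN",   regex: /\bMRN[:\s]*\d{6,10}\b/gi },
];

// Replace each match with a structured token and record the mapping.
// Only the tokenised text should ever leave the local machine.
function deidentify(text: string): { clean: string; tokens: TokenMap } {
  const tokens: TokenMap = new Map();
  let counter = 0;
  let clean = text;
  for (const { label, regex } of PATTERNS) {
    clean = clean.replace(regex, (match) => {
      const token = `[${label}_${++counter}]`;
      tokens.set(token, match);
      return token;
    });
  }
  return { clean, tokens };
}
```

Names and geographic details resist simple patterns, which is why production de-identification leans on trained recognition models rather than regular expressions alone.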
This is exactly what Snitch does automatically. You work naturally, including patient names and other identifying details. Before anything leaves your browser, Snitch identifies and replaces those identifiers with structured tokens. The AI works with the de-identified prompt. Your browser restores the real values in the response.
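Continuing the hypothetical sketch above, the restore step is the inverse: once the response comes back, each token is swapped for its real value using the mapping that never left the machine. Again, this illustrates the general round trip, not Snitch's actual code.

```typescript
// Swap each token in the AI's response back for its original value.
// The token map stayed local, so the model only ever saw the
// de-identified text.
function restore(response: string, tokens: TokenMap): string {
  let result = response;
  for (const [token, original] of tokens) {
    result = result.split(token).join(original); // replace every occurrence
  }
  return result;
}

// Round trip: de-identify before sending, restore after receiving.
const { clean, tokens } = deidentify(
  "Pt MRN: 12345678, seen 03/14/2025, callback 555-867-5309."
);
// clean === "Pt [MRN_3], seen [DATE_2], callback [PHONE_1]."
// ...send `clean` to the model, get back `aiResponse`...
// const readable = restore(aiResponse, tokens);
```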
Practical Steps for Healthcare Professionals
- If you don't have a BAA in place, stop using consumer AI tools with patient data immediately.
- Assess your current workflow — what patient information is entering AI tools?
- Implement de-identification for any patient data that will be processed by AI.
- Document your approach — in any investigation, OCR looks more favourably on documented, good-faith compliance efforts.
- Train your team — HIPAA compliance around AI needs to be a team policy, not an individual practice.
HIPAA-safe AI for healthcare.
Snitch automatically de-identifies patient information before it reaches Claude. No BAA needed. No PHI exposure. Full productivity.
Start your free trial →