Healthcare is the sector where AI governance failures have the highest stakes. A misconfigured AI tool that receives Protected Health Information without a Business Associate Agreement creates HIPAA liability, potential HITECH breach notification obligations, and civil monetary penalties that scale with negligence.
"Is ChatGPT HIPAA compliant?" is one of the most-searched AI questions among healthcare teams. The short answer is: not by default, and not in its consumer form. This guide covers exactly what healthcare startups need to use AI legally and how to build a minimum-compliant setup.
The BAA Requirement — Why Most AI Tools Are Off-Limits
HIPAA requires that any "business associate" — a vendor that creates, receives, maintains, or transmits Protected Health Information on your behalf — sign a Business Associate Agreement with you before accessing PHI.
PHI includes any combination of health information with identifiers: name, date of birth, email address, phone number, address, insurance ID, or any other identifier that could link the health information to a specific person. A patient's diagnosis alone is not PHI. A patient's diagnosis plus their date of birth is PHI.
When an employee submits a message to an AI tool, that message is transmitted to the AI vendor's servers for processing. If the message contains PHI, the AI vendor is receiving PHI — and becomes a business associate who requires a BAA.
Without a BAA, using an AI tool with PHI is a HIPAA violation, regardless of the vendor's security practices.
Which AI Tools Have a BAA Available
| AI Tool | BAA Available | Notes |
|---|---|---|
| ChatGPT.com (Free, Plus, Team) | No | Never use with PHI |
| ChatGPT Enterprise | Yes (OpenAI BAA) | Requires enterprise contract |
| OpenAI API | Yes (OpenAI BAA) | Available on request for eligible use cases |
| Claude.ai (all plans) | No | Never use with PHI |
| Claude via AWS Bedrock | Yes (AWS HIPAA BAA) | Bedrock must be in HIPAA-eligible services list |
| Azure OpenAI Service | Yes (Microsoft HIPAA BAA) | Covered under Microsoft's enterprise BAA |
| Google Cloud Vertex AI | Yes (Google Cloud HIPAA BAA) | Requires Google Cloud agreement |
| AWS Bedrock | Yes (AWS HIPAA BAA) | AWS standard HIPAA BAA covers Bedrock |
| Google Gemini (consumer) | No | Never use with PHI |
| Microsoft 365 Copilot | Yes (Microsoft HIPAA BAA) | Healthcare organizations with Microsoft BAA |
| Nuance Dragon Medical | Yes | Purpose-built for clinical documentation |
The pattern is consistent: consumer products from any vendor have no BAA and cannot be used with PHI. Enterprise API and cloud platform access — where you sign a formal agreement — is where BAAs are available.
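To make the covered path concrete, here is a minimal sketch of routing a prompt through AWS Bedrock with boto3. The model ID is illustrative, and the sketch assumes your AWS agreement's HIPAA BAA is signed and that Bedrock appears on the HIPAA-eligible services list; it is a pattern, not a compliance guarantee.

```python
# Minimal sketch: send a prompt through a BAA-covered endpoint
# (AWS Bedrock). Assumes the AWS HIPAA BAA is signed and that
# Bedrock is confirmed HIPAA-eligible under your agreement.
import json

import boto3

# bedrock-runtime is the data-plane client used for model invocation
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def summarize_note(note_text: str) -> str:
    """Summarize a clinical note via a Bedrock-hosted model.

    Only pass PHI here if your BAA covers Bedrock; otherwise
    de-identify note_text first.
    """
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 500,
        "messages": [
            {"role": "user",
             "content": f"Summarize this clinical note:\n\n{note_text}"},
        ],
    })
    response = client.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example ID
        body=body,
    )
    result = json.loads(response["body"].read())
    return result["content"][0]["text"]
```

The same pattern applies to Azure OpenAI or Vertex AI: the governance point is that prompts containing PHI only ever flow to an endpoint named in a signed BAA.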
What Counts as PHI in an AI Prompt
Employees often assume that removing a patient's name from a prompt makes it safe. It does not. PHI includes health information combined with any of the 18 identifiers defined in HIPAA's Safe Harbor de-identification standard (45 CFR § 164.514(b)):
- Patient name
- Geographic data smaller than state (address, city, zip code)
- Dates (other than year): birth date, admission date, discharge date
- Phone numbers
- Fax numbers
- Email addresses
- Social Security numbers
- Medical record numbers
- Health plan beneficiary numbers
- Account numbers
- Certificate or license numbers
- Vehicle identifiers and serial numbers (including license plates)
- Device identifiers
- Web URLs
- IP addresses
- Biometric identifiers (fingerprints, voice prints)
- Full-face photos or comparable images
- Any other unique identifying number, characteristic, or code
Practical rule for AI prompts: If a human could identify a specific patient from the information in the prompt, it is PHI. "A 47-year-old male with Type 2 diabetes in Austin, TX admitted on March 14" is PHI even without a name.
Instruct employees: when using AI for clinical summarization, documentation, or coding, use only de-identified data or a BAA-covered tool configured for PHI. A rough automated pre-screen, like the sketch below, can catch obvious identifier patterns before a prompt leaves your network.
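As an illustration, a pattern-based pre-screen can flag identifier types that have recognizable shapes. This is a sketch only: regexes catch things like SSNs and dates, not names or contextual clues, so it supplements training and BAA-covered tooling rather than replacing Safe Harbor de-identification.

```python
# Illustrative pre-screen for pattern-like identifiers in an AI prompt.
# It cannot detect names or free-text re-identification risk, so treat
# a clean result as "nothing obvious found", never as "no PHI".
import re

IDENTIFIER_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "zip": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
    "mrn": re.compile(r"\bMRN[:#\s]*\d+\b", re.IGNORECASE),
}

def flag_identifiers(prompt: str) -> list[str]:
    """Return the identifier types detected in a prompt, if any."""
    return [name for name, pattern in IDENTIFIER_PATTERNS.items()
            if pattern.search(prompt)]

if flagged := flag_identifiers("Pt DOB 03/14/1977, MRN 8841220, T2DM"):
    print(f"Possible PHI detected ({', '.join(flagged)}): "
          "route to a BAA-covered tool or de-identify first.")
```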
The EU AI Act: Healthcare AI Is High-Risk by Default
Healthcare startups building AI products (not just using productivity AI) face an additional regulatory layer: the EU AI Act classifies certain healthcare AI as high-risk.
High-risk healthcare AI under the Act (whether via the Annex III use cases or the medical-device route) includes:
- AI intended to support clinical diagnosis
- AI used for triage (prioritizing patients by urgency)
- AI systems that influence treatment or medication decisions
- Medical devices that incorporate AI (also subject to EU MDR/IVDR)
High-risk AI systems are not banned — they are permitted with stricter requirements:
| Requirement | What It Means in Practice |
|---|---|
| Conformity assessment | Internal self-assessment or notified-body review, depending on the system type |
| Technical documentation | Architecture, training data, accuracy benchmarks on file |
| Human oversight | A qualified human must be able to review and override AI decisions (sketched below) |
| Transparency | Users must know they are interacting with AI |
| Accuracy and robustness testing | Testing across relevant patient populations |
| EU database registration | Register before deployment in the EU |
| Post-market monitoring | Ongoing performance monitoring after deployment |
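Of these requirements, human oversight translates most directly into system design. A minimal, hypothetical sketch of the pattern follows: the model produces a proposal, and nothing is acted on until a qualified clinician has reviewed it. All names here are illustrative.

```python
# Hypothetical human-oversight gate for an AI triage suggestion.
# The AI output is a proposal until a clinician confirms or overrides it.
from dataclasses import dataclass

@dataclass
class TriageProposal:
    patient_id: str
    ai_priority: str                    # e.g. "urgent" or "routine"
    ai_rationale: str
    final_priority: str | None = None   # set by the reviewing clinician
    reviewed_by: str | None = None      # clinician identifier

def apply_triage(proposal: TriageProposal) -> str:
    """Refuse to act on any proposal a clinician has not reviewed."""
    if proposal.reviewed_by is None:
        raise RuntimeError("AI triage proposal requires clinician review")
    # The clinician's decision wins, including when it overrides the AI.
    return proposal.final_priority or proposal.ai_priority
```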
The compliance deadline for high-risk AI systems under the EU AI Act is August 2, 2026 (August 2, 2027 for AI embedded in regulated medical devices). Healthcare startups deploying diagnostic or triage AI in the EU should plan against the earlier date.
If your AI product is integrated into a medical device regulated under the EU Medical Device Regulation (MDR) or In Vitro Diagnostic Regulation (IVDR), additional conformity requirements from those regulations apply alongside the EU AI Act.
The Minimum Compliant Setup for a Healthcare Startup
If your healthcare startup uses AI tools for internal operations (documentation, coding, research) — not clinical decision-making — here is the minimum governance setup:
1. BAA with your AI vendor. For any tool that will touch PHI. If you use Azure OpenAI, ensure your Microsoft agreement includes the HIPAA BAA. If you use AWS Bedrock, confirm Bedrock is listed as a HIPAA-eligible service in your BAA.
2. PHI boundary documentation. Write down which AI tools may receive PHI (those with BAAs) and which may not (all consumer tools). Include this in your AI acceptable use policy and your HIPAA policies and procedures; a machine-readable version of the same boundary is sketched after this list.
3. Employee training. Add AI-specific content to your HIPAA training: what counts as PHI in a prompt, which tools have BAAs, and what to do if PHI is accidentally submitted to a non-BAA tool.
4. Incident response plan for AI-related breaches. If an employee submits PHI to a non-BAA AI tool, this is a potential breach event. Your existing HIPAA breach notification process should cover AI-triggered disclosures — confirm it does.
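To make step 2 enforceable rather than merely documented, the PHI boundary can be encoded as data that your internal tooling checks at call time. A minimal sketch with illustrative tool names; populate the flags from your actual vendor agreements.

```python
# Sketch of the PHI boundary from step 2 as enforceable data.
# Tool names and BAA flags are illustrative placeholders.
PHI_ALLOWED_TOOLS = {
    "azure-openai": True,       # covered under Microsoft HIPAA BAA
    "aws-bedrock": True,        # covered under AWS HIPAA BAA
    "chatgpt-consumer": False,  # no BAA; never use with PHI
    "claude-ai": False,         # no BAA; never use with PHI
}

def check_tool(tool: str, prompt_contains_phi: bool) -> None:
    """Block PHI from reaching any tool without a signed BAA."""
    if prompt_contains_phi and not PHI_ALLOWED_TOOLS.get(tool, False):
        raise PermissionError(
            f"{tool} has no BAA on file: de-identify the prompt "
            "or switch to a BAA-covered tool."
        )
```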
If your startup is building AI-powered clinical tools for EU deployment, start your conformity assessment now. August 2026 is not far, and conformity assessments for high-risk AI take months.
Using AI in a healthcare context? Start with the Compliance Quiz to see which specific regulations apply to your team, then use the AI Risk Assessment to rate your AI use cases from Low to Critical.
