Healthcare AI sits at the intersection of three regulatory frameworks: HIPAA governs any AI that processes patient health information, the EU AI Act classifies clinical decision support as high-risk, and FDA device rules apply when AI makes or informs clinical decisions. For small healthcare teams and health tech startups, understanding where these frameworks overlap — and where they conflict — is the first compliance step.
At a glance: Any AI processing PHI needs a signed HIPAA BAA with the vendor. Clinical decision support AI that informs (not autonomously makes) diagnoses is generally not FDA-regulated, but AI that makes near-autonomous clinical decisions is Software as a Medical Device (SaMD). EU AI Act high-risk classification applies to clinical AI affecting EU patients, triggering conformity assessment, human oversight, and registration requirements. The most common small-team violation: using a consumer AI tool with patient data without a BAA.
The Three Regulatory Layers
Healthcare AI is uniquely complex because it triggers multiple regulatory frameworks that were designed independently and are not fully harmonized.
| Framework | Trigger | Primary Obligation |
|---|---|---|
| HIPAA | AI processes Protected Health Information (PHI) | BAA with vendor, PHI data controls, minimum necessary standard |
| FDA SaMD | AI makes or informs clinical decisions | 510(k) clearance or De Novo review for high-risk SaMD; post-market surveillance |
| EU AI Act | AI used in safety components of medical products or clinical decision support affecting EU patients | High-risk conformity assessment, CE marking, EU AI database registration |
| EU MDR/IVDR | AI classified as a medical device in EU | Technical documentation, clinical evaluation, notified body involvement |
A single clinical AI tool may trigger several of these frameworks at once. A small health tech startup building an AI symptom checker for EU patients faces HIPAA (if it has US patients or US operations), the EU AI Act (clinical decision support), and EU MDR (if the tool is classified as a medical device).
HIPAA: What AI Vendors Need Before They Touch Patient Data
HIPAA's Business Associate rule applies to any vendor — including AI vendors — that creates, receives, maintains, or transmits PHI on behalf of a covered entity or business associate.
What counts as PHI in an AI context
PHI is individually identifiable health information. In AI prompts and outputs, this includes:
- Patient names combined with any health information
- Medical record numbers
- Dates of service, admission, or discharge combined with other identifiers
- Symptoms, diagnoses, treatment plans attributed to an identifiable person
- Insurance claim data
- Any AI output that could be linked back to an individual patient
Pseudonymized patient data that could be re-identified remains PHI under HIPAA. Replacing a patient name with "Patient A" in a prompt does not de-identify the record if the prompt still includes DOB, diagnosis, and zip code.
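A minimal pre-flight check can catch the most obvious identifiers before a prompt leaves your systems. This is a sketch: the regex patterns below cover only a few identifier types and are no substitute for a vetted de-identification tool or HIPAA's Safe Harbor analysis.

```python
import re

# Illustrative patterns for a few common identifier types. NOT exhaustive:
# real de-identification needs a vetted tool, not ad-hoc regex.
PHI_PATTERNS = {
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "zip_code": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
}

def scan_for_phi(prompt: str) -> list[str]:
    """List the identifier types present in a prompt before it is sent anywhere."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

flags = scan_for_phi("Patient A, DOB 01/15/1972, zip 02139, presents with chest pain")
# flags -> ["date_of_birth", "zip_code"]
```

Note that the "Patient A" example above still trips the scanner: the name is gone, but the DOB and zip code alone keep the prompt in PHI territory.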
BAA Checklist for AI Vendors
Before sending any PHI to an AI API, verify:
- Vendor explicitly offers a HIPAA Business Associate Agreement
- BAA has been executed (signed by both parties) — not just acknowledged in terms of service
- BAA covers the specific AI service being used (not just other products the vendor offers)
- BAA specifies permitted uses: processing PHI only for the purposes you define, not for training
- BAA includes breach notification requirements (60-day window under HIPAA)
- Vendor confirms PHI is not used to train or improve their AI models
- Data retention and deletion terms are specified
AI vendors offering HIPAA BAAs: Anthropic (enterprise), Microsoft Azure OpenAI (via Microsoft BAA), Google Cloud Vertex AI (via Google Cloud BAA). OpenAI API offers a BAA for enterprise customers — confirm with your account team.
Consumer products that cannot be used with PHI: Claude.ai (free/pro), ChatGPT (free/Plus), Google Gemini (consumer). These products do not offer BAAs.
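One way to enforce the BAA distinction in code is a fail-closed gate: PHI-bearing requests are allowed only through services where every BAA condition on the checklist above holds. The registry below is a sketch with hypothetical service names, assuming your organization tracks executed BAAs internally.

```python
# Hypothetical internal registry: which AI services have an executed BAA
# covering this specific product, with training on PHI contractually excluded.
# Service names are placeholders, not real endpoints.
BAA_REGISTRY = {
    "azure-openai-prod": {"baa_signed": True, "covers_this_service": True,
                          "no_training_on_phi": True},
    "consumer-chatbot":  {"baa_signed": False, "covers_this_service": False,
                          "no_training_on_phi": False},
}

def may_send_phi(service_name: str) -> bool:
    """Fail closed: PHI may leave only via services where every condition holds."""
    terms = BAA_REGISTRY.get(service_name)
    return terms is not None and all(terms.values())

assert may_send_phi("azure-openai-prod")
assert not may_send_phi("consumer-chatbot")   # no BAA: PHI must not be sent
assert not may_send_phi("unknown-vendor")     # unregistered services fail closed
```

Failing closed on unregistered services also surfaces shadow AI: any new tool must be added to the registry (and its BAA status verified) before it can receive patient data.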
Minimum Necessary Standard in AI Prompts
HIPAA requires that PHI be disclosed only to the minimum extent necessary for the purpose. In AI prompts, this means:
- Do not include patient identifiers in the prompt unless the AI needs them to complete the task
- Replace identifiers with tokens where possible (send "Patient ID: 12345" not "John Smith, DOB: 01/15/1972")
- Map tokens back to identifiers after receiving the AI response in your system
- Do not include the full medical record if the AI only needs the diagnosis history
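The token-mapping step described above can be sketched as a small helper that swaps identifiers for opaque tokens before the prompt leaves your system and restores them in the response. The class and its behavior are illustrative assumptions, and a real system would also need to catch identifiers the caller forgot to list.

```python
import itertools

class IdentifierTokenizer:
    """Replace direct identifiers with opaque tokens before a prompt leaves
    the system, and restore them in the AI response afterwards. A sketch:
    production use needs systematic identifier detection, not a manual list."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._token_to_value = {}

    def redact(self, text: str, identifiers: list[str]) -> str:
        for value in identifiers:
            token = f"[ID-{next(self._counter)}]"
            self._token_to_value[token] = value
            text = text.replace(value, token)
        return text

    def restore(self, text: str) -> str:
        for token, value in self._token_to_value.items():
            text = text.replace(token, value)
        return text

tok = IdentifierTokenizer()
safe_prompt = tok.redact("Summarize history for John Smith, MRN 4471920.",
                         ["John Smith", "4471920"])
# safe_prompt -> "Summarize history for [ID-1], MRN [ID-2]."
```

The token map stays inside your system; only the redacted prompt crosses the API boundary, and `restore` re-links the response to the patient record locally.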
FDA: When Does Clinical AI Become a Medical Device?
The FDA regulates Software as a Medical Device (SaMD) — software that is intended to be used for a medical purpose without being part of a hardware device.
The Clinical Decision Support Distinction
The 21st Century Cures Act (2016) created a carve-out: clinical decision support software is NOT regulated as a medical device when it meets all four criteria:
- Not intended to acquire, process, or analyze a medical image, a signal from an in vitro diagnostic device, or a pattern or signal from a signal acquisition system
- Intended to display, analyze, or print medical information about a patient or other medical information (such as clinical practice guidelines and peer-reviewed clinical studies)
- Intended to support or provide recommendations to a healthcare professional about prevention, diagnosis, or treatment of a disease or condition
- Intended to enable the clinician to independently review the basis for the recommendations — meaning the software displays the underlying data and reasoning, not just the conclusion
If all four are met: generally not regulated as a medical device.
If the fourth criterion fails — if the AI gives a recommendation without displaying the reasoning in a way the clinician can meaningfully evaluate — it may be regulated as SaMD.
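The four-criteria test above can be sketched as a screening helper. The parameter names paraphrase the statutory criteria, and a `False` result is a prompt for regulatory review, not a legal determination.

```python
def cds_carve_out_applies(*, analyzes_image_or_signal: bool,
                          displays_medical_information: bool,
                          recommends_to_clinician: bool,
                          clinician_can_review_basis: bool) -> bool:
    """Screening helper for the four Cures Act CDS criteria (paraphrased).
    All four must hold; False means the tool may be regulated as SaMD."""
    return (not analyzes_image_or_signal
            and displays_medical_information
            and recommends_to_clinician
            and clinician_can_review_basis)
```

For example, a guideline-based recommendation tool that shows its sources to the physician passes; the same tool with its reasoning hidden fails the fourth criterion, and any tool analyzing images or signals fails the first regardless of how transparent it is.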
FDA SaMD Risk Classification
FDA's risk framing for SaMD follows the IMDRF categorization, which works on two axes: the state of the healthcare situation (critical, serious, non-serious) and the significance of the information the software provides (treat or diagnose, drive clinical management, inform clinical management).
| Significance | Critical Situation | Serious Situation | Non-Serious |
|---|---|---|---|
| Treat or diagnose | IV (highest) | III | II |
| Drive clinical management | III | II | I |
| Inform clinical management | II | I | I (lowest) |
These IMDRF categories do not map one-to-one onto FDA device classes, but the FDA pathways track risk: Class III requires PMA (Premarket Approval), the most stringent pathway; Class II requires 510(k) clearance or De Novo review; most Class I devices are exempt from premarket notification.
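The two-axis IMDRF matrix (risk categories I, lowest, through IV, highest) reduces to a simple lookup. The key strings below are this sketch's own shorthand, not official vocabulary.

```python
# IMDRF SaMD risk categorization: category determined by healthcare situation
# and significance of the information the software provides.
IMDRF_CATEGORY = {
    ("critical",    "treat_or_diagnose"): "IV",
    ("critical",    "drive"):             "III",
    ("critical",    "inform"):            "II",
    ("serious",     "treat_or_diagnose"): "III",
    ("serious",     "drive"):             "II",
    ("serious",     "inform"):            "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive"):             "I",
    ("non_serious", "inform"):            "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Look up the IMDRF risk category (I = lowest risk, IV = highest)."""
    return IMDRF_CATEGORY[(situation, significance)]
```

An AI diagnosing a critical condition lands in category IV, while one merely informing management of a non-serious condition lands in category I, which is why the same underlying model can face very different regulatory burdens depending on its intended use.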
Examples:
- AI that analyzes an ECG tracing and reports "atrial fibrillation detected": likely SaMD, because analyzing a physiological signal fails the first carve-out criterion even if a cardiologist reviews the tracing
- AI that suggests a treatment option to a physician based on lab values and clinical practice guidelines, displaying the guideline basis for the recommendation: may qualify for the CDS carve-out
- AI that sends a patient a push notification saying "Your readings suggest you should go to the ER now": likely SaMD, since it makes a clinical recommendation directly to a non-clinician
- AI documentation assistant that converts voice notes to structured SOAP notes: generally not SaMD
EU AI Act: High-Risk Classification for Clinical AI
Under the EU AI Act, an AI system is high-risk when it is a safety component of a product that requires third-party conformity assessment under EU harmonisation legislation (including medical devices under EU MDR/IVDR), or when it is itself such a product. Clinical decision support serving EU patients is therefore in scope whenever the underlying tool is regulated as a medical device, and AI used for emergency triage of patients is separately listed as high-risk in Annex III.
Conformity Assessment Requirements
For a high-risk clinical AI system:
- Risk management system — ongoing process to identify, analyze, and mitigate risks
- Data governance — training data must be relevant, representative, free from errors, and appropriate for the intended purpose
- Technical documentation — detailed description of the system, training methodology, performance metrics, and limitations
- Transparency and human oversight — the system must be interpretable to the degree necessary for human oversight; clinicians must be able to override
- Accuracy and robustness — documented performance metrics, including performance across demographic subgroups
- Post-market monitoring — ongoing data collection on real-world performance
EU AI Database Registration
High-risk AI systems must be registered in the EU database for high-risk AI systems, maintained by the European Commission, before being placed on the market or put into service. Registration includes: AI system description, intended purpose, geographic scope, and responsible party contact.
Practical Implementation: Small Healthcare Team Checklist
For a small clinical practice, health tech startup, or digital health company:
Immediate (before using any AI with patient data):
- Identify every AI tool that touches PHI — including documentation tools, transcription tools, clinical decision support tools, and coding/billing AI
- Verify BAA status for each: either obtain a signed BAA or remove PHI from all use cases for that tool
- Check if any AI tool processes PHI without your knowledge (shadow AI in clinical workflows)
Within 30 days:
- Classify each clinical AI tool against FDA SaMD criteria — does the clinician review the reasoning (CDS carve-out) or only the conclusion (potential SaMD)?
- Assess EU AI Act applicability — are any EU patients in scope?
- Run an AI risk assessment for each clinical AI tool
- Document minimum necessary standard protocols for AI prompts containing patient data
Before any new clinical AI deployment:
- Conduct AI vendor due diligence including HIPAA-specific questions
- Obtain vendor documentation: BAA, privacy policy, data retention terms, training data opt-out confirmation
- For EU AI Act high-risk tools: obtain vendor's EU Declaration of Conformity
- Implement clinician override mechanism for any AI clinical recommendation
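The clinician-override requirement from the checklist above can be modeled as a record that never enters the chart while an AI suggestion is still pending. This is a hypothetical schema, not a standard; field names and the accept/modify/reject vocabulary are this sketch's own.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    """Hypothetical record: an AI clinical suggestion stays pending until a
    clinician explicitly accepts, modifies, or rejects it, and the basis for
    the suggestion is always stored alongside it for independent review."""
    text: str
    basis: str                       # underlying data/reasoning shown to the clinician
    status: str = "pending"          # pending -> accepted | modified | rejected
    final_text: Optional[str] = None

    def review(self, action: str, revised_text: Optional[str] = None) -> None:
        if action not in {"accepted", "modified", "rejected"}:
            raise ValueError(f"unknown review action: {action}")
        self.status = action
        if action == "accepted":
            self.final_text = self.text
        elif action == "modified":
            self.final_text = revised_text
        else:
            self.final_text = None   # rejected suggestions produce no record entry

rec = AIRecommendation(text="Start anticoagulation",
                       basis="Irregular rhythm on 3 of 5 readings; CHA2DS2-VASc = 4")
rec.review("modified", "Start anticoagulation after bleeding-risk review")
```

Storing the `basis` with every suggestion does double duty: it supports the Cures Act requirement that clinicians can review the reasoning, and it feeds the EU AI Act's human-oversight and post-market monitoring obligations.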
References
- FDA — Software as a Medical Device (SaMD): fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd
- FDA — 21st Century Cures Act CDS guidance
- HHS — HIPAA Business Associate guidance: hhs.gov/hipaa/for-professionals/privacy/guidance/business-associates
- EU AI Act — Article 6 and Annex I (high-risk classification of AI safety components in products covered by EU harmonisation legislation, including medical devices)
- EU MDR (Regulation 2017/745) and IVDR (Regulation 2017/746)
- Related: Privacy-First AI APIs — which AI API vendors offer BAAs and what their PHI data handling commitments look like
- Related: AI Vendor Due Diligence Checklist — use Section 3 (healthcare data) for clinical AI vendor assessment
- Related: AI Risk Assessment for Small Teams — use for clinical AI risk scoring before deployment
- Related: AI Governance for Small Teams: Complete Guide — full framework covering all sectors, with the master implementation checklist
