GDPR and CCPA apply the moment an AI tool processes personal data — which happens the moment an employee pastes a customer name, email, or support ticket into a prompt.
Which AI tools are GDPR-compliant for business data?
| AI Tool | DPA available? | Trains on data? | CCPA service provider? |
|---|---|---|---|
| Claude API (Anthropic) | ✅ Yes — self-serve | ❌ No | ✅ Yes |
| Azure OpenAI Service | ✅ Yes — Microsoft DPA | ❌ No | ✅ Yes |
| Vertex AI (Google Cloud) | ✅ Yes — Google Cloud DPA | ❌ No | ✅ Yes |
| OpenAI API (direct) | ✅ Yes — platform.openai.com/privacy | ❌ No (since Mar 2023) | ✅ Yes |
| Mistral AI API | ✅ Yes — EU-native | ❌ No | ✅ Yes |
| ChatGPT (free/Pro) | ❌ No DPA | ⚠️ May be used for training | ❌ No |
| Claude.ai (free/Pro) | ❌ No DPA | ⚠️ May be used for safety | ❌ No |
| Google AI Studio | ❌ No DPA | ⚠️ Yes by default | ❌ No |
| Grammarly (Business) | ✅ Yes | ❌ No | ✅ Yes |
Rule: if the tool is consumer-tier (free or personal Pro), assume no DPA and do not use it with personal data.
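Where a DPA is missing, the safest control is to keep personal data out of the prompt entirely. As a purely illustrative stopgap, here is a minimal Python redaction sketch using regex patterns for obvious identifiers; the patterns and placeholder format are assumptions, and a real deployment should use a dedicated PII-detection library rather than hand-rolled regexes.

```python
import re

# Minimal redaction sketch for prompts bound for tools without a DPA.
# These regexes catch only obvious identifiers (emails, phone numbers);
# treat this as a stopgap, not a compliance control.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Ticket from jane.doe@example.com, call +44 20 7946 0958"))
# -> Ticket from [EMAIL REDACTED], call [PHONE REDACTED]
```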
At a glance:
- Sign a DPA with every AI vendor that handles personal data — free-tier tools almost never offer one.
- Enable training opt-out on all tools handling proprietary or customer data.
- Run a Data Protection Impact Assessment before deploying any AI that profiles, scores, or makes automated decisions about individuals.
- GDPR Article 22 gives EU residents the right not to be subject to solely automated decisions with significant effects — AI hiring, credit scoring, and AI pricing all trigger this right.
This guide covers every privacy obligation that applies when small teams use AI: DPA requirements, automated decision-making rules, DPIA triggers, employee data considerations, data subject rights, and what each major AI vendor's privacy terms actually say.
Core Concepts
Personal data (GDPR) / personal information (CCPA): Any data that identifies or can identify a natural person. Names, email addresses, IP addresses, location data, or any combination of data that could single out an individual. Under GDPR this definition is broad — pseudonymized data that could be re-identified still counts.
Data controller: Your organization. You decide why and how personal data is processed. You are responsible for compliance.
Data processor: A vendor that processes data on your instructions. AI tool providers (OpenAI, Anthropic, Google, Microsoft) are processors when you send them personal data to process on your behalf.
Data Processing Agreement (DPA): The contract required between controller and processor under GDPR Article 28. Without one, using an AI tool for personal data is non-compliant.
Special category data: Under GDPR, health data, biometric data, genetic data, racial/ethnic origin, political opinions, religious beliefs, sexual orientation, and trade union membership are "special categories" requiring explicit consent or specific legal bases for processing. AI systems that infer or process any of these categories face stricter requirements.
The Three-Question Test
Before using any AI tool with any data:
1. Does this data include personal information? If yes — names, emails, phone numbers, customer IDs, any data linked to an identifiable person — privacy rules apply.
2. Do we have a DPA with this vendor?
- Enterprise/paid plan, DPA signed: Likely compliant (check training opt-out and retention terms)
- Paid plan, no DPA signed yet: Request one before using personal data
- Free tier: Almost certainly no DPA — do not use personal data
3. Is there a lawful basis for this processing? Under GDPR, you need a documented legal reason to process personal data. For most AI use cases: legitimate interests (genuine business reason, does not override individual rights) or contract performance (necessary to deliver a service the person signed up for). Document your lawful basis in writing.
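A minimal sketch of this test as a pre-use gate, in Python. The Tool record and its fields are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    dpa_signed: bool
    free_tier: bool

def may_use(tool: Tool, has_personal_data: bool, lawful_basis: str | None) -> str:
    """Walk the three questions in order and return the outcome."""
    if not has_personal_data:
        return "OK: no personal data, privacy rules not triggered"
    if tool.free_tier or not tool.dpa_signed:
        return f"STOP: no DPA with {tool.name}; request one or strip personal data"
    if lawful_basis is None:
        return "STOP: document a lawful basis (e.g. legitimate interests) first"
    return f"OK: DPA in place, lawful basis recorded ({lawful_basis})"

# A free-tier tool with personal data fails at question 2:
print(may_use(Tool("ExampleAI", dpa_signed=False, free_tier=True),
              has_personal_data=True, lawful_basis=None))
```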
What GDPR Requires When Using AI Tools
DPA — The Non-Negotiable Starting Point
For any AI vendor processing personal data of EU residents on your behalf, you need a signed DPA. Most major AI vendors offer this on business plans:
- OpenAI: ChatGPT Team, Enterprise, and API — accept the data processing terms in the settings dashboard
- Anthropic (Claude): API usage terms include DPA provisions; enterprise agreements available
- Google (Gemini/Workspace AI): Covered under Google Workspace DPA for business accounts
- Microsoft (Copilot/Azure OpenAI): Covered under Microsoft's Online Services DPA for commercial accounts
Action: Audit every AI tool in your tool register. If it touches personal data and lacks a DPA, either get one or stop using personal data in that tool.
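One way to make that audit repeatable is a scriptable register. A minimal sketch, assuming a CSV with columns tool, handles_personal_data, and dpa_signed; the file and column names are placeholders for whatever your register actually uses:

```python
import csv

def compliance_gaps(register_path: str) -> list[str]:
    """Tools that touch personal data without a signed DPA."""
    gaps = []
    with open(register_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["handles_personal_data"] == "yes" and row["dpa_signed"] != "yes":
                gaps.append(row["tool"])
    return gaps

# Every tool returned here needs a DPA signed, or personal data kept out of it.
print(compliance_gaps("ai_tool_register.csv"))
```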
What a DPA Must Cover (Checklist)
A GDPR-compliant DPA with an AI vendor must specify:
- Subject matter and duration of the processing
- Nature and purpose of the processing
- Type of personal data and categories of data subjects
- Processor's obligation to process only on documented instructions
- Confidentiality obligations for persons authorized to process the data
- Technical and organizational security measures (Article 32)
- Conditions for engaging sub-processors (AI vendors typically use sub-processors)
- Assistance with data subject rights requests
- Assistance with security incidents and breach notification
- Deletion or return of data at end of service
- Training opt-out: Confirmation that personal data will not be used to train AI models
- Data transfer mechanisms (SCCs) if data is processed outside the EU
If any of these are missing from your vendor's DPA, request clarification or a supplementary agreement.
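To make that review consistent across vendors, the checklist can be tracked as data. A minimal sketch; the clause identifiers are shorthand for the items above, and the review itself remains a human reading of the contract:

```python
# Article 28 checklist items, as shorthand identifiers.
REQUIRED_CLAUSES = [
    "subject_matter_and_duration", "nature_and_purpose",
    "data_and_subject_categories", "documented_instructions_only",
    "confidentiality", "article_32_security", "sub_processor_conditions",
    "dsr_assistance", "breach_assistance", "deletion_or_return",
    "training_opt_out", "transfer_mechanism_sccs",
]

def dpa_gaps(clauses_found: set[str]) -> list[str]:
    """Return checklist items not located in the vendor's DPA."""
    return [c for c in REQUIRED_CLAUSES if c not in clauses_found]

# Example: a DPA that covers everything except training opt-out.
print(dpa_gaps(set(REQUIRED_CLAUSES) - {"training_opt_out"}))
# -> ['training_opt_out'] : request clarification or a supplementary agreement
```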
Training Opt-Out — The Overlooked Clause
Many AI vendors' default terms allow them to use your inputs to train future models. Under GDPR, using personal data for model training is a new processing purpose requiring either explicit consent or a fresh lawful basis.
Most enterprise plans include a training opt-out. Free tiers typically do not. Confirm training opt-out status for every tool that handles personal data. For a vendor-by-vendor comparison of default training policies and how to opt out, see the privacy-first AI API guide.
Cross-Border Data Transfers
If you are in the EU and your AI vendor processes data in the US, you need a valid data transfer mechanism. The EU-US Data Privacy Framework (2023) covers US companies that have self-certified. Standard Contractual Clauses (SCCs) are the most universally available fallback. Most major AI vendors' DPAs include SCCs — verify this when reviewing the DPA.
GDPR Article 22 — Automated Decisions
GDPR Article 22 is the provision small teams most frequently overlook, and it applies directly to AI-assisted decisions.
What Article 22 says: Data subjects have the right not to be subject to a decision based solely on automated processing — including profiling — which produces legal effects concerning them or similarly significantly affects them.
What "solely automated" means: A human who only sees the AI output (a score, a recommendation, a decision) and rubber-stamps it does not take the decision outside Article 22's scope. To count as meaningful human involvement, the reviewer must have the ability and the information to exercise independent judgment.
What "legal or significant effect" covers:
- Credit approval or denial
- Job application rejection
- Insurance pricing based on individual risk assessment
- Personalized pricing that materially differs from standard pricing
- Content moderation that removes access to a platform
- AI-scored performance reviews that affect employment
When Article 22 applies, you must:
- Inform the individual that an automated decision was made about them — in the privacy notice and at the point of the decision
- Provide meaningful information about the logic used — not technical documentation, but an explanation a non-technical person can understand
- Give the individual the right to request human review of the decision
- Give the individual the right to contest the decision and express their point of view
- Implement the human review — a human who has access to the relevant data and reasoning, not just the AI's output
Article 22 does not prohibit automated AI decisions — it requires safeguards around them. For sector-specific implementations of these safeguards (CFPB adverse action notices for credit decisions, EU AI Act human oversight for hiring AI), see the relevant sector guides.
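These safeguards are easier to operate with a decision log that records the notice, the plain-language logic summary, and any human review. A minimal sketch with illustrative field names (not a regulatory schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecision:
    subject_id: str
    decision: str                  # e.g. "application declined"
    logic_summary: str             # plain-language explanation for the individual
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    human_review_requested: bool = False
    human_review_outcome: str | None = None  # filled by a reviewer with full data access

DECISION_LOG: list[AutomatedDecision] = []

def request_human_review(subject_id: str) -> list[AutomatedDecision]:
    """Flag a subject's decisions for review by someone able to overrule the AI."""
    flagged = [d for d in DECISION_LOG if d.subject_id == subject_id]
    for d in flagged:
        d.human_review_requested = True
    return flagged
```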
Data Protection Impact Assessments (DPIA) for AI
A DPIA is required under GDPR Article 35 before any processing that is likely to result in high risk to the rights and freedoms of individuals.
Three situations that automatically require a DPIA:
- Systematic evaluation of personal aspects using automated processing — any AI that scores, profiles, or ranks individuals (credit scoring, AI hiring screening, employee productivity monitoring, AI-driven customer segmentation)
- Large-scale processing of special category data — health data, biometric data, genetic data, racial/ethnic origin, etc.
- Systematic monitoring of publicly accessible areas — facial recognition in public, AI surveillance systems
What a DPIA must cover:
- Description of the processing and its purposes
- Assessment of the necessity and proportionality of the processing
- Assessment of risks to the rights and freedoms of data subjects
- Measures to address those risks (technical and organizational controls)
For small teams deploying third-party AI tools, a DPIA does not mean building a system from scratch — it means documenting why the deployment is necessary, what risks it creates for individuals, and what controls you have implemented. A DPIA for an AI hiring tool can be 3–5 pages covering: what the AI evaluates, the accuracy and bias testing results, the human oversight mechanism, and the candidate notice.
Consult your supervisory authority if a DPIA reveals high residual risks that cannot be mitigated — deployment should be paused until those risks are addressed.
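A minimal screening sketch for the three automatic triggers; it flags when a DPIA is required but does not replace the assessment itself:

```python
def dpia_triggers(deployment: dict) -> list[str]:
    """Return which automatic Article 35 triggers a deployment hits."""
    checks = {
        "systematic evaluation/scoring of individuals": deployment["profiles_individuals"],
        "large-scale special category data": deployment["special_category_data"],
        "systematic monitoring of public areas": deployment["monitors_public_space"],
    }
    return [reason for reason, hit in checks.items() if hit]

# An AI hiring screener profiles individuals, so it needs a DPIA before deployment:
print(dpia_triggers({"profiles_individuals": True,
                     "special_category_data": False,
                     "monitors_public_space": False}))
```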
Employee Data and AI
Employee data has specific rules under GDPR and is often handled differently by national implementing laws (Germany, Netherlands, and France have strong works council requirements for employee monitoring).
High-risk employee AI use cases:
| Use Case | Risk | Requirement |
|---|---|---|
| AI productivity monitoring (keystrokes, screenshots, time tracking) | High | DPIA required; legal basis typically employment contract or legitimate interests with necessity test; works council consultation in many EU jurisdictions |
| AI-assisted performance reviews | High | Article 22 applies if AI output influences employment outcomes without meaningful human judgment |
| AI email/chat analysis for compliance | High | Explicit policy, proportionality requirement, data minimization |
| AI meeting transcription (internal) | Medium | Policy disclosure, retention limits, training opt-out |
| AI writing assistant (no personal data) | Low | Training opt-out preferred |
Works council rights: In many EU member states, introducing AI tools that affect working conditions, monitor employees, or assist in performance evaluation requires works council consultation and potentially co-determination. This is a national law issue — check your jurisdiction before deploying employee-facing AI tools.
Data Subject Rights in the AI Context
GDPR gives individuals specific rights over their personal data. When that data has been processed by AI tools, these rights become more complex to fulfill.
Right of Access (Article 15)
When a data subject requests what data you hold about them, you must include: the categories of data processed, the purpose of processing, any recipients (including AI vendors), and retention periods. If AI processing produced an output about that individual (a score, a decision, a summary), that output is also personal data subject to the access request.
Practical implication: You need to know which AI tools processed the individual's data and what outputs they produced. This requires maintaining logs of AI-assisted decisions that affected specific individuals.
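A minimal sketch of such a log and the query that compiles the AI-produced outputs for one individual. It continues the decision-log idea from the Article 22 section; the entries and field names are illustrative:

```python
from datetime import datetime, timezone

# Illustrative log of AI-assisted outputs, keyed by data subject.
DECISION_LOG: list[dict] = [
    {"subject_id": "subj-42", "tool": "ExampleScreenerAI",
     "output": "application shortlisted",
     "made_at": datetime(2025, 3, 1, tzinfo=timezone.utc)},
]

def access_report(subject_id: str) -> list[dict]:
    """AI-produced outputs about one individual, for the Article 15 response."""
    return [
        {"tool": d["tool"], "output": d["output"], "made_at": d["made_at"].isoformat()}
        for d in DECISION_LOG
        if d["subject_id"] == subject_id
    ]

print(access_report("subj-42"))
```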
Right to Erasure (Article 17)
If a data subject requests deletion of their data, and that data was sent to an AI vendor, you must also request deletion from the vendor. Most enterprise DPAs include a process for this. Confirm the deletion mechanism before the first DSAR arrives.
AI training data complication: If personal data was used to train or fine-tune an AI model before the deletion request, deletion from the model is technically complex or impossible. This is why training opt-out is essential — preventing data from entering the training pipeline is more practical than deleting it from a trained model.
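A minimal sketch of deletion propagation: one tracked request per vendor that received the subject's data, closed only when the vendor confirms deletion under the DPA. The ticket shape and vendor names are illustrative:

```python
from datetime import date

def open_vendor_deletions(subject_id: str, vendors: list[str]) -> list[dict]:
    """One deletion ticket per vendor that received the subject's data."""
    return [
        {"subject_id": subject_id, "vendor": vendor,
         "requested_on": date.today().isoformat(), "confirmed": False}
        for vendor in vendors
    ]

# Close each ticket only when the vendor confirms deletion under the DPA process.
tickets = open_vendor_deletions("subj-42", ["VendorA", "VendorB"])
print(tickets)
```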
Right Not to Be Subject to Automated Decisions (Article 22)
As covered above, individuals have the right to request human review of automated decisions with significant effects. Your process for handling this right must be documented and operational before deploying AI decision-making systems.
What CCPA Requires When Using AI Tools
CCPA (California Consumer Privacy Act, as amended by CPRA) applies to for-profit businesses meeting at least one threshold: annual gross revenue over $25 million, buy/sell/share personal information of 100,000+ consumers or households, or derive 50%+ of annual revenue from selling personal information.
Check Whether AI Use Qualifies as a "Sale" or "Share"
Under CCPA/CPRA, "sharing" personal information for cross-context behavioral advertising — or selling it for value — triggers disclosure and opt-out rights. If your AI vendor uses customer data for model training, this may qualify as sharing for CCPA purposes.
Action: Review the vendor's data use terms for CCPA language. If they "share" data, update your privacy policy to disclose it and implement opt-out mechanisms for California residents.
Service Provider Agreements
CCPA has an equivalent to GDPR's DPA — a Service Provider agreement — that restricts how the vendor can use your data. It must prohibit the vendor from using the data for their own commercial purposes, retaining it beyond what is necessary, or sharing it with third parties. Major AI vendors typically include CCPA Service Provider language in their enterprise agreements.
Automated Decision-Making Under CPRA
CPRA (effective 2023) directed the California Privacy Protection Agency to issue regulations governing opt-out rights for automated decision-making technology, including profiling for decisions with significant effects. This parallels GDPR Article 22. If your AI systems make automated decisions about California residents, implement an opt-out mechanism.
State Privacy Laws Beyond CCPA
Multiple US states have passed privacy laws with automated decision-making provisions:
| State | Law | Automated Decision Right |
|---|---|---|
| Virginia | VCDPA | Opt-out right for profiling with significant effects |
| Colorado | CPA | Opt-out right for profiling with significant effects |
| Connecticut | CTDPA | Opt-out right for profiling with significant effects |
| Texas | TDPSA | Opt-out right for targeted advertising and profiling |
| Oregon | OCPA | Opt-out right for profiling |
If your team operates across multiple US states, implement a general opt-out mechanism for automated profiling decisions — this is simpler than state-by-state compliance.
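A minimal sketch of that single mechanism: one registry consulted before any automated profiling decision, regardless of which state's law applies. In production this would be a persistent store rather than an in-memory set:

```python
# Illustrative cross-state opt-out registry.
PROFILING_OPT_OUTS: set[str] = set()

def record_opt_out(consumer_id: str) -> None:
    """Honor the opt-out under whichever state law the resident invokes."""
    PROFILING_OPT_OUTS.add(consumer_id)

def may_profile(consumer_id: str) -> bool:
    """Gate every automated profiling pipeline on this check."""
    return consumer_id not in PROFILING_OPT_OUTS

record_opt_out("consumer-123")
assert not may_profile("consumer-123")
```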
High-Risk AI Use Cases: Privacy Priority Matrix
| Use Case | GDPR Risk | CCPA Risk | What to Check |
|---|---|---|---|
| AI credit/lending decisions | Very High | High | DPA, Article 22, DPIA, adverse action notice |
| AI hiring screening | Very High | High | DPA, Article 22, DPIA, CCPA opt-out |
| AI customer support (with PII) | High | Medium | DPA, training opt-out, retention |
| AI meeting transcription (customer calls) | High | Medium | DPA, recording consent, retention |
| AI employee productivity monitoring | High | Low | DPIA, legal basis, works council |
| AI analysis of CVs/applications | High | High | DPA, Article 22, DPIA |
| AI chatbot (customer-facing) | High | Medium | DPA, privacy policy disclosure, opt-out |
| AI writing assistant (customer data in prompts) | Medium | Medium | DPA, training opt-out |
| AI code assistant (no personal data) | Low | Low | Training opt-out preferred |
| Internal AI tools (no personal data) | Low | Low | Training opt-out preferred |
Practical Steps: 30-Day Privacy Compliance Sprint
Week 1: Inventory and DPA status
1. Audit every AI tool in use — list every tool, whether personal data is processed, and DPA status
2. Flag free-tier tools with personal data as immediate compliance gaps
3. Request DPAs from vendors where missing — most can provide within days
Week 2: Training opt-out and data transfer
4. Enable training opt-out on all tools handling personal data
5. Verify data transfer mechanisms (SCCs) for EU data transferred to US vendors
6. Check vendor sub-processor lists — your data may pass through multiple parties
Week 3: Automated decisions and Article 22
7. Identify any AI tool that makes decisions about individuals (credit, hiring, pricing, access)
8. For each: implement individual notice, human review mechanism, and opt-out process
9. Document lawful basis and proportionality assessment
Week 4: DPIA and documentation
10. Run DPIA for any AI tool that profiles individuals, processes special category data, or monitors employees
11. Update privacy policy to disclose AI tool vendors as processors
12. Brief team on "no personal data in free-tier AI" rule
AI-Specific Privacy Risks Beyond the Standard Framework
Inference and re-identification: AI models can infer sensitive attributes (health conditions, political views) from seemingly innocuous data. Data that appears non-personal can become personal in combination with an AI's inference capabilities. This is a GDPR risk even when the inputs appear benign.
Prompt injection and data extraction: In some architectures, malicious content in inputs can trick AI tools into revealing data from other sessions. Verify your vendors' prompt injection mitigations before processing sensitive personal data.
Model memorization: AI models can memorize and reproduce training data verbatim in some circumstances. If your data was used to train a model before you opted out, fragments could appear in outputs to other users. Training opt-out prevents new exposure but does not retroactively protect already-memorized data.
Fine-tuning with personal data: If you are fine-tuning a model on your proprietary data, the privacy implications are more significant than using a hosted API. The fine-tuned model may memorize personal data from the training set, creating deletion and access challenges. Fine-tuning on personal data requires a DPIA and explicit data governance controls.
For a detailed comparison of which AI API providers offer privacy-protective defaults — no training on your data, EU data residency options, and enterprise DPA terms — see the privacy-first AI API guide.
Privacy Governance Checklist
- AI tool register documents DPA status for every tool handling personal data
- DPA signed with every AI vendor processing personal data (no unsigned free-tier tools with personal data)
- Training opt-out confirmed for all tools handling customer or employee data
- Data transfer mechanisms (SCCs) verified for EU-to-US data flows
- Lawful basis documented for each AI processing activity
- Article 22 assessment completed for any AI making decisions about individuals
- Individual notice implemented for AI-assisted decisions with significant effects
- Human review mechanism operational for affected individuals to request
- DPIA completed before deploying AI that profiles, scores, or monitors individuals
- Privacy policy updated to disclose AI vendor processors
- Employee data AI use cases reviewed with HR/legal for works council obligations (EU)
- Data subject rights process covers AI-processed data (access, deletion, objection)
- Team briefed on personal data handling rules for AI tools
References
- GDPR Article 28 — Data processor obligations and DPA requirements
- GDPR Article 22 — Automated individual decision-making, including profiling
- GDPR Article 35 — Data Protection Impact Assessment
- ICO — UK GDPR Guidance: Artificial Intelligence
- EDPB Guidelines 05/2020 on consent; EDPB Guidelines on automated decision-making
- California Privacy Rights Act (CPRA) — automated decision-making rights
- NIST AI Risk Management Framework — privacy risk considerations
- Related: Privacy-First AI APIs: Which Don't Train on Your Data — vendor-by-vendor comparison of training policies, DPA availability, and EU data residency
- Related: AI Vendor Due Diligence Checklist — full 30-question vendor review including privacy-specific questions (Section 2: data handling)
- Related: HR AI Governance: EU AI Act and EEOC Requirements — Article 22 and EEOC disparate impact for AI hiring tools
- Related: Fintech AI Governance: CFPB, FCRA, and EU AI Act — Article 22 adverse action notices for AI credit decisions
- Related: AI Governance for Small Teams: Complete Guide — full governance framework with privacy as one of five components
This guide covers general principles and is not legal advice. For GDPR or CCPA obligations specific to your organization, consult a qualified legal advisor.
