The EU AI Act's GPAI chapter does not impose compliance obligations on small teams that use ChatGPT, Claude, or Gemini. Those rules target OpenAI, Anthropic, and Google. But using GPAI models does create obligations in two situations: when you build a product with them that serves as a high-risk AI system, and when you use them internally for consequential decisions about natural persons. Understanding which category you are in determines your actual compliance burden.
At a glance: The EU AI Act GPAI chapter (Articles 51–56) applies to foundation model providers — OpenAI, Anthropic, Google, Meta — not to companies that use those models. Your obligations as a user depend on what you do with the model: deployer obligations (human oversight, individual notice) apply when you use AI for consequential employment, credit, or healthcare decisions; provider obligations (conformity assessment, registration) apply when you build and ship a product that wraps a GPAI model for high-risk use cases.
What the GPAI Chapter Actually Says
EU AI Act Articles 51–56 establish two categories of GPAI model obligations:
All GPAI model providers must:
- Maintain up-to-date technical documentation on the model (training and testing process, capabilities, known limitations) and make it available to the AI Office and to downstream providers who build on the model
- Publish a sufficiently detailed summary of the content used to train the model
- Implement policies to comply with EU copyright law (including Article 4 opt-outs for text and data mining)
- Cooperate with the EU AI Office upon request
Providers of systemic risk GPAI models (>10²⁵ FLOPs training compute) additionally must:
- Conduct adversarial testing and red-teaming before and after release
- Report serious incidents to the EU AI Office within specified timeframes
- Implement cybersecurity safeguards proportionate to systemic risk
- Report energy consumption during training and deployment
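The 10²⁵ FLOPs threshold refers to cumulative training compute, not parameter count. As a rough sanity check, a commonly used approximation is about 6 × parameters × training tokens. The sketch below applies that approximation to illustrative numbers (a hypothetical 70B-parameter model and a 15-trillion-token run), not to any provider's official figures.

```python
# A minimal sketch, assuming the common ~6 * parameters * training-tokens
# approximation for training compute. All model figures below are illustrative
# assumptions, not official numbers from any provider.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Order-of-magnitude estimate of cumulative training compute in FLOPs."""
    return 6 * parameters * training_tokens

# Hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = estimate_training_flops(parameters=70e9, training_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
    print("Above 10^25 FLOPs: presumed systemic-risk GPAI model")
else:
    print("Below the 10^25 FLOPs presumption threshold")
```

This arithmetic is only a first-pass estimate: the presumption is about total compute, and the Commission can also designate a model as systemic risk on other grounds.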
Who these rules target: OpenAI (GPT-4, GPT-4o), Anthropic (Claude 3 family), Google (Gemini), Meta (Llama 3), Mistral, Cohere, and other foundation model developers. Not the teams that use their models through APIs or subscriptions.
What You Actually Need to Do: The Three Scenarios
Scenario 1: You use ChatGPT, Claude, or Gemini internally for productivity
Use cases: Drafting documents, coding assistance, summarizing research, writing emails, building internal knowledge bases.
Your EU AI Act obligations: None under the GPAI chapter. These uses do not fall under Annex III high-risk categories. You are a deployer of limited-risk AI.
What you should still do:
- Check that your vendor agreement covers EU data processing (DPA signed)
- If prompts contain personal data, confirm processing complies with GDPR (Article 6 lawful basis)
- If employees believe AI output has significant effects on them (performance reviews, task assignments), ensure you have a GDPR Article 22 policy
Bottom line: Standard productivity use of GPAI tools via subscription or API is not a compliance event under the EU AI Act GPAI chapter.
Scenario 2: You use a GPAI model to make or assist consequential decisions
Use cases: AI-assisted resume screening, AI-generated credit risk summaries, GPAI model used to generate medical recommendations, AI used to assess employee performance.
Your EU AI Act obligations: Deployer obligations under Article 26 for high-risk AI systems.
Even when you use a third-party GPAI model (rather than your own model), if your workflow uses that model for high-risk decisions (Annex III categories), you are a deployer of a high-risk AI system. The fact that the underlying model is general-purpose does not change the high-risk classification — the classification follows the use case.
What this means in practice:
| If you use a GPAI model for… | Your obligation |
|---|---|
| Writing job descriptions | None (not a decision about natural persons) |
| Scoring or ranking job applicants | Deployer of high-risk AI (Annex III, Section 4) |
| Generating loan application summaries | Deployer of high-risk AI (Annex III, Section 5) |
| Suggesting clinical treatment options | Deployer of high-risk AI (healthcare; typically high-risk via the Annex I medical-device route rather than Annex III) |
| Generating performance review summaries | Deployer of high-risk AI (Annex III, Section 4) |
| Answering customer support questions | Generally not high-risk unless it determines access to services |
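For teams that want to encode this triage in their internal tooling, here is a minimal sketch that mirrors the example rows above. The mapping, wording, and function name are illustrative assumptions; the actual classification against Annex III (and Annex I for medical devices) needs legal review of the specific workflow.

```python
# Illustrative helper encoding the example rows from the table above.
# The mapping is an assumption for this sketch, not a regulatory artifact.

ANNEX_III_EXAMPLES = {
    "scoring or ranking job applicants": "Annex III, Section 4 (employment)",
    "generating performance review summaries": "Annex III, Section 4 (employment)",
    "generating loan application summaries": "Annex III, Section 5 (credit)",
    # Clinical decision support is usually high-risk via the medical-device
    # route (Annex I) rather than Annex III, so it is not mapped here.
}

def classify_use_case(use_case: str) -> str:
    category = ANNEX_III_EXAMPLES.get(use_case.strip().lower())
    if category:
        return f"Deployer of high-risk AI ({category})"
    return "Not in the example mapping; check Annex III (and Annex I) with counsel"

print(classify_use_case("Scoring or ranking job applicants"))
# -> Deployer of high-risk AI (Annex III, Section 4 (employment))
```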
Deployer obligations for high-risk use:
- Obtain the GPAI provider's documentation confirming the model is suitable for your use case
- Implement human oversight — the human must review and can override the AI output before the decision is final
- Provide individual notice to affected persons that AI was used and why
- Document your deployment-specific risk assessment
- Implement a process for individuals to request human review of AI-assisted decisions
The human oversight requirement is not satisfied by having a human somewhere in the process. The human must have meaningful access to the AI's reasoning, the underlying data, and a genuine ability to override. A human who routinely approves AI recommendations without review is not implementing human oversight — it is implementing automation bias.
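One practical way to make oversight auditable is to record each AI-assisted decision together with the human review of it. The sketch below is a minimal illustration under assumed field names (nothing in it is prescribed by the Act): a decision is not final until a named reviewer has recorded their own decision and rationale, and overrides are visible in the record.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class HighRiskDecisionRecord:
    """One AI-assisted decision about a natural person, plus its human review."""
    subject_id: str                        # the person affected by the decision
    ai_recommendation: str                 # what the model suggested
    ai_rationale: str                      # the model output the reviewer actually saw
    reviewer: str | None = None
    reviewer_decision: str | None = None   # may differ from the AI recommendation
    reviewer_rationale: str | None = None
    reviewed_at: datetime | None = None

    def record_review(self, reviewer: str, decision: str, rationale: str) -> None:
        """A named human records their own decision and reasoning."""
        self.reviewer = reviewer
        self.reviewer_decision = decision
        self.reviewer_rationale = rationale
        self.reviewed_at = datetime.now(timezone.utc)

    @property
    def is_final(self) -> bool:
        # Nothing is final until a human has recorded a decision and a rationale.
        return self.reviewer_decision is not None and bool(self.reviewer_rationale)

    @property
    def overridden(self) -> bool:
        return self.is_final and self.reviewer_decision != self.ai_recommendation
```

A record like this also surfaces automation bias: if the override rate is effectively zero across thousands of decisions, that is evidence the review step is rubber-stamping rather than oversight.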
Scenario 3: You build a product that wraps a GPAI model for customer use
Use cases: SaaS company builds an AI resume screener on top of GPT-4 API; HR tech startup wraps Claude for candidate assessment; lending platform uses Gemini for credit risk scoring that customers deploy.
Your EU AI Act obligations: Provider (developer) obligations under Article 16 for high-risk AI systems. This is the heaviest category.
When you take a GPAI model and build a product around it that customers use for high-risk decisions, you are the provider of the high-risk AI system — not OpenAI or Anthropic. The foundation model provider is not responsible for how you deploy their model.
Provider obligations:
- Conduct a conformity assessment before placing the system on the EU market
- Produce technical documentation (training data description, performance metrics, known limitations, bias testing results)
- Draw up and sign an EU Declaration of Conformity and affix the CE marking
- Register the system in the EU AI database
- Produce instructions for use — what your customers need to implement human oversight
- Implement post-market monitoring in real-world deployment
- Notify customers of significant model changes
This applies even though your underlying model is a third-party GPAI model. You cannot delegate your provider obligations to OpenAI or Anthropic — you must conduct your own conformity assessment for your specific use case, population, and deployment context.
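A small piece of plumbing helps here: pin the exact model version your product calls and log it on every request, so you can detect upstream changes that may require revisiting your conformity assessment and notifying customers. The sketch below uses assumed file and field names and no specific vendor SDK; it hashes prompts and outputs rather than storing them, to keep personal data out of the log.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("gpai_call_log.jsonl")           # illustrative location
PINNED_MODEL = "example-model-2025-01-01"         # hypothetical pinned version string

def log_model_call(model_reported_by_api: str, prompt: str, output: str) -> None:
    """Append one API call to the audit log and flag model-version drift."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "pinned_model": PINNED_MODEL,
        "model_reported_by_api": model_reported_by_api,
        # Hash instead of storing raw text, to keep personal data out of the log.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "version_drift": model_reported_by_api != PINNED_MODEL,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```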
What to Ask Your GPAI Model Provider
If you are building a product on top of a GPAI API, ask these questions before deploying for high-risk use cases:
Technical documentation:
- Can you provide documentation of the model's training data characteristics (sources, coverage, known gaps)?
- What are the model's known limitations and failure modes for my specific use case?
- Do you have demographic performance data (accuracy by age, gender, or ethnicity) for relevant tasks?
Compliance documentation:
- Do you provide documentation that supports my EU AI Act conformity assessment?
- Is there an EU Declaration of Conformity for the model?
- How will you notify me of significant changes that affect my conformity assessment?
Data processing:
- Is our API data used to train your models? (GDPR implications)
- Where is our data processed? (EU data residency for regulated sectors)
- Do you have a signed DPA available?
Most major GPAI providers (OpenAI, Anthropic, Google) have published usage policies and data processing addendums. The conformity assessment question is more complex — they provide model documentation, but you remain responsible for assessing your specific deployment.
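If it helps to track the answers, the questions above can be kept as a simple internal template per provider. The structure and field names below are illustrative assumptions, not a regulatory schema.

```python
# Hypothetical due-diligence template mirroring the questions above.

provider_due_diligence = {
    "technical_documentation": {
        "training_data_characteristics": None,       # link or document reference
        "known_limitations_for_our_use_case": None,
        "demographic_performance_data": None,
    },
    "compliance_documentation": {
        "supports_conformity_assessment": None,
        "declaration_of_conformity_or_equivalent": None,
        "change_notification_process": None,
    },
    "data_processing": {
        "api_data_used_for_training": None,
        "data_residency": None,
        "dpa_signed": None,
    },
}

open_questions = [
    f"{section}: {question}"
    for section, answers in provider_due_diligence.items()
    for question, answer in answers.items()
    if answer is None
]
print("Unanswered before high-risk deployment:", open_questions)
```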
The Open-Source GPAI Exemption
EU AI Act Article 53(2) provides an exemption for open-source GPAI model releases: providers that release GPAI model weights publicly under open licenses are exempt from most GPAI chapter obligations, provided the model does not pose systemic risk.
What this means for small teams:
If you fine-tune and internally deploy an open-source model (Llama 3, Mistral, Falcon), the GPAI provider exemption applies to Meta or Mistral — not to you. Your deployer or provider obligations still apply based on what you do with the model.
If you release a fine-tuned open-source model publicly (put the weights on HuggingFace), you may take on GPAI provider obligations for your release if it does not qualify for the exemption (e.g., it poses systemic risk, or the license and access conditions do not meet the open-source criteria).
For most small teams, open-source GPAI exemptions are relevant as background context — the reason some foundation models are freely available — not as a compliance pathway for your own obligations.
Practical EU AI Act GPAI Checklist
If you use GPAI tools internally for productivity:
- Confirm vendor DPA covers your EU data processing
- Check that prompts containing personal data have a GDPR Article 6 lawful basis
- Add GPAI tools to your AI tool inventory
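An inventory entry does not need to be elaborate. A minimal sketch, with assumed fields, might look like this:

```python
# A minimal AI tool inventory entry; field names are illustrative assumptions.

inventory_entry = {
    "tool": "Example GPAI assistant (hypothetical)",
    "vendor": "Example vendor",
    "use": "internal drafting, coding assistance, summarization",
    "consequential_decisions_about_people": False,  # keeps it outside Annex III
    "dpa_signed": True,
    "gdpr_lawful_basis_for_prompts": "legitimate interests (documented)",
    "owner": "it-operations",
    "last_reviewed": "2025-06-01",
}
```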
If you use GPAI models for consequential decisions:
- Classify the use case against Annex III (employment, credit, healthcare, education)
- Implement human oversight before decisions are finalized
- Add individual notice to affected persons
- Request the model provider's documentation of limitations for your use case
- Document your deployment-specific risk assessment
If you build products on GPAI APIs for high-risk customer use cases:
- Conduct your own conformity assessment (you cannot rely on the API provider's)
- Produce technical documentation for your specific deployment context
- Draw up and sign an EU Declaration of Conformity
- Register in the EU AI database before EU market deployment
- Produce instructions for use for your enterprise customers
- Update your contracts to include AI Act compliance provisions and change notification
References
- EU AI Act — Articles 51–56: GPAI model obligations
- EU AI Act — Article 3(63): Definition of general-purpose AI model
- EU AI Act — Article 3(65): Definition of general-purpose AI model with systemic risk
- EU AI Act — Article 53(2): Open-source exemption for GPAI models
- EU AI Office — GPAI Code of Practice (2026 draft)
- Related: EU AI Act Compliance for Small Teams: Complete Guide — full framework covering all Annex III categories and obligations
- Related: EU AI Act for SaaS Companies: Developer vs Deployer — when building on GPAI APIs creates provider obligations
- Related: Privacy-First AI APIs — choosing model providers that don't train on your data
