Most SaaS companies carry EU AI Act obligations in two distinct roles simultaneously — and conflating the roles means missing the compliance steps that attach to each. As a developer (provider), you build AI features that your customers deploy. As a deployer, your own team uses AI tools that affect your employees and the people your internal decisions touch. The August 2026 enforcement date applies to both roles.
At a glance: If your product includes AI that customers use for consequential decisions (employment, credit, healthcare), you are a provider with documentation and conformity assessment obligations. If your team uses AI internally for hiring, scoring leads, or managing customer access, you are a deployer with human oversight and individual notice obligations. Many SaaS companies hold both roles at once, and the compliance steps differ for each.
The Developer Role: When Your Product Is the AI System
Under the EU AI Act, a provider (developer) is any entity that develops an AI system, or has one developed, and places it on the market under its own name or trademark. For SaaS companies, this applies when:
- Your product includes AI features that analyze user data and produce recommendations or decisions
- Your customers use those AI features to make consequential decisions about natural persons (their employees, their customers, their loan applicants)
- The AI feature falls into a high-risk category under Annex III
Common SaaS AI features that create high-risk provider obligations:
| SaaS Category | AI Feature | EU AI Act Annex III Category |
|---|---|---|
| HR tech | AI resume screening, candidate scoring | Employment (Section 4) |
| Fintech / lending | AI credit risk scoring, loan decisioning | Essential services — creditworthiness (Section 5) |
| Insurance tech | AI risk assessment, pricing models | Essential services — insurance pricing (Section 5) |
| Healthcare SaaS | Clinical decision support, triage tools | Emergency triage (Section 5); regulated medical devices fall under Annex I instead |
| Education tech | AI assessment scoring, admissions tools | Education (Section 3) |
| Housing / property | AI rental eligibility, tenant screening | Essential services (Section 5) |
What providers must produce:
- Technical documentation — description of the AI system, its intended purpose, training data characteristics, performance metrics, and known limitations. This is the document your customers need to satisfy their own deployer obligations.
- EU Declaration of Conformity — a formal statement that the high-risk AI system meets EU AI Act requirements. Your enterprise customers will request this as part of their own AI vendor due diligence. Without it, they cannot legally deploy your AI for high-risk purposes after August 2026.
- EU AI database registration — high-risk AI systems must be registered in the EU's public database before being placed on the EU market.
- Instructions for use — documentation telling deployers (your customers) how to implement appropriate human oversight, what populations the system was tested on, and which use cases the system is not suited for.
- Post-market monitoring — ongoing collection of performance data from real-world deployment conditions. Your enterprise customers are part of your post-market monitoring supply chain: their deployment data feeds your post-market obligations (a minimal logging sketch follows this list).
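To make post-market monitoring concrete, here is a minimal Python sketch of the kind of event record a provider might collect from customer deployments. The class and field names are illustrative assumptions, not terms defined by the Act:

```python
# Sketch of a post-market monitoring record a provider might collect from
# deployments. All names here are illustrative, not prescribed by the AI Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MonitoringEvent:
    system_id: str            # which high-risk AI feature produced the output
    system_version: str       # model/version in production at the time
    deployer_id: str          # which customer deployment reported the data
    outcome: str              # e.g. "prediction_served", "human_override", "serious_incident"
    details: dict = field(default_factory=dict)
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A human override logged by a customer feeds back into provider monitoring:
event = MonitoringEvent(
    system_id="resume-screener",
    system_version="2.4.1",
    deployer_id="customer-1042",
    outcome="human_override",
    details={"reason": "reviewer disagreed with AI rejection"},
)
```

Serious-incident events in a stream like this are also the trigger for the incident cooperation obligations discussed later in this guide.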
What providers do NOT need to do for each customer:
Providers are not responsible for implementing human oversight at the customer level — that is the deployer's obligation. You are responsible for making it possible: the AI system must be designed so that a human can intervene, review, and override. If your product makes human oversight technically impossible, you have a design compliance problem.
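As an illustration of designing oversight capability in rather than bolting policy on afterward, here is a minimal sketch of a decision record where human override is a first-class part of the data model. All names are hypothetical:

```python
# Sketch: an AI decision record designed so a human can review and override.
# Names are illustrative; the Act requires the capability, not this exact shape.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDecision:
    subject_id: str              # the person the decision is about
    ai_recommendation: str       # e.g. "reject", "advance"
    ai_rationale: str            # inputs and reasons surfaced to the reviewer
    human_reviewer: Optional[str] = None
    human_decision: Optional[str] = None   # overrides ai_recommendation when set

    def final_decision(self) -> str:
        # The human decision, when present, always wins: the override path
        # exists in the data model, not just in a policy document.
        return self.human_decision or self.ai_recommendation

decision = AIDecision("applicant-77", "reject", "low skills-match score")
decision.human_reviewer = "reviewer@example.com"
decision.human_decision = "advance"
assert decision.final_decision() == "advance"
```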
The Deployer Role: When Your Team Uses AI
Your company also has deployer obligations for every AI tool your own team uses internally. This is separate from what you build — it covers what you use.
Common internal AI uses that create deployer obligations:
- Hiring: AI resume screening, AI video interview analysis, AI candidate scoring → Annex III Section 4 (Employment)
- Sales intelligence: AI lead scoring that determines who gets sales resources → may fall under Section 5 (access to essential services) if it affects credit decisions or access to essential services
- Customer success: AI churn prediction used to determine which customers get support resources → if it materially affects customer access to services, may be in scope
- Internal HR: AI performance review tools, AI-assisted promotion decisions → Annex III Section 4 (Employment)
What deployers must do for high-risk AI tools:
- Obtain the provider's EU Declaration of Conformity before deployment
- Implement human oversight — affected individuals must be able to request human review; that human must have access to the relevant data and reasoning
- Provide individual notice — tell affected individuals that AI was used and why (see the record sketch after this list)
- Document deployment context — your use of the AI may differ from the provider's tested context; document any deployment-specific risks
- Monitor for serious incidents — report to the provider when the AI produces a serious unexpected outcome
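A minimal sketch of a per-decision record a deployer could keep to evidence these steps. The field names are assumptions for illustration; the Act prescribes the obligations, not this data shape:

```python
# Sketch: per-decision deployer record covering conformity, notice, oversight,
# and incident reporting. Field names are hypothetical, not mandated by the Act.
from dataclasses import dataclass

@dataclass
class DeployerDecisionRecord:
    tool_name: str                  # the vendor AI tool used
    conformity_doc_on_file: bool    # provider's Declaration of Conformity obtained
    subject_notified: bool          # individual told that AI was used, and why
    human_review_offered: bool      # the subject can request human review
    reviewer_had_context: bool      # reviewer saw the inputs and AI reasoning
    serious_incident: bool = False  # if True, report back to the provider

record = DeployerDecisionRecord(
    tool_name="vendor-resume-screener",
    conformity_doc_on_file=True,
    subject_notified=True,
    human_review_offered=True,
    reviewer_had_context=True,
)
# Any False among the first five flags a compliance gap for that decision.
assert all([record.conformity_doc_on_file, record.subject_notified,
            record.human_review_offered, record.reviewer_had_context])
```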
The Dual Role: When Both Apply Simultaneously
A mid-size HR tech SaaS company illustrates the dual role clearly:
- As a provider: The company's AI resume screening feature is used by enterprise HR teams to make hiring decisions about hundreds of thousands of applicants. The company must produce conformity documentation, maintain technical documentation, and register the system in the EU AI database.
- As a deployer: The company uses AI tools internally to hire its own engineers and customer success managers. It must obtain conformity documentation from those AI vendors, implement human review for its own applicants, and provide individual notice when AI was involved in hiring decisions.
Both sets of obligations run simultaneously. The compliance team that handles provider documentation for the product is not automatically handling the deployer obligations for internal HR AI — those require separate attention.
What Changes When You Substantially Modify an AI System
EU AI Act Article 25 creates an important rule for SaaS companies that allow customer customization: if a deployer substantially modifies a high-risk AI system, they become a provider for the modified version.
"Substantial modification" includes:
- Retraining or fine-tuning the AI on new data that changes its behavior
- Changing the intended purpose in a way that creates new risks
- Integrating the AI into a workflow that changes who is affected or how decisions are made
Practical implications:
If your SaaS product allows enterprise customers to train the AI on their own data (fine-tuning, custom models), those customers may become providers for their modified version — with all the documentation and registration obligations that entails.
Your product terms of service and documentation should:
- Clearly define what constitutes substantial modification
- State that substantially modified versions create provider obligations for the customer
- Provide documentation that customers can use as a starting point for their own conformity assessment
If your product does not allow substantial modification (it is a fixed model with only configuration options), document this clearly — it keeps your customers in the deployer role only.
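One way a product team might operationalize this boundary in code, as a rough sketch: the category sets below are illustrative assumptions, and the actual classification of any customization should come from legal review of Article 25, not a lookup table:

```python
# Sketch: flag customer customizations that may cross into "substantial
# modification" under Article 25. Categories are hypothetical illustrations;
# the legal boundary needs counsel review, not a lookup table.
CONFIGURATION_ONLY = {"threshold_change", "ui_labels", "field_mapping"}
LIKELY_SUBSTANTIAL = {"fine_tune_on_customer_data", "retrain", "repurpose_output"}

def customization_role(change_type: str) -> str:
    """Return which AI Act role the customer likely holds after this change."""
    if change_type in LIKELY_SUBSTANTIAL:
        return "provider"   # customer assumes provider obligations for the modified system
    if change_type in CONFIGURATION_ONLY:
        return "deployer"   # customer remains a deployer of your system
    return "needs_legal_review"

assert customization_role("threshold_change") == "deployer"
assert customization_role("fine_tune_on_customer_data") == "provider"
```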
The Conformity Assessment: What SaaS Developers Must Produce
For most Annex III high-risk AI systems (the employment, financial services, healthcare, and education categories that affect most SaaS companies), the EU AI Act allows self-assessment rather than requiring an independent third-party audit.
Self-assessment for a SaaS provider means producing and retaining:
Technical file contents:
- System description and version control
- Intended purpose and geographic/demographic scope
- Training and validation data description (types, sources, preprocessing)
- Performance metrics: accuracy, precision/recall, F1, AUC by demographic subgroup (see the sketch after this list)
- Known limitations and failure modes
- Bias testing methodology and results
- Risk management process documentation
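For the subgroup performance metrics, a short sketch of one way to compute them with pandas and scikit-learn. Column and group names are assumptions, binary 0/1 labels are assumed, and AUC is omitted since it needs score outputs rather than predicted labels:

```python
# Sketch: computing performance metrics by demographic subgroup for the
# technical file. Column names and subgroups are illustrative.
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def metrics_by_subgroup(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """df needs binary y_true and y_pred columns plus the subgroup column."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "accuracy": accuracy_score(sub.y_true, sub.y_pred),
            "precision": precision_score(sub.y_true, sub.y_pred, zero_division=0),
            "recall": recall_score(sub.y_true, sub.y_pred, zero_division=0),
            "f1": f1_score(sub.y_true, sub.y_pred, zero_division=0),
        })
    return pd.DataFrame(rows)

# Large gaps between subgroup rows are exactly the disparities that belong in
# the bias-testing results and in the disclosures to deployer customers.
```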
Declaration of Conformity: A signed document stating that the named AI system meets EU AI Act requirements, listing the relevant Annex III categories, and identifying who is responsible for the declaration.
This documentation does not need to be public — it must be available to national competent authorities upon request and to your enterprise customers who require it for their own deployer obligations.
Provider Obligations to Your Deployer Customers
When a SaaS company sells AI-powered software to enterprise customers who use it for consequential decisions, the SaaS company has affirmative obligations to those deployers:
- Provide the information they need to implement human oversight — they cannot implement effective human oversight of your AI without understanding how it works, what its limitations are, and what failure modes to watch for.
- Disclose known biases — if your AI performs significantly differently across demographic groups (accuracy gaps, false positive rate disparities), your enterprise deployers need this information to assess whether deployment is appropriate for their specific population.
- Notify of significant changes — if you update the AI system in a way that materially changes its behavior, notify deployers so they can reassess their deployment.
- Cooperate with incident reporting — if a deployer reports a serious incident, cooperate with their investigation. This is not optional: the deployer's post-market monitoring obligations include working with the provider.
In practice, this means your enterprise contracts and product terms should include:
- Explicit AI Act compliance representations
- Change notification obligations
- Incident reporting cooperation clauses
- Data access provisions for post-market monitoring
EU AI Act SaaS Compliance Checklist
As a Provider (Developer)
- Identify all AI features in your product that fall into Annex III high-risk categories
- Complete conformity assessment for each high-risk AI feature
- Produce technical documentation file (see contents above)
- Sign EU Declaration of Conformity for each high-risk AI feature
- Register high-risk systems in EU AI database before EU market deployment
- Produce "instructions for use" documentation for deployer customers
- Implement bias testing across relevant demographic subgroups; document results
- Design human oversight capability into the AI system (not just as a policy)
- Establish post-market monitoring process
- Update enterprise contracts to include AI Act compliance provisions
As a Deployer (Internal AI tools)
- Inventory all AI tools your team uses internally (HR, sales, finance, operations; see the sketch after this checklist)
- Classify each against Annex III high-risk categories
- Obtain EU Declaration of Conformity from each high-risk AI vendor
- Implement human oversight for each high-risk deployment
- Add individual notice to workflows affecting employees or customers
- Document deployment-specific risk assessment
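A minimal sketch of what that inventory might look like in code, assuming hypothetical tool names and a simplified category map that is no substitute for a real legal classification:

```python
# Sketch: a minimal internal AI tool inventory with Annex III classification.
# Entries and the category map are illustrative, not a legal determination.
from dataclasses import dataclass

ANNEX_III = {
    "hiring": "Section 4 — Employment",
    "education_scoring": "Section 3 — Education",
    "credit_or_essential_services": "Section 5 — Essential services",
}

@dataclass
class InternalAITool:
    name: str
    owner_team: str
    use_case: str          # key into ANNEX_III, or an out-of-scope use

    @property
    def high_risk(self) -> bool:
        return self.use_case in ANNEX_III

inventory = [
    InternalAITool("resume-screener-saas", "HR", "hiring"),
    InternalAITool("meeting-summarizer", "Ops", "note_taking"),
]
for tool in inventory:
    label = ANNEX_III.get(tool.use_case, "not high-risk under Annex III")
    print(f"{tool.name}: {label}")
```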
Clarify Substantial Modification Rules
- Identify whether your product allows customers to train or substantially modify the AI
- Document what constitutes substantial modification in your product terms
- Update product documentation to clarify when customers assume provider obligations
References
- EU AI Act — Article 3: Definitions (provider, deployer, provider vs deployer distinction)
- EU AI Act — Article 25: Responsibilities along the AI value chain (substantial modification)
- EU AI Act — Article 26: Obligations of deployers of high-risk AI systems
- EU AI Act — Annex III: High-risk AI system categories
- European AI Office — AI Act Implementation FAQ (2026)
- Related: EU AI Act Compliance for Small Teams: Complete Guide — the full framework covering all Annex III categories, conformity assessment requirements, and registration
- Related: AI Vendor Due Diligence Checklist — the 30 questions to ask your AI vendors (and what your enterprise customers will ask you)
- Related: HR AI Governance: EU AI Act and EEOC Requirements — Annex III Section 4 employment obligations in detail
- Related: AI Governance for Small Teams: Complete Guide — master governance framework covering both provider and deployer obligations
