Most small teams deploying AI in 2026 have three governance gaps in common: no written AI use policy, no vendor DPA on file, and no incident log. These three gaps — not the complexity of regulatory frameworks — are what create enforcement exposure when the FTC, a state attorney general, or an EU regulator comes looking. This benchmark covers ten questions that separate prepared teams from exposed ones, mapped to the specific regulatory obligations that apply to teams of any size in 2026.
At a glance: The nearest hard deadline for US small teams is the Colorado AI Act on June 30, 2026, roughly 60 days out. The broadest ongoing exposure is FTC Section 5, active since 2025 and targeting deceptive AI claims and undisclosed automated decisions. The EU AI Act's high-risk compliance obligations hit August 2, 2026. None of these demands a legal team; all of them demand documentation.
Key Takeaways
- Three gaps account for most small-team enforcement exposure: no AI use policy, no vendor DPAs, no incident log.
- Colorado AI Act (June 30) is the nearest active deadline for teams deploying AI in employment, lending, insurance, or education.
- FTC AI enforcement is running on three simultaneous tracks in 2026 — deceptive claims, automated decisions, and AI-generated fake content.
- EU AI Act August 2026 deadline applies to deployers of third-party high-risk AI, not just developers.
- Healthcare, HR, and fintech teams face three simultaneous regulatory frameworks — each with independent obligations.
- Teams that build governance documentation before a deadline negotiate better than teams scrambling after.
The 10 Benchmark Questions
These ten questions map to the specific obligations most likely to generate enforcement exposure in 2026. Every question your team answers "no" to marks a priority governance gap to close.
1. Do you have a written AI use policy?
A written AI use policy is the foundation of any governance program. It defines which AI tools are approved, which uses are prohibited, what data each tool can access, and where human review is required. Without it, team members cannot make consistent decisions about AI use, and the organization cannot demonstrate governance intent to a regulator.
The FTC's 2026 enforcement pattern shows that teams with documented policies — even simple one-page policies — have substantially more negotiating leverage in investigations than teams with nothing. A policy does not need to be long. It needs to exist and be accessible to everyone on the team.
What it should cover: approved tools by name, prohibited uses (final legal or financial decisions, customer communications without human review), data handling rules (what data each tool can access), and the escalation path for AI failures.
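Some teams also encode the policy in a machine-checkable form so approved tools and prohibited uses can be verified in scripts or CI. A minimal sketch, with hypothetical tool names and data classes (adapt both to your own stack):

```python
# Hypothetical machine-checkable mirror of the one-page policy.
# Tool names and data classes are illustrative, not recommendations.
APPROVED_TOOLS = {
    "claude-api": {"allowed_data": {"public", "internal"}},
    "github-copilot": {"allowed_data": {"source-code"}},
}
PROHIBITED_USES = {
    "final legal decision",
    "final financial decision",
    "customer communication without human review",
}

def check_use(tool: str, data_class: str, use: str) -> list[str]:
    """Return the policy violations for a proposed AI use, if any."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"{tool} is not an approved tool")
    elif data_class not in APPROVED_TOOLS[tool]["allowed_data"]:
        violations.append(f"{tool} may not access {data_class} data")
    if use in PROHIBITED_USES:
        violations.append(f"'{use}' is a prohibited use")
    return violations

print(check_use("claude-api", "customer-pii", "draft reply"))
# -> ['claude-api may not access customer-pii data']
```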
2. Have you signed a DPA with every AI vendor that handles personal data?
Under GDPR, sending personal data to an AI API without a signed Data Processing Agreement (DPA) violates Article 28, regardless of whether the vendor trains on your data. Under CCPA, sending personal information to an AI vendor without a service provider agreement can make the disclosure a "sale of personal information," with the notice and opt-out obligations that follow.
The most common mistake: assuming that accepting a vendor's Terms of Service is sufficient. A ToS is not a DPA. A DPA is a separate contract that specifically addresses sub-processors, data retention periods, deletion on request, and training restrictions.
Major providers — Anthropic (Claude API), OpenAI API, Azure OpenAI, Google Vertex AI, Mistral — all offer DPAs. The gap is not availability; it is the team not knowing they need to sign one. Claude API, Azure OpenAI, and Google Vertex AI do not train on your data by default, but you still need the signed DPA for GDPR Article 28 compliance.
3. Do you have an incident log for AI failures?
An incident log is a record of AI failures, unexpected outputs, and near-misses — date, tool, what happened, what action was taken. It is the single most undervalued governance document a small team can maintain.
Without an incident log, patterns are invisible. A language model that hallucinates a legal citation twice a week looks like a random error. Documented, it looks like a systematic risk requiring a human review step. Regulators — FTC, EU supervisory authorities, state AGs — treat the existence of an incident log as evidence of operational seriousness. Its absence is evidence of governance indifference.
The log does not need to be sophisticated. A shared spreadsheet, reviewed monthly, is sufficient. What matters is that it exists before something serious happens.
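A minimal sketch of such a log, assuming a shared CSV file rather than a spreadsheet product; the column names are illustrative:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("incident_log.csv")
FIELDS = ["date", "tool", "what_happened", "action_taken", "reviewer"]

def log_incident(tool: str, what_happened: str, action_taken: str,
                 reviewer: str = "") -> None:
    """Append one AI failure, unexpected output, or near-miss to the log."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header once, on first use
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "what_happened": what_happened,
            "action_taken": action_taken,
            "reviewer": reviewer,
        })

log_incident("claude-api", "hallucinated case citation in draft memo",
             "added mandatory citation check before send")
```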
4. Has each AI tool in your stack been inventoried with data access documented?
An AI tool inventory is a list of every AI tool your team uses — including tools individual members use independently, not just company-licensed software — with the vendor, the model version, the primary use case, and what data the tool can access.
The inventory matters because governance policies cannot cover tools you do not know exist. Shadow AI — employees using unapproved tools on work data — is the most common source of data governance violations in small teams. Hidden AI features in enterprise software tools create the same gap: Slack, Notion, GitHub Copilot, and Google Workspace all have embedded AI that may process work data by default.
The inventory should be updated quarterly, not just at onboarding.
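For teams that outgrow the spreadsheet, a minimal sketch of one inventory row as a structured record; the field names are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One row of the AI tool inventory."""
    tool: str
    vendor: str
    model_version: str
    use_case: str
    data_access: list[str]   # e.g. ["source code", "customer emails"]
    company_licensed: bool   # False = individually adopted (shadow AI)
    dpa_signed: bool
    last_reviewed: date

inventory = [
    AIToolRecord("GitHub Copilot", "GitHub", "unspecified",
                 "code completion", ["source code"],
                 company_licensed=True, dpa_signed=True,
                 last_reviewed=date(2026, 4, 1)),
]

# Quarterly review: flag anything not looked at in roughly a quarter.
stale = [r.tool for r in inventory
         if (date.today() - r.last_reviewed).days > 92]
```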
5. Have you classified your AI systems for EU AI Act risk?
The EU AI Act requires deployers to classify AI systems they use against the Annex III high-risk categories. Deployers — teams that use third-party AI tools to make decisions — have independent obligations under Article 26, separate from the developer's obligations.
High-risk categories relevant to small teams: employment and HR management AI (resume screening, interview scoring, performance monitoring), education and vocational training AI, credit scoring and financial services AI, and AI used in essential services. If your team uses AI in any of these domains and it affects EU residents, classification and documentation are required before August 2, 2026.
The classification question is binary: does the system fall under Annex III? If yes, you need technical documentation, a human oversight mechanism, and confirmation that the provider has completed its conformity assessment. The EU AI Act compliance guide for small teams covers the full classification checklist.
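A first-pass triage helper can make the binary question concrete. The sketch below only mirrors the categories listed above; Annex III itself is the legal source of truth, and borderline cases need a proper legal read:

```python
# Illustrative triage only -- not a substitute for reading Annex III.
HIGH_RISK_DOMAINS = {
    "employment",          # resume screening, interview scoring, monitoring
    "education",           # admissions, exam scoring, vocational training
    "credit",              # credit scoring, financial services decisions
    "essential-services",  # access to essential public/private services
}

def needs_high_risk_workup(domain: str, affects_eu_residents: bool) -> bool:
    """First pass: does this deployment need the Annex III paperwork?"""
    return affects_eu_residents and domain in HIGH_RISK_DOMAINS

print(needs_high_risk_workup("employment", affects_eu_residents=True))
# -> True: documentation, human oversight, and classification required
```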
6. Do your AI marketing claims have documented evidence?
Every AI capability claim your organization makes publicly — accuracy rates, bias reduction, speed, decision quality — requires documented evidence from your own deployment before publication. Not evidence from your AI vendor. Evidence from your product.
FTC enforcement in 2026 is running a Section 5 deceptive practices track specifically targeting AI capability inflation. "Our AI is 94% accurate" requires internal testing documentation showing 94% accuracy on your dataset, under your conditions. Vendor benchmarks do not transfer. The burden of proof is on the organization making the claim.
Common high-risk claims that frequently lack substantiation: accuracy percentages, "removes bias" or "eliminates bias," "no false positives," and "instant decisions." Each requires evidence before publication. FTC AI enforcement guidance covers the specific claim categories under active scrutiny.
7. Are your customer-facing AI interactions disclosed?
Under EU AI Act Article 50, users must be told when they are interacting with a chatbot unless it is obvious from context. AI-generated content must be marked as AI-generated in a machine-readable format. Both obligations apply from the Act's general application date of August 2, 2026.
Under FTC guidance, a chatbot that denies being AI when directly asked creates Section 5 deceptive practices liability. This applies to customer service chatbots, AI-assisted sales interactions, and automated email responses that pass as human-authored.
The disclosure requirement has two components: in-product disclosure at the start of AI interactions, and machine-readable marking for AI-generated published content. Both must be implemented regardless of the Omnibus deadline negotiations.
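A minimal sketch of both components for a chat product, assuming a JSON message format of your own design; the `ai_generated` field is an internal convention, not a mandated standard (the Act requires machine-readable marking but leaves the exact mechanism to technical standards):

```python
import json

DISCLOSURE = "You are chatting with an AI assistant."

def open_session() -> str:
    # Component 1: in-product disclosure at the start of the interaction.
    return DISCLOSURE

def render_reply(text: str) -> str:
    # Component 2: machine-readable marking on generated content.
    # "ai_generated" is our own field name; published media needs a
    # recognized provenance/watermarking mechanism instead.
    return json.dumps({"role": "assistant", "ai_generated": True,
                       "text": text})

def answer_are_you_ai() -> str:
    # Denying being AI when asked directly creates Section 5 exposure.
    return render_reply("Yes, I am an AI assistant.")
```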
8. Have you tested your HR or credit AI for disparate impact?
Any AI tool that influences decisions in employment, lending, or insurance must be tested for disparate impact across protected classes before deployment. This is not optional — it is a current requirement under EEOC guidance for hiring AI, under ECOA and Regulation B for credit AI, and under EU AI Act for high-risk systems.
Disparate impact testing compares outcomes across demographic groups to identify whether the AI produces systematically different results for protected classes. The analysis does not require a data science team. It requires tracking selection rates by demographic group and investigating any group whose rate falls below four-fifths (80%) of the highest group's rate (the EEOC's "four-fifths rule").
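A minimal sketch of the four-fifths check; group names and counts are illustrative:

```python
def four_fifths_check(selection: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's impact ratio vs. the highest-rate group.

    selection maps group name -> (selected, total applicants).
    Ratios below 0.8 fall under the EEOC four-fifths threshold.
    """
    rates = {g: sel / total for g, (sel, total) in selection.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

ratios = four_fifths_check({
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
})
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(flagged)  # {'group_b': 0.625} -> below 0.8, investigate
```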
Employers and lenders remain liable for discriminatory AI outcomes regardless of vendor claims. "The vendor certified it as unbiased" is not a defense. HR AI governance obligations and fintech AI compliance cover the specific testing and documentation requirements.
9. Is one person on your team named as AI governance owner?
AI governance without a named owner does not get maintained. A quarterly review scheduled on no one's calendar does not happen. An incident log with no assigned reviewer does not get reviewed.
The governance owner does not need to be a compliance professional. They need to have the responsibility, the authority to respond to external inquiries within 72 hours, and a quarterly calendar reminder. For a team of five, this is a 30-minute-per-week function. For a team of 50, it is a 20% role.
The FTC's 2026 enforcement pattern shows that companies with a named internal contact for AI compliance questions fare measurably better in investigations. Diffuse responsibility ("everyone handles AI questions") is operationally equivalent to no responsibility.
10. Are your quarterly review meetings scheduled for the year?
A governance policy that is never reviewed stops being a policy. Regulatory requirements change — new enforcement guidance, new deadlines, new vendor terms — and a policy written in January that is never updated is a liability by June.
Four 60-minute review meetings per year, scheduled in advance, are sufficient for most small teams. Each review covers: what changed in the tool inventory, any incidents logged in the quarter, any vendor policy updates, any new regulatory guidance, and any changes needed to the AI use policy.
The quarterly review is also when vendor DPAs should be checked for expiration or renewal clauses, and when the incident log should be reviewed for patterns.
Where Most Small Teams Fall on the Benchmark
Based on the regulatory gap analysis documented across enforcement actions, compliance guidance, and disclosed violations in Q1–Q2 2026, teams typically fall into three groups:
1–3 items checked: The team has some awareness of AI governance but no operational foundation. An AI use policy may exist in name, but DPAs are unsigned and no incident log exists. This group faces the highest enforcement exposure because governance intent without documentation provides no protection.
4–7 items checked: The team has an operational governance foundation. The tool inventory is maintained, DPAs are signed with major vendors, and an incident log exists. This group needs to focus on the domain-specific obligations that most commonly appear in enforcement actions: EU AI Act classification, disparate impact testing, and FTC claim substantiation.
8–10 items checked: The team has a mature governance posture. The gap at this level is typically specialist coverage (healthcare, HR, or fintech regulatory frameworks) and the freshness of the quarterly review cadence.
The Three Deadlines That Matter in 2026
June 30, 2026 — Colorado AI Act. Applies to any organization deploying AI that makes consequential decisions affecting Colorado residents in employment, lending, insurance, or education — regardless of company location. Required: impact assessment before deployment, disclosure to affected individuals, and appeal rights for adverse AI decisions. Colorado AI Act compliance checklist covers the specific documentation requirements.
August 2, 2026 — EU AI Act high-risk systems. Deployers of Annex III high-risk AI systems must have technical documentation, human oversight mechanisms, and evidence of the provider's conformity assessment in place. The Digital Omnibus may extend this deadline to December 2027, but negotiations are ongoing. EU AI Act compliance guide covers current requirements. Treat the extension as a bonus, not a plan.
Ongoing — FTC Section 5 enforcement. Active now. No deadline. The FTC's AI enforcement unit expanded in late 2025 and is running three simultaneous tracks. There is no grace period and no minimum company size threshold.
Implementation: Four Weeks to a Baseline Governance Program
Week 1 — Inventory and ownership. List every AI tool in use, including individual-use tools. Document vendor, use case, and data access for each. Name one person as AI governance owner with a quarterly calendar reminder.
Week 2 — Policy and incident log. Write a one-page AI use policy: approved tools, prohibited uses, data rules, human review requirements, escalation path. Create an incident log spreadsheet and share it with the team.
Week 3 — Vendor and DPA review. For each vendor that handles personal data, locate or request their DPA, verify training opt-out settings, and confirm sub-processor list. Flag any vendor that cannot provide a DPA within 14 days.
Week 4 — Communication and scheduling. Run a 30-minute team meeting to walk through the AI use policy and incident log. Schedule four quarterly review meetings for the year. Document the team meeting in the governance log as evidence of communication.
The process takes 10–15 hours spread across one month. After that, the quarterly review is 60 minutes. That is the full operational cost of a baseline AI governance program for a small team.
Checklist
- Written AI use policy exists and is accessible to all team members
- DPAs signed with every AI vendor handling personal data
- AI tool inventory documented with data access noted for each tool
- Incident log created and shared with the team
- One person named as AI governance owner with quarterly calendar reminder
- Customer-facing AI interactions disclosed per Article 50 / FTC guidance
- AI marketing claims reviewed against documented evidence
- EU AI Act Annex III classification completed for each AI tool
- HR / credit AI tested for disparate impact if applicable
- Colorado AI Act impact assessment completed if applicable
- Quarterly review meetings scheduled for the full year
References
- EU AI Act compliance guide for small teams
- FTC AI enforcement actions — April 2026
- Colorado AI Act compliance and June 30 deadline
- Privacy-first AI APIs: which don't train on your data
- HR AI governance: EU AI Act and EEOC hiring requirements
- Fintech AI governance: CFPB, FCRA, and automated credit decisions
- Healthcare AI governance: HIPAA, EU AI Act, FDA SaMD
- Hidden AI features and the governance gap they create
- FTC Act Section 5: unfair or deceptive acts and practices
- EU AI Act, Article 26: obligations of deployers of high-risk AI systems
- Colorado SB 205 (AI Act), effective June 30, 2026
