AI governance for small teams means knowing which AI tools are in use, what data they process, what can go wrong, and who is responsible when it does. For a team of 10–50 people without a dedicated compliance function, governance is five components: a written policy, a risk assessment process, vendor due diligence, monitoring, and sector-specific compliance where your industry requires it.
At a glance: Start with a use-case inventory and an AI acceptable use policy — those two documents cover most governance obligations for small teams. Add vendor due diligence before any new AI deployment. For EU AI Act high-risk categories (hiring, healthcare, credit scoring), conformity documentation was required by August 2026. Shadow AI is the highest-probability risk: employees using unapproved tools with company data, invisible to the governance process.
This guide covers every component of AI governance for small teams with links to the deep-dive guide for each area. Read it end-to-end to understand the full picture, or jump to the section most relevant to your immediate situation.
1. What AI Governance Actually Requires
AI governance is not a policy document sitting in a shared drive. It is an operational process with four ongoing activities:
- Inventory — knowing which AI tools are in use, by whom, and with what data
- Policy — documented rules for approved use, prohibited use, and data handling
- Risk management — categorizing each AI tool by risk level and applying controls proportionate to that risk
- Oversight — human review mechanisms, monitoring, and a cadence for reviewing AI decisions
For small teams, the practical version of this is: one designated AI Governance Lead (does not need to be a full-time role), a documented list of approved tools, a vendor review checklist for new tools, and a monthly 30-minute review meeting.
The full framework — including how to structure each component for a team of 5 to 50 people — is in the AI governance framework guide. That guide walks through the five-component structure with implementation order and time estimates for each step.
2. AI Acceptable Use Policy — Your First Document
The AI acceptable use policy is the foundation of everything else. It answers four questions:
- Which AI tools are approved for work use?
- What data can and cannot be put into AI tools?
- What decisions can AI assist with, and which require human judgment?
- What happens when an AI tool is misused or causes a problem?
What a policy must cover:
| Policy Element | Why It Matters |
|---|---|
| Approved tools list | Without this, employees use whatever they find — creating shadow AI and data exposure |
| Data classification rules | Prevents customer PII, health data, and trade secrets from entering AI tools |
| Output review requirements | Specifies when AI output requires human verification before use |
| Prohibited uses | AI-generated legal documents, autonomous financial decisions, etc. |
| Incident reporting | What employees do if they suspect an AI error or data exposure |
A starting policy does not need to be 20 pages. A one-page document that covers the above five elements is more effective than a detailed policy that no one reads.
The AI policy starter kit provides a template and the rollout sequence that works without a compliance team. For a copy-paste acceptable use policy, the AI acceptable use policy template covers all required elements including data handling, output review, and prohibited use cases.
3. AI Risk Assessment — Matching Controls to Risk
Not all AI tools carry the same compliance obligation. A writing assistant used to draft internal emails is lower risk than an AI system that screens job applicants or processes medical records. Risk assessment maps each AI deployment to the right level of controls.
The four risk levels (EU AI Act / NIST AI RMF framework):
| Risk Level | Examples | Required Controls |
|---|---|---|
| Minimal risk | Spam filters, grammar tools, playlist recommendations | No specific requirements |
| Limited risk | Chatbots, content generators, summarizers | Transparency to users that AI is involved |
| High risk | Hiring AI, credit scoring, clinical decision support, biometric ID | Conformity assessment, bias testing, human oversight, registration |
| Unacceptable risk | Social scoring by public authorities, real-time biometric surveillance | Prohibited |
For each AI tool your team uses, the risk assessment asks: what decisions does this tool influence, who is affected, and what happens if it produces an incorrect or biased output?
The practical process for small teams:
- List every AI tool in use (including embedded AI in SaaS products)
- For each tool, identify: what data it processes, what decisions it influences, and who the affected population is
- Score by risk level using the matrix above
- Assign required controls based on risk level
The AI risk assessment guide includes a scoring matrix, a risk register template, and worked examples for common small-team AI deployments (AI writing tools, AI code assistants, AI customer support, AI analytics).
4. AI Vendor Due Diligence — Before You Sign
Every AI tool you deploy is a vendor relationship. Before deploying any AI tool that touches company or customer data, a vendor review should cover:
The three critical questions:
-
Does the vendor train on your data? Many AI tools use your prompts and data to improve their models by default. For any proprietary or customer data, confirm the vendor has training opt-out enabled for your account — or use a tool that does not train on your data at all.
-
Does the vendor have a Data Processing Agreement (DPA)? Under GDPR, any third party processing personal data on your behalf must sign a DPA. For US teams handling regulated data (healthcare, financial), equivalent agreements apply. Enterprise-tier tools typically offer DPAs; consumer-tier tools typically do not.
-
What happens in a security incident? Your vendor is part of your supply chain. If their AI infrastructure is breached, your data may be exposed. Confirm their breach notification timeline and your response obligations.
For healthcare teams, a fourth critical question applies: does the vendor offer a HIPAA Business Associate Agreement (BAA)? Using any AI tool that processes protected health information (PHI) without a signed BAA is a HIPAA violation.
The AI vendor due diligence checklist provides 30 questions across security, privacy, compliance, and contractual obligations — structured to run in under 30 minutes for standard vendor reviews.
For a comparison of which major AI API providers train on your data and which offer DPAs and BAAs, the privacy-first AI API guide covers Anthropic, OpenAI, Google, Cohere, Mistral, and others with training data policies and enterprise contract terms side-by-side.
5. EU AI Act — The 2026 Compliance Deadline
The EU AI Act is the most significant AI regulation affecting small teams globally. Its high-risk provisions for employment AI, healthcare AI, financial services AI, and education AI became enforceable in August 2026. Teams deploying AI in these categories for EU residents are in scope regardless of where the company is based.
What triggers high-risk classification (Annex III):
- AI used in hiring, promotion, termination, or task assignment decisions
- AI used to evaluate creditworthiness or establish credit scores
- AI used in clinical decision support or medical diagnosis
- AI used in access to essential services (housing, insurance, energy)
- AI used in education access and assessment
- Biometric identification systems
- Critical infrastructure AI
What high-risk classification requires:
- Conformity assessment — documentation that the system meets EU AI Act requirements before deployment
- Bias and accuracy testing — performance testing across demographic groups, documented
- Human oversight mechanism — affected individuals can request human review; that review must be meaningful
- Individual notice — individuals informed that AI was used and why
- EU AI database registration — high-risk systems registered in the public EU AI database before deployment
- Post-market monitoring — ongoing performance tracking after deployment
For US teams, the Colorado AI Act adds a state-level layer for the same high-risk categories with a June 30, 2026 initial compliance deadline. The Colorado AI Act compliance guide covers the required transparency statement, individual notice templates, and impact assessment process.
For a comparison of EU AI Act vs. NIST AI RMF requirements and how to satisfy both with a single documentation set, see the EU AI Act vs NIST AI RMF guide. For the complete EU AI Act compliance guide covering all risk tiers, prohibited AI, GPAI obligations, sector-specific requirements, and the full compliance checklist, see the EU AI Act compliance complete guide.
6. Sector-Specific Compliance
Three sectors have AI-specific compliance requirements that go beyond general governance obligations. If your team operates in healthcare, HR/hiring, or financial services, these apply on top of the baseline framework.
Healthcare: HIPAA + FDA SaMD + EU AI Act
Healthcare AI must satisfy three separate regulatory frameworks simultaneously:
-
HIPAA applies whenever an AI tool processes Protected Health Information (PHI). Every AI vendor that touches PHI must sign a HIPAA BAA before receiving any patient data. Consumer AI tools (Claude.ai, ChatGPT free/Plus) cannot be used with PHI under any circumstances — they do not offer BAAs.
-
FDA Software as a Medical Device (SaMD) rules apply when AI influences clinical decisions. The 21st Century Cures Act exempts clinical decision support software where the clinician can meaningfully review the reasoning — but AI that makes near-autonomous clinical recommendations without that transparency is regulated as SaMD, requiring 510(k) clearance or De Novo review.
-
EU AI Act high-risk classification applies to AI used in clinical diagnosis, treatment, or patient management for EU patients.
The healthcare AI governance guide covers all three frameworks, the BAA checklist for AI vendors, the FDA SaMD risk classification matrix, and the practical implementation checklist for small clinical practices and health tech startups.
HR: EU AI Act High-Risk + EEOC Disparate Impact
AI hiring tools are explicitly classified as high-risk under the EU AI Act (Annex III, Section 4). This is not a judgment call — it is a statutory classification. The compliance obligations apply to employers (deployers), not just to the AI vendors who build the tools.
Under EEOC guidance, employers are liable for discriminatory outcomes from AI hiring tools regardless of vendor claims. The standard test is the 4/5ths rule: if the AI selects a protected group at less than 80% the rate of the highest-selected group, disparate impact is presumed and the employer must justify the tool or stop using it.
Several US states have additional requirements: NYC Local Law 144 requires an independent third-party bias audit, Illinois requires disclosure before AI video interview analysis, and Colorado's SB 24-205 employment domain deadline was June 30, 2026.
The HR AI governance guide covers conformity assessment requirements for hiring AI, the EEOC 4/5ths testing methodology, candidate disclosure language templates, and the five vendor due diligence questions to ask before deploying any AI hiring tool.
Fintech: CFPB + FCRA + EU AI Act Credit Scoring
AI used in credit decisions triggers three compliance frameworks at once. The CFPB adverse action requirement under ECOA (Regulation B) is the most immediate compliance gap: when AI denies or limits a credit application, the applicant must receive specific, human-understandable reasons — "algorithmic decision" or "AI model output" are not valid reasons under CFPB guidance.
The FCRA applies when your AI model uses data from a consumer reporting agency. EU AI Act Annex III Section 5b explicitly lists AI credit scoring as high-risk.
The fintech AI governance guide covers ECOA-compliant adverse action reason codes, disparate impact testing for credit AI, the combined ECOA + FCRA adverse action notice template, and EU AI Act conformity requirements for AI credit scoring systems.
7. Roles and Responsibilities
For a small team, AI governance does not require a dedicated compliance officer. It requires defined ownership:
| Role | Responsibility | Minimum time per week |
|---|---|---|
| AI Governance Lead | Owns the AI use-case inventory, vendor review process, policy updates, and compliance monitoring | 2–4 hours |
| AI System Owner | Responsible for each specific AI deployment — monitors performance, documents incidents, manages vendor relationship | 1–2 hours per tool |
| Team members | Report suspected AI errors or data incidents; follow the acceptable use policy | Minimal (policy literacy only) |
In most small teams, the AI Governance Lead is the CTO, Head of Engineering, or a senior operations manager who already has compliance adjacent responsibilities. What matters is that the role is explicitly named — governance without a named owner defaults to no governance.
The AI governance roles and responsibilities guide provides the full RACI matrix for AI governance activities, the AI Governance Lead job description, and how to distribute responsibilities across a 5-person team versus a 50-person team.
8. Shadow AI — The Hidden Risk
Shadow AI is the use of AI tools outside the governance process: employees using personal accounts on consumer AI products with company or customer data, AI features embedded in SaaS products that activate without announcement, and AI integrations built by individual team members without security review.
Why shadow AI is the highest-probability compliance risk:
- It is invisible to the governance process by definition
- Consumer AI tools rarely offer DPAs, training opt-outs, or BAAs
- A single employee pasting customer data into an unapproved AI tool creates a potential GDPR or HIPAA violation
- Shadow AI grows when approved tools are too slow, restricted, or inaccessible — governance that creates friction accelerates it
How to detect and reduce shadow AI:
- Audit network traffic — DNS queries to known AI APIs that are not in your approved list
- Browser extension audit — many AI writing assistants are installed as browser extensions without IT visibility
- SaaS AI feature audit — review settings in every SaaS product for AI features (Notion AI, Slack AI, HubSpot AI, etc.) that may have been enabled by default
- Reduce the friction on approved tools — if the approved AI tool is harder to use than the shadow alternative, shadow AI will win
The shadow AI governance guide covers the detection methods, the SaaS AI feature audit process, and how to design an approved-tools list that actually reduces shadow adoption.
9. Data Privacy — GDPR, CCPA, and Training Opt-Out
AI tools create two categories of data privacy obligation:
1. Data processing under GDPR/CCPA: When you send personal data to an AI tool, that tool becomes a data processor. GDPR requires a Data Processing Agreement. CCPA requires that the vendor not sell or share the data. Confirm DPA status before using any AI tool with customer or employee personal data.
2. Training data opt-out: Many AI tools use your prompts to train or improve their models by default. This means your proprietary business data, client information, or internal communications may become training data for a model used by your competitors. Enterprise plans typically allow you to opt out of training; free and low-tier plans typically do not.
For detailed guidance on which AI APIs and platforms train on your data by default, and which offer genuine privacy-protective terms, see the AI data privacy for small teams guide.
For a head-to-head comparison of AI API providers specifically on privacy commitments — who offers no-training-by-default, who stores prompts, who offers EU data residency — see the privacy-first AI API guide.
10. Developer AI Tools — Code Assistants and Supply Chain
AI code assistants (GitHub Copilot, Cursor, Codeium, Tabnine) create governance risks that engineering teams often underestimate:
Source code exposure: AI code assistants work by sending code context — your variable names, architecture patterns, internal comments — to AI vendor servers for inference. On personal/free plans, this code may be used to improve vendor models. For proprietary codebases, require Business or Enterprise tier accounts.
IP and licensing risk: Code assistants trained on public repositories can reproduce GPL-licensed code in suggestions. Enabling public code duplication detection in GitHub Copilot org settings blocks verbatim and near-verbatim matches.
SOC 2 and regulated system compliance: If a developer uses a code assistant in a codebase that touches SOC 2 scope, HIPAA-regulated data, or PCI CDE, the code assistant vendor becomes part of your compliance supply chain. Add it to your vendor register.
The GitHub Copilot and AI code assistant governance guide covers the org-level settings to configure, the acceptable use policy language for engineering teams, and the supply chain implications for regulated codebases.
For SOC 2 specifically, the AI tools and SOC 2 compliance guide maps which AI tool categories affect which SOC 2 trust service criteria and what evidence auditors will request.
11. Red Teaming and Incident Response
Pre-deployment adversarial testing — red teaming — is the process of deliberately probing an AI system for failure modes before it goes into production. For high-risk AI deployments, red teaming is not optional under EU AI Act Article 9 (risk management system).
What red teaming covers for business AI:
- Prompt injection attacks (malicious inputs that override system instructions)
- Data extraction attempts (prompts designed to recover training data or system prompts)
- Bias and fairness probing (adversarial inputs targeting demographic edge cases)
- Output reliability under edge cases (unusual inputs that cause confident incorrect outputs)
The red teaming AI systems governance guide provides a structured red team methodology for small teams, specific attack patterns to test for common AI use cases (customer support, document processing, code generation), and a documentation template for EU AI Act risk management compliance.
When a vendor incident occurs — a breach, an unexpected model behavior change, a supply chain compromise — having a response plan in place before the incident reduces exposure time. The AI vendor security incident response guide provides a 72-hour response checklist and the notification chain for vendor-side incidents affecting your deployments.
12. Monitoring and Ongoing Compliance
AI governance is not a one-time implementation. AI models drift, vendor terms change, regulatory requirements evolve, and team AI usage patterns shift. Ongoing monitoring requires:
Monthly review cadence:
- Review any AI-generated decisions that were escalated or reversed
- Check vendor changelog for model updates or policy changes
- Confirm no new shadow AI tools have appeared in use
- Update the AI tool register with any new deployments
Quarterly:
- Re-run risk assessment for high-risk AI deployments
- Check for regulatory updates (EU AI Act enforcement guidance, CFPB circulars, EEOC guidance)
- Review vendor DPA and BAA status for renewals or changes
Annual:
- Disparate impact testing for any AI used in employment or credit decisions
- Full vendor due diligence refresh for top-5 AI tools
- Policy review and update
The monthly AI governance review checklist provides the specific agenda, 15-minute structure, and decision log template for the monthly review meeting.
The AI monitoring tools guide covers the lightweight tooling options for small teams to track AI output quality, detect performance drift, and surface anomalies without building a full MLOps infrastructure.
Complete AI Governance Checklist
Immediate (Week 1)
- Complete AI use-case inventory — list every AI tool in use by any team member
- Identify data classification: which tools touch PII, PHI, customer data, or financial data
- Draft AI acceptable use policy using the template
- Assign an AI Governance Lead
- Identify top 3–5 highest-risk AI deployments
Within 30 Days
- Run vendor due diligence on all AI tools handling sensitive data
- Confirm DPA status for GDPR-covered vendors
- Confirm BAA status for any AI vendor touching PHI (healthcare teams)
- Verify training opt-out settings for all AI tools with access to proprietary or customer data
- Run AI risk assessment for highest-risk deployments
- Set up monthly governance review cadence
If Deploying High-Risk AI (EU AI Act Scope)
- Classify each AI tool against EU AI Act Annex III high-risk categories
- Obtain vendor EU Declaration of Conformity for any high-risk AI tools
- Implement human oversight mechanism (ability to override AI decisions)
- Add individual notice language to any AI decision-making workflows
- Document bias testing methodology and results
- Register high-risk AI systems in EU AI database before EU deployment
Sector-Specific
Healthcare:
- Sign HIPAA BAA with every AI vendor that processes PHI
- Classify clinical AI tools against FDA SaMD criteria
- Remove PHI from AI prompts or use minimum necessary standard
HR/Hiring:
- Run 4/5ths disparate impact test on applicant pool
- Add candidate disclosure language to job postings and application forms
- Obtain vendor bias audit methodology and results
Fintech:
- Verify AI credit models produce ECOA-compliant specific reason codes
- Confirm adverse action notices include all ECOA + FCRA elements
- Test for disparate impact across ECOA protected classes
Ongoing
- Monthly: Review escalated AI decisions, check vendor changelogs, confirm no new shadow AI
- Quarterly: Re-run risk assessment, check regulatory updates, verify DPA/BAA renewals
- Annually: Disparate impact testing, full vendor refresh, policy review
Where to Start
If you're starting from zero, the order of operations is:
- AI use-case inventory and framework — understand what you have before building governance around it
- AI acceptable use policy template — ship a policy baseline this week
- AI risk assessment — score your existing deployments and identify which need deeper controls
- AI vendor due diligence checklist — run a 30-minute review before deploying any new tool
- Sector-specific guide for your industry if applicable
The AI governance checklist consolidates all the above into a single copy-paste checklist formatted for Notion or Linear.
References
- EU AI Act — Annex III: High-risk AI system categories
- NIST AI Risk Management Framework (AI RMF 1.0) — risk tiering and controls
- CFPB Circular 2023-03 — Adverse action notification requirements
- EEOC Technical Assistance: AI in employment selection procedures
- HHS HIPAA Business Associate guidance
- Related: How to Build an AI Governance Framework — the 5-component structure and implementation sequence
- Related: AI Risk Assessment for Small Teams — risk scoring matrix and risk register template
- Related: AI Vendor Due Diligence Checklist — 30-question vendor review framework
- Related: Healthcare AI Governance — HIPAA, FDA SaMD, and EU AI Act for clinical teams
- Related: HR AI Governance — EU AI Act Annex III employment, EEOC 4/5ths testing
- Related: Fintech AI Governance — CFPB adverse action requirements, FCRA, EU AI Act credit scoring
