A small-team AI policy baseline requires five documents: a use-case inventory, an acceptable use policy, a vendor checklist, an incident response note, and a monthly review cadence. You can ship all five in under a week without a compliance team. This guide gives you the sequence and the templates.
At a glance: Build in order — inventory first (so the policy covers your actual tools, not hypothetical ones), acceptable use policy second (one page, specific to your data categories), vendor checklist third (pass/fail before every new tool), incident response note fourth, monthly review slot fifth. Skip the inventory and your policy will miss the highest-risk tools. The whole baseline takes under a week for a team of 5–50.
Who this is for
- 5–50 people shipping software, services, or operations work with LLM assistants
- No dedicated compliance hire, but real customer data and real reputational risk
- Leaders who want audit-friendly evidence without enterprise programme theatre
The kit (what you will ship)
- AI use-case inventory — a living list of tools, owners, and data classes touched
- Acceptable use policy — one page your team can actually skim
- Vendor pass/fail checklist — used before every new subscription
- Incident response note — who to page and what to freeze when something breaks
- Monthly review slot — fifteen minutes, same calendar invite, no exceptions
If you only do three items this month, do inventory, acceptable use, and monthly review.
Order of operations (do not skip steps)
Step 1 — Run a fourteen-day inventory sprint
Shadow AI appears when people optimise for speed. Your job is not to ban tools; it is to make usage legible.
- Post a short internal form: tool name, use case, data types, paying customer or not
- Merge duplicates and assign a business owner per tool
- Flag anything touching health, credit, HR, or children for a follow-up risk pass
Use the AI usage audit workflow when you are ready to formalise the rhythm.
What a healthy inventory looks like after two weeks:
- 8–25 distinct tools (most small teams are surprised by how many exist)
- Each tool has one named owner — not a team
- At least one tool flagged for a vendor review because it touches sensitive data
- No tool listed as "owner: everyone" (that means no one owns it)
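If the inventory lives in a spreadsheet, the fields above are the minimum. Here is a minimal sketch in Python of what one entry and the Step 1 risk flag could look like; every name is illustrative rather than prescribed:

```python
from dataclasses import dataclass, field

# Minimum fields for one inventory entry. All names are illustrative;
# match them to whatever your intake form actually collects.
@dataclass
class InventoryEntry:
    tool: str                  # e.g. "ChatGPT Team"
    use_case: str              # what the team actually does with it
    owner: str                 # one named person, never "everyone" or a team
    data_classes: list[str] = field(default_factory=list)

def needs_risk_pass(entry: InventoryEntry) -> bool:
    """Step 1 flag: health, credit, HR, or children data triggers a follow-up."""
    sensitive = {"health", "credit", "HR", "children"}
    return bool(sensitive & set(entry.data_classes))
```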
Step 2 — Draft the policy around data — not hype
Policies fail when they read like marketing copy. Anchor yours in data classes and decision rights instead.
Cover, in plain language:
- Approved vs trial tools and who can spin up a trial
- Never-paste rules (customer PII, trade secrets, regulated datasets, credentials)
- Human review thresholds — especially for customer-facing output
- Retention: may conversation logs be stored, and if so, where?
- Reporting: what counts as an incident and who to tell
Start from the AI acceptable use policy template and tailor names, tools, and geography in under an hour.
Common policy writing mistakes:
- Too long: if the policy exceeds two pages, most employees will not read it. Write the full version, then create a one-page summary.
- Too vague: "exercise caution" is not enforceable. "Customer PII may not be pasted into AI tools without a signed DPA" is.
- No effective date or version: auditors need to see when the policy was last reviewed. Add a version line and a review date.
- No named policy owner: Who approves exceptions? Who updates the policy when a new regulation drops? Name them.
Step 3 — Vendor due diligence before you standardise a tool
Once a tool wins internal adoption, it becomes expensive to rip out. Run a 30-minute diligence pass before you declare it part of the approved stack.
The goal is not a perfect security review — it is documenting that you asked the obvious questions: data processing terms, training opt-out, subprocessors, and exit. Use the vendor evaluation checklist verbatim, then store the completed file next to the subscription invoice.
Four questions every vendor evaluation must answer:
- Does this vendor train on our data by default, and is there an opt-out?
- Where is data processed and stored? Does this create GDPR or data residency issues?
- Can we get a signed DPA, and does it name all subprocessors?
- How do we export our data if we stop using the tool?
If a vendor cannot answer these, treat them as unapproved until they can.
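To make the pass/fail gate mechanical, record one boolean per question. A minimal sketch, with hypothetical key names; the questions mirror the four above:

```python
# Pass/fail gate over the four vendor questions above. A single missing
# or failing answer leaves the vendor unapproved.
VENDOR_QUESTIONS = [
    "training_opt_out",   # vendor does not train on our data, or offers an opt-out
    "data_residency_ok",  # processing and storage locations raise no GDPR issues
    "signed_dpa",         # signed DPA that names all subprocessors
    "data_export_path",   # documented way to export our data on exit
]

def vendor_approved(answers: dict[str, bool]) -> bool:
    # "Cannot answer" counts as a failure, per the rule above.
    return all(answers.get(q, False) for q in VENDOR_QUESTIONS)

# Example: a vendor with no signed DPA stays unapproved.
print(vendor_approved({
    "training_opt_out": True,
    "data_residency_ok": True,
    "signed_dpa": False,
    "data_export_path": True,
}))  # False
```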
Step 4 — Publish a one-page incident note
Incidents are when ambiguous policies become lawsuits or front-page stories. You need a single-paragraph chain of command plus links to your insurance broker and counsel.
If you do not have a bespoke playbook yet, clone the AI incident response playbook and swap in names. The minimum an incident note needs to cover:
- What constitutes an incident (data exposure, bad output shipped, policy violation)
- Who to contact first (named individual, not a team inbox)
- What to freeze immediately while assessing (tool access, data exports)
- When to notify customers or regulators (GDPR 72-hour window, etc.)
Test the note within 30 days with a tabletop exercise: walk one hypothetical scenario (e.g. "a contractor pasted customer emails into a free AI tool") through the entire chain. Find the gaps before an incident does.
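If you keep an incident log alongside the note, a structured record makes the 72-hour window computable. A minimal sketch with illustrative field names that mirror the bullets above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# One entry in the incident log. Field names are illustrative.
@dataclass
class IncidentRecord:
    what_happened: str   # data exposure, bad output shipped, policy violation
    first_contact: str   # the named individual who was paged
    frozen: list[str]    # what was frozen while assessing, e.g. ["tool access"]
    opened: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def gdpr_notice_deadline(self) -> datetime:
        """End of the 72-hour regulator window, counted from when the record opened."""
        return self.opened + timedelta(hours=72)
```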
Step 5 — Calendar the operating rhythm before you declare victory
Governance decays without a heartbeat. Minimum viable cadence:
- Weekly (async): new tool proposals land in a dedicated channel with checklist status
- Monthly (15 minutes): owner reviews inventory deltas + incidents/near misses
- Quarterly (60 minutes): rerun the governance checklist and refresh templates
The AI governance checklist (2026) is the agenda for the quarterly session.
Making the rollout stick
Publishing the policy is the start, not the finish. The most effective small-team rollout sequence looks like this:
Communication: Send a 3-paragraph email from the CEO or founder explaining why the policy exists (not what it contains). Attach the policy. Schedule a 15-minute Q&A within a week.
Acknowledgment: Have every employee confirm they have read the policy — a Slack reaction, a form response, or a checkbox in your HR tool. This creates audit evidence and raises the probability that people actually skim it.
Onboarding integration: Add the policy acknowledgment to the new hire checklist. Every person who joins after you publish should encounter it in their first week.
Exception handling: Create a fast path for low-risk experiments (Slack message to the policy owner, decision in 24 hours). Slow exception processes get bypassed.
Industry-specific additions
Most teams can ship the baseline kit without customisation. If you operate in a regulated sector, add these layers:
Healthcare / Life Sciences:
- BAA (Business Associate Agreement) required for any AI tool that processes PHI
- AI outputs used in clinical decisions require explicit human review documentation
- Check HIPAA Safe Harbor requirements for de-identification if using patient data for training
Financial services:
- Model risk management requirements may apply to AI used in credit, fraud, or investment decisions
- Document model version, training data provenance, and validation approach for each model in scope
- FINRA and SEC guidance requires records retention for AI-generated customer communications
Legal / Professional services:
- Client privilege obligations restrict what can be shared with external AI systems
- Bar association guidance varies by jurisdiction — check before using AI for client work
- Document retention policies must extend to AI conversation logs for privileged matters
Policy versioning and maintenance
A policy that is never updated creates false confidence. Build a version control habit:
- Version number (1.0, 1.1, 2.0) — increment minor for editorial changes, major for structural additions
- Review date — who reviewed it and when
- Change log — one-line summary of what changed from the previous version
- Archive — keep previous versions accessible (you may need to demonstrate what policy was in place at the time of an incident)
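The version log is also data you can query. A minimal sketch, assuming one record per release; all versions, dates, and names are hypothetical:

```python
from datetime import date

# One record per policy release: (version, effective date, reviewer, summary).
POLICY_VERSIONS = [
    ("1.0", date(2025, 1, 6), "A. Owner", "Initial policy"),
    ("1.1", date(2025, 4, 14), "A. Owner", "Added never-paste rule for credentials"),
    ("2.0", date(2025, 9, 1), "B. Owner", "Restructured around data classes"),
]

def policy_in_force(on: date) -> str | None:
    """Answer the audit question: which version was effective on a given date?"""
    current = None
    for version, effective, _reviewer, _summary in POLICY_VERSIONS:
        if effective <= on:
            current = version
    return current

print(policy_in_force(date(2025, 6, 1)))  # -> 1.1
```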
When a major regulation changes (EU AI Act obligations phase in, GDPR enforcement action creates new precedent), schedule an immediate unplanned review. Do not wait for the quarterly cycle.
How this connects to regulation
Teams operating globally should assume they will need evidence of proportionate controls, not perfection. If you are mapping EU AI Act obligations, pair this starter kit with how to build a governance framework and the EU-focused posts in the Governance category, then escalate edge cases to counsel.
The five-artefact kit provides baseline evidence toward:
- GDPR — demonstrates you have a data processing policy, vendor DPAs, and incident response procedures
- EU AI Act (deployer obligations) — demonstrates proportionate risk assessment and human oversight procedures
- NIST AI RMF — addresses the Govern function's core outcomes for policy and accountability
Next actions
- Schedule the inventory sprint owner and due date today
- Fork the acceptable-use template and circulate it, marked "draft", for 48 hours of comments
- Subscribe to the newsletter if you want the monthly checklist refresh — we ship one actionable asset per issue
When you outgrow spreadsheets, re-read AI monitoring tools for small teams before you buy observability you will not staff.
Policy anti-patterns that signal compliance theatre
Governance programmes that generate paperwork without reducing risk are called compliance theatre. The most common signs in AI policy programmes:
The policy is longer than two pages for a team under 50. Long policies signal that the drafters were covering hypothetical scenarios rather than known risks. Every clause should be traceable to a specific tool, data class, or incident scenario.
The policy was last reviewed before the team adopted its most-used AI tool. A policy that doesn't mention ChatGPT, GitHub Copilot, or whatever your team actually uses is a signal that governance hasn't kept pace with adoption.
Exceptions are never granted. A policy with no documented exceptions is either not being followed (people make informal decisions that go unrecorded) or so restrictive that it has driven usage underground.
The policy owner doesn't know who to call for an incident. This test takes 30 seconds: ask the policy owner to name the first two people they would contact if a data exposure occurred involving an AI tool. If they hesitate, the incident response chain is not practiced.
No one can explain the "why" behind a rule. Employees who cannot connect a policy rule to a reason are more likely to ignore it when it is inconvenient. Every rule in the policy should have an implicit or explicit explanation of the risk it prevents.
Using the policy for enterprise sales
AI governance documentation increasingly appears in enterprise security questionnaires. The five-artefact kit directly answers the most common questions:
| Customer questionnaire question | Which artefact answers it |
|---|---|
| Do you have an AI usage policy? | Acceptable use policy |
| What AI vendors do you use and how were they evaluated? | Vendor evaluation records |
| What data do your AI tools process? | Use-case inventory (data class column) |
| How do you handle AI incidents? | Incident response note |
| When was your policy last reviewed? | Policy version and review date |
Having a dated, version-controlled policy document that can be shared in under 10 minutes is a tangible differentiator in deals where the competitor says "we're working on it."
Scaling the kit as the team grows
The five-artefact baseline works for teams up to about 50 people. At larger scales, specific components need to grow:
50–150 people: The acceptable use policy splits into a short employee-facing policy and a longer technical appendix. The vendor evaluation process becomes more formal with multiple reviewers. The incident response note becomes a full playbook with named backups for each role.
150+ people: A dedicated governance function (even a part-time hire) becomes necessary. The use-case inventory migrates from a spreadsheet to a system of record. The quarterly review becomes a standing committee.
For teams under 50, resist the temptation to build for scale you haven't reached. The most common governance failure at the small-team stage is over-engineering the system and then not maintaining it.
What to do when a policy is being ignored
A policy that exists on paper but is not followed is worse than no policy — it creates the illusion of governance without the protection. Signs your AI policy is being ignored:
- New tools appear in the inventory without a request or approval step
- The incident log has been empty for 90 days despite active AI usage
- Employees do not know where the policy is stored or who owns it
- The last acknowledged review was over a year ago
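Several of these signs can be checked mechanically. A minimal sketch, assuming the inventory exports to CSV with `tool`, `owner`, and `approved_on` columns (hypothetical names):

```python
import csv

# Flag inventory rows that suggest the policy is being bypassed:
# tools with no recorded approval, or with no named owner.
def governance_flags(inventory_csv: str) -> list[str]:
    flags = []
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            if not (row.get("approved_on") or "").strip():
                flags.append(f"{row['tool']}: no approval recorded")
            if (row.get("owner") or "").strip().lower() in ("", "everyone"):
                flags.append(f"{row['tool']}: no named owner")
    return flags

# Run this at the monthly review and paste the output into the notes.
for line in governance_flags("ai_inventory.csv"):
    print(line)
```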
Root causes and responses:
The policy is too long or too technical. Create a one-page summary with the five most important rules, formatted for quick reference. Pin it in Slack and link it from the onboarding checklist.
The approval process is too slow. If getting tool approval takes more than 48 hours, people will use tools before approval. Create a fast-path tier for low-risk tools with a 24-hour turnaround.
Nobody believes there are consequences. If a policy violation has never been documented or addressed, employees learn that the policy is not enforced. When you find a violation, document it and address it proportionately — even if the response is just a 5-minute conversation and a note in the incident log.
The policy owner has changed without an explicit handover. Check: who does your team think owns the AI policy right now? If there is uncertainty or disagreement, run an explicit ownership handover and communicate it.
The best time to address policy non-compliance is before an incident forces the issue. A policy that is occasionally discussed and updated feels like a living document; one that is ignored until something goes wrong feels like a compliance exercise.
Governance for AI-generated content specifically
AI-generated content — blog posts, emails, proposals, reports — creates a distinct set of governance concerns that many teams initially miss:
Attribution and disclosure: Some contracts, regulations, and platform terms require disclosure of AI-generated or AI-assisted content. The policy should specify when disclosure is required and what form it takes.
Accuracy liability: Unlike human-authored content, AI-generated content may contain confident-sounding errors. The policy should require human review before any AI-drafted content is published or sent to external parties.
Training data and copyright: Content generated by AI tools may inadvertently reproduce copyrighted material. The policy should note that AI-generated content requires human review for obvious reproduction before publication.
Brand and voice consistency: AI tools may produce content that does not match your brand standards. This is a quality issue, not a compliance issue, but it belongs in the acceptable use policy alongside the compliance requirements.
Key Takeaways
- Build five artefacts in strict order: inventory → policy → vendor checklist → incident note → review cadence
- The policy must name specific tools, data categories, and a single owner — vague principles are not enforceable
- Run a 30-minute vendor evaluation before standardising any tool; store the completed checklist
- Test your incident note with a tabletop exercise within 30 days of publishing
- Add the policy acknowledgment to your new-hire onboarding checklist — governance only works if people encounter it
References
- National Institute of Standards and Technology — AI Risk Management Framework (AI RMF 1.0)
- European Parliament and Council — EU AI Act
- OECD — OECD AI Principles
- ICO — Accountability and governance for AI
- NIST — Cybersecurity Framework (adapted for AI policy structure)
