AI governance is not one policy document. It is six interconnected areas, each with specific controls that either exist or don't. This checklist covers all six. It is designed to be completed in a half-day sprint by one person — no compliance department required.
Use it to establish baseline governance, then run the review sections monthly.
How to Use This Checklist
- Initial setup (one-time): Work through Sections 1-5 in order. Each section takes 20-60 minutes. Total: half a day.
- Ongoing (monthly): Run Section 6 (Review Process) as a 30-minute monthly check.
- Format: Copy this into Notion, Linear, or a shared doc. Check each item. Record the date and owner.
Designate an AI lead before you start. This is the person who owns the checklist, runs the monthly reviews, and is the escalation point for AI-related issues. It does not need to be a dedicated role — a senior engineer, operations manager, or team lead is sufficient.
Section 1: AI Policy Documentation
AI Acceptable Use Policy
| Item | Done | Owner | Date |
|---|---|---|---|
| Written AI AUP exists (not just a verbal policy) | |||
| AUP specifies which AI tools are approved | |||
| AUP lists prohibited use cases (e.g., no PHI in ChatGPT) | |||
| AUP defines prohibited data categories per tool tier | |||
| AUP covers output review requirements | |||
| AUP specifies incident reporting process | |||
| AUP has been reviewed by legal or a qualified advisor | |||
| AUP version number and effective date are recorded | |||
| All employees have acknowledged the AUP (signature or click-through) | |||
AI Tools Inventory
| Item | Done | Owner | Date |
|---|---|---|---|
| Approved AI tools list exists and is current | |||
| Each tool has a risk tier assigned (Low / Medium / High) | |||
| Each tool has a designated owner | |||
| The list was last reviewed within 90 days | |||
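The inventory above can also be kept as structured data so the 90-day review check runs itself. A minimal sketch, assuming an inventory of dicts with `name`, `tier`, `owner`, and `last_review` fields (the field names and example tools are illustrative, not prescriptive):

```python
from datetime import date, timedelta

# Hypothetical inventory entries; tools, owners, and dates are examples only.
tools = [
    {"name": "ChatGPT Team", "tier": "Medium", "owner": "ops-lead", "last_review": date(2024, 1, 15)},
    {"name": "GitHub Copilot", "tier": "High", "owner": "eng-lead", "last_review": date(2024, 5, 2)},
]

def stale_tools(inventory, today, max_age_days=90):
    """Return names of tools whose last review is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return [t["name"] for t in inventory if t["last_review"] < cutoff]

print(stale_tools(tools, today=date(2024, 6, 1)))  # → ['ChatGPT Team']
```

A spreadsheet with a date-diff formula does the same job; the point is that "last reviewed within 90 days" should be computed, not remembered.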
Section 2: Vendor Risk Management
For each AI tool on your approved list, complete this assessment. High-risk tools (those receiving personal data or regulated data) require all items. Low-risk tools require only the first two.
Per-Tool Vendor Assessment
| Item | Done | Notes |
|---|---|---|
| Tool identified as Tier 1 / Tier 2 / Tier 3 (Low / Medium / High) | ||
| Vendor's SOC 2 Type II report reviewed (or noted as unavailable) | ||
| Data Processing Agreement (DPA) signed — required for Tier 2/3 | ||
| Subprocessor list reviewed and documented | ||
| Data retention and deletion terms confirmed | ||
| Training opt-out status confirmed (does vendor train on your data?) | ||
| Vendor changelog or status page bookmarked for monthly monitoring | ||
Vendor Risk Register
| Item | Done | Owner | Date |
|---|---|---|---|
| Vendor risk register exists (spreadsheet, Notion, or GRC tool) | |||
| All Tier 2 and Tier 3 tools have a vendor record | |||
| DPA expiry dates are tracked | |||
| Annual vendor review is scheduled | |||
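"DPA expiry dates are tracked" is easiest to honor if the register can flag upcoming expirations. A sketch under the assumption that each vendor record carries a `dpa_expiry` date (vendor names and dates below are hypothetical):

```python
from datetime import date, timedelta

# Hypothetical vendor records; adapt field names to your own register.
vendors = [
    {"vendor": "OpenAI", "tier": 3, "dpa_expiry": date(2024, 7, 1)},
    {"vendor": "Anthropic", "tier": 2, "dpa_expiry": date(2025, 1, 10)},
]

def dpas_expiring(register, today, within_days=60):
    """Flag vendors whose DPA expires within the window (or has already expired)."""
    horizon = today + timedelta(days=within_days)
    return [v["vendor"] for v in register if v["dpa_expiry"] <= horizon]
```

Run this as part of the monthly review in Section 6 so a lapsed DPA surfaces before the annual vendor review does.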
Section 3: Access and Data Controls
Who Can Use What
| Item | Done | Owner | Date |
|---|---|---|---|
| Approved tools are provisioned via SSO where available | |||
| Access to Tier 3 AI tools is restricted to approved roles | |||
| Offboarding process removes AI tool access | |||
| Shared login credentials for AI tools have been eliminated | |||
What Data Goes Where
| Item | Done | Owner | Date |
|---|---|---|---|
| Data classification scheme exists (e.g., Public / Internal / Confidential / Restricted) | |||
| Each classification has a clear rule for AI tool use (e.g., Confidential data: no AI tools without DPA) | |||
| Rules for handling customer personal data in AI tools are documented | |||
| PHI handling rules are documented (if the team operates in healthcare) | |||
| Rules for using AI coding tools on source code are documented (if applicable) | |||
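The per-classification rules work best when they are unambiguous enough to encode as a lookup. A minimal sketch of classification rules as data; the mapping below is an example policy, not a recommendation, where each classification names the highest tool risk tier (1 = Low, 3 = High) it may be sent to, and `None` means "no AI tools at all":

```python
# Example policy mapping; the specific tier assignments are assumptions.
ALLOWED_MAX_TIER = {
    "Public": 3,
    "Internal": 2,
    "Confidential": 1,   # e.g., only low-risk tools with a signed DPA
    "Restricted": None,  # never goes to an AI tool
}

def is_allowed(classification: str, tool_tier: int) -> bool:
    """Check whether data of this classification may go to a tool of this tier.

    Unknown classifications fail closed (return False).
    """
    max_tier = ALLOWED_MAX_TIER.get(classification)
    return max_tier is not None and tool_tier <= max_tier
```

Even if nobody ever runs this code, being able to write the rules this crisply is a good test of whether the classification scheme is actually usable.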
Section 4: Incident Response
AI-Specific Incident Scenarios
| Item | Done | Owner | Date |
|---|---|---|---|
| AI incident definition exists (what counts as an AI-related incident) | |||
| Incident register or log is in place | |||
| Process for PHI/PII accidental submission to AI tool is documented | |||
| Process for AI vendor security breach notification is documented | |||
| Process for AI output causing harm to a customer is documented | |||
| GDPR/CCPA breach notification process covers AI-triggered incidents | |||
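The incident register does not need tooling; an append-only log with a handful of consistent fields is enough. A sketch using JSON Lines, where the field names are an assumption to adapt to your own register:

```python
import json
from datetime import datetime, timezone

def log_incident(path, summary, severity, tool):
    """Append one AI incident record to a JSON Lines log file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "summary": summary,
        "severity": severity,  # e.g., "low" / "medium" / "high"
        "tool": tool,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Whatever format you choose, the register should make the monthly "review incident log" step in Section 6 a file scan rather than a memory exercise.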
Section 5: Employee Training
| Item | Done | Owner | Date |
|---|---|---|---|
| AI governance training exists for new hires (onboarding) | |||
| Training covers: approved tools, prohibited data, how to report incidents | |||
| Training completion is tracked and recorded | |||
| Annual refresher training is scheduled | |||
| Training content is updated when the AUP is updated | |||
Section 6: Ongoing Review Process (Monthly)
Run this section once per month. Takes 30 minutes. The AI lead is the owner.
| Step | Action | Done | Date |
|---|---|---|---|
| Shadow AI scan | Post 2-3 questions in Slack or team chat: any new AI tools? New AI features in existing tools? | ||
| Shadow AI scan | Review SSO logs for new OAuth connections to AI services | ||
| Shadow AI scan | Check expense tool for new SaaS subscriptions | ||
| Incident review | Review incident log for any AI-related events since last check | ||
| Vendor changelogs | Check terms/privacy/subprocessors for each Tier 2-3 tool | ||
| Tools list | Update approved tools list to reflect current usage | ||
| Policy review | Read first page of AUP: still accurate? Any material changes needed? | ||
| Record | Log: date, who ran review, findings, changes made | ||
| Schedule next review | Set the next review date | ||
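The SSO-log step of the shadow AI scan can be partly scripted if your identity provider exports OAuth grants. A hedged sketch, assuming grants export as `(app_name, domain, granted_at)` tuples; the `AI_DOMAINS` set is a starting point you would maintain yourself, not a complete list:

```python
# Illustrative set of AI-service domains; extend to match your approved and
# known-shadow tools.
AI_DOMAINS = {"openai.com", "anthropic.com", "perplexity.ai"}

def flag_ai_grants(grants):
    """Return OAuth grants whose domain matches a known AI service."""
    return [g for g in grants if g[1] in AI_DOMAINS]

grants = [
    ("ChatGPT", "openai.com", "2024-06-03"),
    ("Figma", "figma.com", "2024-06-04"),
]
print(flag_ai_grants(grants))  # → [('ChatGPT', 'openai.com', '2024-06-03')]
```

Anything this flags that is not on the approved tools list goes into the findings for that month's review record.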
Triggers for an out-of-cycle review: AI vendor security incident, new regulation taking effect, team adopting AI for a new high-risk use case (hiring, healthcare, finance), or suspected policy violation.
Baseline vs. Full Governance
Not every team needs every item from day one. Here is a sensible prioritization:
Minimum baseline (do this first):
- Written AI AUP with approved tools list
- DPA signed with any tool that processes personal data
- Incident reporting process documented
- All employees trained on the AUP
Full governance (add over 3-6 months):
- Complete vendor risk register
- Data classification + per-tier AI rules
- Access controls via SSO
- Monthly review cadence running
Advanced (for regulated industries or enterprise sales):
- Formal audit trail of monthly reviews
- Third-party AI vendor assessments
- Integration into SOC 2 or ISO 27001 program
Ready to start with the policy layer? Use the Policy Generator to create an AI acceptable use policy tailored to your team's context. Use the AI Risk Assessment to assign risk tiers to your current AI use cases before completing Section 2.
