Most small teams set up an AI policy once and revisit it only when something goes wrong. The problem is that AI governance has a short shelf life: vendors change their terms, employees adopt new tools, and regulations evolve. A policy written six months ago may no longer reflect what your team actually does.
A monthly 30-minute review fixes this without requiring a compliance department. Here is a repeatable process that keeps your AI governance current and audit-ready.
Why Monthly (Not Annual)
Annual policy reviews made sense when the software your team used changed slowly. AI tool adoption moves faster. In 2026:
- Major AI vendors update their terms of service multiple times per year
- New AI tools spread through teams via free trials, browser extensions, and word of mouth
- Regulations like the EU AI Act are adding new obligations on a rolling compliance calendar
- AI vendor security incidents happen frequently enough that quarterly vendor checks are not sufficient
The EU AI Act specifically requires providers of high-risk AI systems to implement post-market monitoring as a continuous process. For lower-risk internal use, monthly checks provide a reasonable approximation of ongoing oversight.
Monthly reviews also create an audit trail. A log of 12 monthly reviews is more convincing evidence of active governance than one annual review document.
The 5-Step Monthly Process
Set a recurring calendar event for 30 minutes on the same day each month — the first Monday, the last Friday, whatever is consistent. The AI lead runs all five steps.
Step 1: Shadow AI Scan (10 minutes)
Post a 2-3 question message in your team Slack or Notion:
"Monthly AI check: Is anyone using an AI tool for work that isn't on our approved list? Any new AI features in existing tools you've noticed? Reply here or DM me."
Also check:
- SSO logs for new OAuth connections to AI services (ChatGPT, Claude, Perplexity, etc.)
- Browser extension policies, if you manage devices via MDM — AI extensions are a common shadow AI vector
- Any new SaaS subscriptions created this month (ask finance or check your expense tool)
Any new tool identified goes on the list for risk assessment before next month's review.
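If your identity provider can export OAuth grant events to CSV, part of the shadow AI scan can be scripted. A minimal sketch, assuming a hypothetical export with an `app_name` column and a hand-maintained approved set (both names are illustrative, not a real IdP schema):

```python
import csv
import io

# Hypothetical approved list -- replace with the tools on your actual register.
APPROVED = {"ChatGPT Team", "Claude for Work"}

def flag_new_ai_apps(sso_export_csv: str) -> list[str]:
    """Return apps in the SSO export that are not on the approved list."""
    reader = csv.DictReader(io.StringIO(sso_export_csv))
    return sorted({row["app_name"] for row in reader} - APPROVED)

# Example: a fake export covering this month's OAuth grants.
export = """app_name,first_seen
ChatGPT Team,2026-01-04
Perplexity,2026-01-12
Claude for Work,2026-01-15
"""

print(flag_new_ai_apps(export))  # ['Perplexity'] -> needs a risk assessment
```

Anything the script flags goes straight onto the risk-assessment list, the same as a tool reported in Slack.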
Step 2: Incident Log Review (5 minutes)
Check your incident log or help desk for anything AI-related since last month:
- Did any employee accidentally submit restricted data to an AI tool?
- Did any AI-generated output get used without human review?
- Did any client raise a concern about AI use in your work product?
- Did any AI vendor notify you of a security incident?
If any of these occurred, document them in your AI incident register and determine whether your policy needs to be updated to prevent recurrence.
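The incident register itself can stay lightweight. A sketch of one possible record shape — the field names are our own choosing, not a standard schema:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIIncident:
    # Illustrative fields: capture what happened, where, and whether the
    # policy needs to change to prevent recurrence.
    occurred: date
    category: str          # e.g. "restricted-data-submitted", "unreviewed-output"
    tool: str
    summary: str
    policy_change_needed: bool

register: list[dict] = []

def log_incident(incident: AIIncident) -> None:
    register.append(asdict(incident))

log_incident(AIIncident(
    occurred=date(2026, 1, 9),
    category="restricted-data-submitted",
    tool="ChatGPT",
    summary="Draft contract pasted into a personal ChatGPT account.",
    policy_change_needed=True,
))

print(len(register))  # 1
```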
Step 3: Vendor Changelog Check (5 minutes)
For each Tier 2 and Tier 3 AI tool on your approved list, check:
- Did the vendor update their terms of service, privacy policy, or subprocessor list this month?
- Did the vendor announce any new features that change what data is processed?
- Did the vendor publish a security incident report?
Most AI vendors maintain a changelog or status page. OpenAI, Anthropic, Google, and Microsoft all publish subprocessor list updates and policy changes on their legal pages. Bookmark these and spend 60 seconds per vendor. If something changed materially — new subprocessors, changed data retention defaults — update your vendor risk record.
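One way to make the 60-second-per-vendor check mechanical is to store a fingerprint of each bookmarked legal page and compare it on the next review. A sketch operating on stored page text only — fetching is deliberately left out, and any HTTP client would do:

```python
import hashlib

def page_fingerprint(page_text: str) -> str:
    """Stable fingerprint of a vendor's terms or subprocessor page."""
    return hashlib.sha256(page_text.encode("utf-8")).hexdigest()

# In practice, last_seen would be loaded from last month's review record.
last_seen = {"vendor-terms": page_fingerprint("Terms of Service v3.1 ...")}

def changed_since_last_review(key: str, current_text: str) -> bool:
    return page_fingerprint(current_text) != last_seen.get(key)

print(changed_since_last_review("vendor-terms", "Terms of Service v3.2 ..."))  # True -> read the diff
```

A changed fingerprint only tells you *something* moved; you still read the page to decide whether the change is material.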
Step 4: Approved Tools List Update (5 minutes)
Update your AI tools register:
- Add any newly approved tools (with risk tier, DPA status, date approved)
- Remove any tools that are no longer in active use
- Update DPA status for any tools where you recently signed or renewed a DPA
- Flag any tools pending security review
The approved tools list is living documentation. It should reflect what your team is actually using today, not what you thought they were using six months ago.
Step 5: Policy Version Review (5 minutes)
Read the first page of your AI acceptable use policy. Ask:
- Does it still accurately describe what tools your team uses?
- Are the prohibited data categories still correct?
- Has anything in your business changed that affects AI risk (new product area, new customer segment, new regulation)?
If a material change is needed, schedule a policy update. If only minor updates are needed (such as adding a newly approved tool to the approved list), update the policy now, increment the version number, and update the effective date.
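Keeping the version number and effective date in sync is easy to fumble by hand. A tiny helper, assuming a simple "major.minor" version string (your numbering scheme may differ):

```python
from datetime import date

def bump_policy_version(version: str, today: date) -> tuple[str, str]:
    """Minor-bump a 'major.minor' version and stamp a new effective date."""
    major, minor = version.split(".")
    return f"{major}.{int(minor) + 1}", today.isoformat()

print(bump_policy_version("1.3", date(2026, 2, 2)))  # ('1.4', '2026-02-02')
```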
The Copy-Paste Checklist
Use this table in Notion, Linear, or a shared document. Check each item monthly.
| Step | Check | Owner | Status |
|---|---|---|---|
| Shadow AI scan | Posted Slack message, reviewed SSO logs, checked expense tool | AI Lead | |
| Incident review | Reviewed incident log for AI-related events | AI Lead | |
| Vendor changelogs | Checked terms/subprocessors for: [list your tools] | AI Lead | |
| Tools list updated | Register reflects current tool set, DPA status current | AI Lead | |
| Policy reviewed | AUP version current, effective date updated if changed | AI Lead | |
| Next review date | Next month's review on the calendar | AI Lead | |
Record the outcome of each review in a running log — date, who ran it, any findings, and any changes made. This log is your evidence of ongoing governance.
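The running log can live anywhere your team already works; a JSON Lines file is one low-friction option. A sketch with illustrative field names:

```python
import json

def review_entry(ran_on: str, ran_by: str, findings: list[str], changes: list[str]) -> str:
    """One monthly review as a single JSON line, ready to append to a log file."""
    return json.dumps({
        "ran_on": ran_on,
        "ran_by": ran_by,
        "findings": findings,
        "changes": changes,
    })

line = review_entry(
    "2026-02-02",
    "J. Doe",
    ["Perplexity found in SSO logs"],
    ["Added Perplexity to pending-review list"],
)
print(line)
```

Twelve of these lines at the end of the year are exactly the audit trail described above.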
When to Do an Out-of-Cycle Review
Some events warrant an immediate review outside the monthly schedule:
- AI vendor breach: If a vendor you use announces a security incident affecting customer data
- Major regulation change: A new law or enforcement action that affects your industry's use of AI
- New high-risk AI use case: Your team starts using AI for hiring, healthcare, financial decisions, or customer-facing decisions at scale
- Suspected misuse: An employee reports that AI was used in a way that may violate your policy
The monthly process maintains your baseline. Out-of-cycle reviews handle exceptions.
Want to start with a proper baseline before running your first review? Use the AI Risk Assessment to rate your current AI use cases, then use the Policy Generator to create or update your AI acceptable use policy with the right clauses for your context.
