AI governance fails when it lives in a PDF nobody opens. The fix is an operating rhythm: predictable rituals with crisp inputs, outputs, and owners. This is the model we recommend when your "committee" is three busy people and a shared Notion.
At a glance: four nested loops. Weekly async (10 min), monthly check-in (15 min), quarterly reset (60 min), annual review (half-day). Each loop has a fixed agenda, named outputs, and a single owner. The whole program runs in under six hours of dedicated time per quarter per person: thirteen weekly check-ins, three monthlies, and one quarterly come to roughly four hours, and the annual half-day amortises to about one more hour per quarter. No GRC software required.
Roles (keep it tight)
| Role | Responsibility |
|---|---|
| Policy owner | Runs cadence, signs off exceptions, maintains policy versions |
| Tool sponsor | Business outcome and budget accountability for each approved AI workflow |
| Security delegate | Reviews data classes, access controls, and logging completeness |
| Legal point (fractional OK) | High-risk decisions, regulatory interpretation, DPA reviews |
If you cannot name all four titles, start with policy owner + tool sponsors and pull others in as needed. In a 10-person team, the policy owner and security delegate may be the same person.
The four loops
Weekly ritual — "Invisible work becomes visible"
Duration: 10 minutes async + 5 minutes live if needed
Inputs: New AI tool proposals from any team member
Agenda:
- Tool sponsors post new experiments in a dedicated channel using a fixed template: data class, customer impact, rollback plan (a schema sketch follows below)
- Policy owner merges duplicates in the inventory
- Security delegate flags anything mentioning regulated data or new API access
Outputs: Approved / rejected / hold status on each proposal; updated inventory if any tool moves status
Use this to catch shadow AI before it becomes entrenched. The channel template creates a paper trail without adding meeting overhead.
Fast-path approvals: Most low-risk, no-PII tool trials can be approved asynchronously in under 24 hours. Reserve the 5-minute live check for tools that require the full group.
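If you want the channel template to be machine-checkable (for example, to auto-build the inventory delta for the monthly review), the fields translate directly into a small schema. A minimal sketch in Python, with illustrative field names that merge the weekly fields above with the pinned-template fields from the tooling section later in this piece:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    REGULATED = "regulated"  # health, finance, children's data

@dataclass
class ToolProposal:
    tool_name: str
    sponsor: str            # a named tool sponsor, never a team
    use_case: str
    data_class: DataClass
    customer_impact: str    # "none" is an acceptable, explicit answer
    rollback_plan: str      # how you stop using the tool within a day
    dpa_status: str = "unknown"
    submitted: date = field(default_factory=date.today)
```

Even if nobody ever runs this code, writing the template as a schema forces the team to agree on which fields are mandatory.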
Monthly ritual — "Reality check"
Duration: 15 minutes on the calendar — no exceptions, no rescheduling
Inputs: Weekly channel log, incident queue, vendor change notifications
Agenda:
- Inventory delta — new tools added, retired tools removed, ownership changes
- Incident and near-miss log — even "we almost pasted the wrong file" counts
- Vendor drift — any silently enabled features, new sub-processors, or pricing model changes?
- Policy tweaks — if nothing changed, note "no change" for audit traceability
Outputs: Updated inventory with date, incident log entry, any policy change noted
This is the same information your board will ask for later — capture it cheaply now. A 15-minute monthly meeting with consistent notes is worth more to an auditor than a 100-page policy document reviewed once a year.
When to expand the monthly slot: If two consecutive monthly reviews run over 15 minutes, your governance scope has grown beyond what the weekly channel can absorb. Add a second monthly slot rather than letting the primary meeting drift.
Quarterly ritual — "Reset the compass"
Duration: 60 minutes, blocked well in advance
Inputs: Monthly notes from the last three months, risk register, regulatory change log
Agenda:
- Walk the AI governance checklist top to bottom
- Refresh risk scores using the AI risk assessment guide
- Decide which experiments graduate to approved workflows vs parking lot
- Update training snippets and FAQ for new hires
- Review any open items from the previous quarterly — close or re-deadline
Outputs: Updated risk register, policy version increment (or explicit "no change"), completed governance checklist, list of graduates and parked experiments
Document decisions in a single changelog entry: date, attendees, what moved status.
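If the changelog lives in plain text or a spreadsheet, a tiny helper keeps entries uniform. A sketch assuming a one-line-per-quarter format; the function name and fields are illustrative:

```python
from datetime import date

def changelog_entry(attendees, moves, when=None):
    """One line per quarterly review: date, attendees, what moved status."""
    when = when or date.today()
    moved = "; ".join(f"{tool}: {old} -> {new}" for tool, old, new in moves)
    return f"{when.isoformat()} | {', '.join(attendees)} | {moved or 'no change'}"

# e.g. "2026-01-15 | maria, dev | DraftAssist: experiment -> approved"
print(changelog_entry(["maria", "dev"], [("DraftAssist", "experiment", "approved")]))
```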
Quarterly checklist items that are easy to miss:
- Confirm all tool vendor DPAs are still current (vendors sometimes update terms unilaterally); a staleness check is sketched after this list
- Check that the incident response chain of command is still accurate (people change roles)
- Review whether any new tools have released features that change their risk profile
- Confirm training materials reflect any policy changes made in the last quarter
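The DPA currency check is easy to automate once vendor records carry a last-checked date. A minimal sketch, assuming a list of records exported from your vendor spreadsheet; the field names and the 120-day threshold are assumptions to adjust:

```python
from datetime import date, timedelta

# Hypothetical records; "terms_checked" is when someone last confirmed the
# DPA and terms had not been changed unilaterally by the vendor.
vendors = [
    {"name": "TranscribeBot Inc", "terms_checked": date(2025, 6, 1)},
    {"name": "DraftAssist Ltd", "terms_checked": date(2026, 1, 5)},
]

STALE_AFTER = timedelta(days=120)  # roughly one quarter plus slack

def stale_vendors(records, today=None):
    """Return vendors whose DPA/terms check predates the threshold."""
    today = today or date.today()
    return [r["name"] for r in records if today - r["terms_checked"] > STALE_AFTER]

print("Re-check DPAs for:", stale_vendors(vendors))
```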
Annual ritual — "Full compass calibration"
Duration: Half-day (can be split into two 2-hour sessions)
Inputs: All quarterly notes, the previous year's annual review, regulatory change summaries, any available usage or audit-log data
Agenda:
- Full AI usage audit — interview leads, cross-check technical signals
- Re-map your tool inventory against regulatory changes (EU AI Act phased timelines, new GDPR enforcement guidance, sector-specific updates)
- Update all policy documents, templates, and training materials
- Set next year's governance priorities — what capabilities, controls, or tooling will you add?
- If relevant: prepare an annual governance summary for the board or investors
Outputs: Full updated inventory, refreshed policy documents with new version numbers, regulatory compliance gap list, next-year priorities
The annual review is also the right time to evaluate whether your monitoring tooling still fits your scale — see AI monitoring tools for small teams.
Governance decision types
Not every decision needs the same process. Define three tiers to avoid both over- and under-governance:
Tier 1 — Fast-path (24-hour async)
Criteria: no customer PII, no regulated data, uses an already-approved model API, reversible within a day.
Process: post in channel, policy owner approves async, added to inventory.
Tier 2 — Standard (next monthly meeting)
Criteria: touches confidential internal data, involves a new vendor with unknown DPA status, or involves more than 3 users.
Process: tool sponsor submits at least a week before the monthly; reviewed in the meeting.
Tier 3 — Escalated (requires legal or specialist input)
Criteria: customer PII in production, regulated data (health, finance, children), automated decisions with legal effect, or a new country with unfamiliar data law.
Process: policy owner briefs legal point before the meeting; decision documented with legal input noted.
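The criteria are mechanical enough to encode, which removes debate about which tier a proposal belongs to. A sketch with hypothetical proposal fields; none of these names come from a real system:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    customer_pii: bool
    regulated_data: bool          # health, finance, children
    automated_legal_effect: bool
    new_jurisdiction: bool        # new country with unfamiliar data law
    confidential_internal: bool
    new_vendor_unknown_dpa: bool
    user_count: int
    approved_model_api: bool
    reversible_within_day: bool

def decision_tier(p: Proposal) -> int:
    """Route a proposal to tier 3, 2, or 1 using the criteria above."""
    if (p.customer_pii or p.regulated_data
            or p.automated_legal_effect or p.new_jurisdiction):
        return 3  # escalated: brief the legal point before the meeting
    if p.confidential_internal or p.new_vendor_unknown_dpa or p.user_count > 3:
        return 2  # standard: next monthly meeting
    if p.approved_model_api and p.reversible_within_day:
        return 1  # fast-path: 24-hour async approval
    return 2      # anything ambiguous defaults to standard review
```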
Artefacts you should be able to export in ten minutes
- Latest policy PDF or doc with version number and review date
- Inventory spreadsheet with owners and last-updated date
- Vendor checklist archive — one file per vendor, stored with the subscription invoice
- Incident log — even if most entries are benign, the log shows that you are tracking
When fundraising, closing an enterprise sale, or responding to a regulator, those four files answer 90% of the questions. The team that can produce them in under 10 minutes signals operational maturity; the team that cannot produces friction at exactly the wrong moment.
Onboarding new people into the rhythm
Governance programs fail when they live only in the head of the founding owner. Two practices prevent this:
Document the "why" behind decisions. When you approve a tool, note why. When you decline one, note why. New joiners who understand the reasoning can apply it to new situations — those who only see the outputs cannot.
Include governance in onboarding. Every new hire should: acknowledge the AI policy, receive a one-pager on how to request new tools, and understand who the policy owner is. This should happen in week one, not month three.
Connecting the loops
- Weekly feeds the usage audit workflow with fresh signals between formal audits
- Monthly supplies the metrics and incidents that inform monitoring tooling decisions (comparison framework)
- Quarterly aligns policy to macro changes such as EU AI Act updates and connects back to the governance framework primer
- Annual produces the artefacts that satisfy investor, customer, and regulatory due diligence
Common rhythm failures
The quarterly meeting gets cancelled twice in a row. This is the most common way governance programs die. If a quarterly keeps getting deprioritised, it is a signal that governance is seen as overhead, not value. Solution: attach governance review outputs to a business outcome (e.g. enterprise deal progress, regulatory clearance) so skipping them has a visible cost.
The incident log is empty. Empty logs do not mean nothing happened — they mean nothing was recorded. Every near-miss, policy exception, and "that was close" moment should have a one-line entry. An empty log is a red flag to auditors; a log full of minor incidents that were handled correctly is evidence of a functioning program.
Ownership dilutes over time. "Everyone is responsible for AI governance" means no one is. When the original policy owner changes roles or leaves, governance often drifts for 2–3 quarters before anyone notices. Succession planning for the governance role should be explicit.
The rhythm runs but nothing changes. If every quarterly review produces no updates to the risk register, policy, or inventory, the reviews are not real — they are box-checking. A good review should surface at least one change, even if small.
Governance metrics worth tracking
Data makes governance visible to leadership and enables continuous improvement. Track these at each quarterly review:
| Metric | Target | What deterioration looks like |
|---|---|---|
| Inventory completeness | 90%+ of tools have a named owner | Owner column blank on new additions |
| Vendor DPA coverage | 100% for tools touching PII | New tools onboarded without DPA |
| Policy acknowledgment rate | 80%+ of team | Below 70% means rollout is missing people |
| Incident log entries | At least 1 per quarter | Zero entries suggests under-reporting |
| Time to close governance gaps | Median under 30 days | Gaps over 60 days accumulate and cause debt |
| Fast-path approval time | Median under 24 hours | Long waits drive shadow adoption |
Present these as a one-slide summary at the end of each quarterly review. Trends matter more than absolute numbers — a slight improvement each quarter signals a healthy program.
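If the inventory lives in a spreadsheet, the first two metrics take only a few lines to compute. A sketch assuming rows shaped like the columns suggested in the tooling section below; the row dicts are illustrative:

```python
from datetime import date

inventory = [
    {"tool": "TranscribeBot", "owner": "maria", "data_class": "internal",
     "dpa_current": True, "last_review": date(2026, 1, 10)},
    {"tool": "DraftAssist", "owner": None, "data_class": "customer_pii",
     "dpa_current": False, "last_review": date(2025, 11, 2)},
]

def inventory_completeness(rows):
    """Share of tools with a named owner (target: 90%+)."""
    return sum(1 for r in rows if r["owner"]) / len(rows)

def dpa_coverage(rows):
    """Share of PII-touching tools with a current DPA (target: 100%)."""
    pii = [r for r in rows if r["data_class"] == "customer_pii"]
    return sum(1 for r in pii if r["dpa_current"]) / len(pii) if pii else 1.0

print(f"Inventory completeness: {inventory_completeness(inventory):.0%}")
print(f"DPA coverage: {dpa_coverage(inventory):.0%}")
```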
Tooling options for running the rhythm
You do not need to buy GRC software to run this program. These tools work at the small-team stage:
Inventory and vendor records: Google Sheets or Notion table. Key columns: tool name, owner, data class, DPA status, last review date.
Policy documents: Google Docs or Confluence page with version history enabled. Always name the doc with the version number in the title (e.g. "AI Policy v2.1 — Jan 2026").
Incident log: A simple spreadsheet or Notion database. Columns: date, what happened, severity, who handled it, resolution. Link from the main inventory.
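A CSV file is enough for the incident log at this stage. A minimal append-only sketch using the columns above; the file name and example entry are illustrative:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("incident_log.csv")
COLUMNS = ["date", "what_happened", "severity", "handled_by", "resolution"]

def log_incident(what, severity, handled_by, resolution):
    """Append one row, writing the header on first use."""
    first_write = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if first_write:
            writer.writerow(COLUMNS)
        writer.writerow([date.today().isoformat(), what, severity,
                         handled_by, resolution])

log_incident("Near-miss: draft with client names almost pasted into public chatbot",
             "low", "maria", "Recalled before send; reminder posted in channel")
```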
Weekly channel: Slack (or Teams) with a pinned template. New tool proposals follow the template: tool name, use case, data class, who is requesting, DPA status.
Calendar rituals: Google Calendar or Outlook with recurring events. Add the governance checklist link to the monthly and quarterly event descriptions so the agenda is always one click away.
When to upgrade: The trigger for dedicated GRC software is usually an enterprise sales requirement or a regulatory audit — not team size. Until then, the friction of adopting new tooling is almost always higher than the value it provides.
Adapting the rhythm for remote and distributed teams
The rhythm described here works in-person or remote with minor adjustments:
Weekly async channel: Requires discipline in a remote setting. Set a weekly reminder bot to prompt tool sponsors to post updates. Do not let async become "eventually."
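The reminder bot does not need to be sophisticated. A sketch using the official slack_sdk client, run weekly from cron or a scheduled CI job; the channel name, prompt text, and schedule are assumptions:

```python
# Schedule with cron, e.g.:  0 9 * * MON  python weekly_prompt.py
import os

from slack_sdk import WebClient  # pip install slack-sdk

PROMPT = (
    "Weekly AI governance check-in: any new tool experiments? "
    "Reply in-thread using the pinned template "
    "(tool name, use case, data class, requester, DPA status)."
)

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
client.chat_postMessage(channel="#ai-governance", text=PROMPT)
```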
Monthly 15-minute call: Resist the temptation to make this text-only. Even a brief video call surfaces questions and keeps the owner relationship active.
Quarterly session: This benefits most from being synchronous — either video or in-person. A 60-minute async doc review misses the discussion that surfaces risks a written agenda cannot anticipate.
Cross-timezone teams: If the team spans multiple time zones, consider splitting the quarterly into a written pre-read (owner distributes 48 hours before) followed by a shorter 30-minute sync focused on decisions only.
Maintaining the rhythm during rapid growth
Fast-growing teams are the most likely to let governance drift — and the most likely to regret it. When headcount doubles or the product scope expands significantly, three governance risks compound simultaneously:
The inventory goes stale faster. New hires arrive with existing tool habits. New product features require new AI integrations. The gap between what the inventory says and what the team is actually using widens faster than the monthly review cycle can close it.
Policy awareness drops. When 30% of the team is new, 30% of the team has not been onboarded into the governance program. Without explicit onboarding integration, the policy acknowledgment rate drops every time there is a hiring wave.
Ownership becomes unclear. In a 15-person team, the policy owner is obvious. In a 50-person team, the question of whether AI governance lives with Engineering, Legal, Security, or Operations becomes contested.
Adjustments for high-growth phases:
- Increase the weekly channel check-in frequency temporarily — daily posts during the first 30 days of a major hiring wave
- Add governance to the onboarding checklist immediately, not at the next quarterly review
- Reassign governance ownership explicitly when a relevant role is created: a new Head of Security, for example, should formally receive or decline governance ownership as part of their role definition
- If you add a new product capability that involves AI (a new model integration, a new automated decision feature), trigger an unplanned quarterly-format review for that capability within 30 days of launch
The rhythm as a communication asset
Governance documentation serves an audience beyond the internal team. External stakeholders who benefit from a visible, maintained rhythm:
Enterprise customers: Security questionnaires frequently ask "how often do you review your AI policy?" A dated quarterly review log with a specific agenda is a concrete answer — not just "we have a policy."
Investors: Due diligence processes for Series B+ companies increasingly include AI governance. A two-year changelog of quarterly reviews demonstrates institutional continuity, not just point-in-time compliance.
Regulators: If you are subject to EU AI Act deployer obligations, a documented review cadence and artefact history demonstrate ongoing oversight, a specific requirement for high-risk AI applications.
Insurers: Cyber insurance applications are beginning to ask about AI tool governance. A documented program, even a lightweight one, supports underwriting and may affect premium rates.
None of this requires you to produce a 100-page compliance report. The four exportable artefacts — policy with version, inventory with owners, vendor archive, incident log — are sufficient for most of these audiences when they reflect an active, maintained program.
Key Takeaways
- Run four nested loops: weekly async, monthly check-in, quarterly reset, annual review — total under 6 hours per quarter per person
- Define three decision tiers (fast-path, standard, escalated) to prevent governance from becoming a bottleneck
- The four artefacts that matter are: policy with version, inventory with owners, vendor checklists, and incident log — exportable in 10 minutes
- Include governance in new-hire onboarding and document the "why" behind decisions, not just the decisions
- Common failures: cancelled quarterlies, empty incident logs, diluted ownership, box-checking reviews — address each explicitly
References
- National Institute of Standards and Technology — AI Risk Management Framework (AI RMF 1.0)
- European Parliament and Council — EU AI Act
- OECD — OECD AI Principles
- ISO — ISO/IEC 42001: AI Management Systems
- ENISA — AI Cybersecurity Standardisation
