AI Policy Desk · Guides

How to Do an AI Risk Assessment (Small Team Guide)

A step-by-step guide to running your first AI risk assessment — without a risk team. Covers use-case mapping, likelihood scoring, and key controls.

A risk assessment does not require a risk team. This guide walks you through a lightweight version that a single person can complete in a half-day — and that gives you a prioritised list of controls to act on.

What you are assessing

An AI risk assessment covers three layers:

  1. Data risk — what data could be exposed via AI tool usage
  2. Output risk — what harm could result from acting on bad AI output
  3. Operational risk — what happens if the tool disappears, changes pricing, or gets compromised

Step 1 — Map your AI use cases

List every AI tool your team uses (including shadow AI — tools people use without formal approval). For each tool, note:

| Tool | Use case | Data sensitivity | User count |
|---|---|---|---|
| ChatGPT (team tier) | Drafting, summarising | Low–Medium | 8 |
| GitHub Copilot | Code completion | Medium (sees codebase) | 3 |
| [Internal LLM] | Customer query routing | High (sees PII) | 1 (automated) |

Tip: Run a quick Slack poll or a round of 1:1s to find shadow tools. You will almost always discover two or three not on the official list.

Step 2 — Score each use case

Rate each use case on two dimensions:

Likelihood of harm (1–3):

  1. Unlikely — would require unusual circumstances
  2. Possible — could plausibly happen in normal use
  3. Likely — expected to happen eventually without a control

Impact if harm occurs (1–3):

  1. Minor — internal inconvenience, easily corrected
  2. Moderate — customer-visible impact or significant rework
  3. Severe — regulatory exposure, data breach, or material harm

Risk score = Likelihood × Impact

| Score | Priority |
|---|---|
| 6–9 | High — act now |
| 3–5 | Medium — address this quarter |
| 1–2 | Low — monitor |
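The scoring and priority bands above can be sketched in a few lines of code. The tool names and scores below are illustrative assumptions, not recommendations; substitute your own assessments.

```python
# Minimal risk-scoring sketch. Tools, likelihood, and impact values are
# illustrative examples only -- adjust them to your own assessment.

def priority(likelihood: int, impact: int) -> str:
    """Map a 1-3 likelihood and 1-3 impact to the priority bands above."""
    score = likelihood * impact
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

use_cases = [
    {"tool": "ChatGPT (team tier)", "likelihood": 1, "impact": 2},
    {"tool": "GitHub Copilot",      "likelihood": 2, "impact": 2},
    {"tool": "Internal LLM",        "likelihood": 2, "impact": 3},
]

# Sort highest score first to produce a prioritised worklist
for uc in sorted(use_cases, key=lambda u: u["likelihood"] * u["impact"], reverse=True):
    score = uc["likelihood"] * uc["impact"]
    print(f'{uc["tool"]}: score {score} -> {priority(uc["likelihood"], uc["impact"])}')
```

Even if you never automate this, the function makes the banding unambiguous: a 2×3 use case is High, a 1×2 use case is Low.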

Step 3 — Identify controls for high-priority risks

For each high-priority risk, assign one concrete control. Examples:

| Risk | Control |
|---|---|
| PII in customer query AI | Implement PII scrubbing before input; human review of flagged outputs |
| Credentials in Copilot context | Add .env and secrets files to .gitignore; add pre-commit hook |
| Shadow AI with customer data | Policy + quarterly shadow AI scan; fast approval channel for new tools |
| Over-reliance on legal output | Policy: all AI-drafted legal content requires lawyer sign-off |

One control per risk is enough to start. Perfect coverage later beats no coverage now.

Step 4 — Document and assign owners

For each control, record:

  1. A named owner
  2. A target date
  3. Current status (not started / in progress / done)
Use a simple spreadsheet. You do not need risk management software.
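A plain CSV file is enough to serve as the register. The sketch below uses only the Python standard library; the column names, owner names, and dates are illustrative assumptions, not a prescribed schema.

```python
# Sketch of a risk register written as a plain CSV file -- a spreadsheet
# is all you need. Columns, owners, and dates here are hypothetical examples.
import csv

rows = [
    {"risk": "PII in customer query AI", "score": 6, "owner": "Dana",
     "control": "PII scrubbing before input", "review_date": "2025-09-01"},
    {"risk": "Credentials in Copilot context", "score": 4, "owner": "Sam",
     "control": "Add secrets files to .gitignore", "review_date": "2025-09-01"},
]

with open("ai_risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["risk", "score", "owner", "control", "review_date"]
    )
    writer.writeheader()
    writer.writerows(rows)
```

The point of the file format is not the tooling: anyone on the team can open a CSV, and every row forces you to name an owner.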

Step 5 — Schedule your next review

Add a calendar reminder for:

  1. A quarterly review of the risk register
  2. A re-assessment whenever a new AI tool is adopted or an existing tool changes materially

Common mistakes to avoid

Skipping shadow AI. The tools you don't know about cause the incidents. Spend 30 minutes finding them before you write a single control.

Treating low-likelihood risks as zero. A 1-in-20 chance of a GDPR breach is not zero. Score it, assign a control, and move on.

Making controls too vague. "Train the team" is not a control. "All employees confirm AI policy annually with a signed acknowledgment" is.

Not assigning owners. A control without a named owner does not get implemented.

Output of a good risk assessment

At the end of this process you should have:

  1. An inventory of AI use cases, including shadow AI
  2. A risk score (likelihood × impact) for each use case
  3. One concrete control per high-priority risk, each with a named owner
  4. A scheduled review date

That is your AI governance foundation. Everything else — policies, checklists, vendor reviews — plugs into this register.