How to Do an AI Risk Assessment (Small Team Guide)
A risk assessment does not require a risk team. This guide walks you through a lightweight version that a single person can complete in a half-day — and that gives you a prioritised list of controls to act on.
What you are assessing
An AI risk assessment covers three layers:
- Data risk — what data could be exposed via AI tool usage
- Output risk — what harm could result from acting on bad AI output
- Operational risk — what happens if the tool disappears, changes pricing, or gets compromised
Step 1 — Map your AI use cases
List every AI tool your team uses (including shadow AI — tools people use without formal approval). For each tool, note:
| Tool | Use case | Data sensitivity | User count |
|---|---|---|---|
| ChatGPT (team tier) | Drafting, summarising | Low–Medium | 8 |
| GitHub Copilot | Code completion | Medium (sees codebase) | 3 |
| [Internal LLM] | Customer query routing | High (sees PII) | 1 (automated) |
Tip: Run a quick Slack poll or 1:1s to find shadow tools. You will almost always discover two or three not on the official list.
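If you keep the inventory as structured data rather than a one-off table, re-scoring it in Step 2 becomes trivial. A minimal sketch in Python, mirroring the example table above (the field names are illustrative, not a required schema):

```python
# AI use-case inventory mirroring the example table; field names are illustrative.
inventory = [
    {"tool": "ChatGPT (team tier)", "use_case": "Drafting, summarising",
     "data_sensitivity": "Low-Medium", "user_count": 8},
    {"tool": "GitHub Copilot", "use_case": "Code completion",
     "data_sensitivity": "Medium (sees codebase)", "user_count": 3},
    {"tool": "Internal LLM", "use_case": "Customer query routing",
     "data_sensitivity": "High (sees PII)", "user_count": 1},
]

# Quick check: which tools touch confidential or regulated data?
sensitive = [row["tool"] for row in inventory
             if row["data_sensitivity"].startswith(("Medium", "High"))]
```

Even a list this small pays off when you add the shadow tools you discover: they become new rows, not a new document.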
Step 2 — Score each use case
Rate each use case on two dimensions:
Likelihood of harm (1–3), using data sensitivity as a proxy:
- 1 = Low: tool handles only public/internal non-sensitive data
- 2 = Medium: tool sees confidential internal data
- 3 = High: tool sees customer PII, regulated data, or credentials
Impact if harm occurs (1–3):
- 1 = Low: embarrassment or minor rework
- 2 = Medium: customer complaint, contractual breach, or significant rework
- 3 = High: regulatory breach, data breach notification, legal liability
Risk score = Likelihood × Impact
| Score | Priority |
|---|---|
| 6–9 | High — act now |
| 3–5 | Medium — address this quarter |
| 1–2 | Low — monitor |
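The scoring rule is simple enough to encode directly, which keeps quarterly re-scoring consistent. A sketch (the function name is mine, not from any standard):

```python
def score_risk(likelihood: int, impact: int) -> tuple[int, str]:
    """Risk score = Likelihood x Impact, bucketed per the priority table."""
    if not (1 <= likelihood <= 3 and 1 <= impact <= 3):
        raise ValueError("likelihood and impact must each be 1-3")
    score = likelihood * impact
    if score >= 6:
        priority = "High: act now"
    elif score >= 3:
        priority = "Medium: address this quarter"
    else:
        priority = "Low: monitor"
    return score, priority
```

For example, a tool that sees customer PII (likelihood 3) with regulatory-breach impact (impact 3) scores 9 and lands in the "act now" bucket.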
Step 3 — Identify controls for high-priority risks
For each high-priority risk, assign one concrete control. Examples:
| Risk | Control |
|---|---|
| PII in customer query AI | Implement PII scrubbing before input; human review of flagged outputs |
| Credentials in Copilot context | Add .env and secrets files to .gitignore; add pre-commit hook |
| Shadow AI with customer data | Policy + quarterly shadow AI scan; fast approval channel for new tools |
| Over-reliance on legal output | Policy: all AI-drafted legal content requires lawyer sign-off |
One control per risk is enough to start. Imperfect coverage now beats perfect coverage later.
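The Copilot control above (keeping secrets files out of the repo) can be partly automated. Below is a minimal sketch of a pre-commit check in Python, assuming it is invoked from a Git pre-commit hook; the filename patterns are illustrative and should match your own secrets conventions:

```python
import subprocess

# Filename patterns that should never be committed (illustrative list).
BLOCKED = (".env", "secrets.", "credentials.", "id_rsa")

def is_blocked(path: str) -> bool:
    """True if the staged path's filename matches a known secrets pattern."""
    name = path.rsplit("/", 1)[-1]
    return any(name == p or name.startswith(p) for p in BLOCKED)

def main() -> int:
    """Return 1 (blocking the commit) if any staged file looks like a secret."""
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    bad = [p for p in staged if is_blocked(p)]
    if bad:
        print("Refusing to commit secrets files:", ", ".join(bad))
        return 1
    return 0

# In an actual hook script you would end with: sys.exit(main())
```

This complements .gitignore rather than replacing it: .gitignore prevents accidental staging, while the hook catches files staged deliberately or force-added.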
Step 4 — Document and assign owners
For each control:
- Name the owner (a person, not a team)
- Set a deadline (30, 60, or 90 days)
- Define how you will verify it is in place
Use a simple spreadsheet. You do not need risk management software.
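The spreadsheet can literally be a CSV file generated from your control list. A sketch, with illustrative owners and column names (none of these are prescribed by the guide):

```python
import csv
import datetime as dt

# Example control backlog rows; owners and verification steps are illustrative.
controls = [
    {"control": "PII scrubbing before input", "owner": "A. Nwosu", "days": 30,
     "verify": "Spot-check 20 logged queries for raw PII"},
    {"control": "Pre-commit secrets hook", "owner": "J. Lee", "days": 60,
     "verify": "Test commit of a .env file is rejected"},
]

today = dt.date.today()
with open("control_backlog.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["control", "owner", "deadline", "verify"])
    writer.writeheader()
    for c in controls:
        writer.writerow({
            "control": c["control"],
            "owner": c["owner"],
            # Deadline computed from the 30/60/90-day buckets above.
            "deadline": (today + dt.timedelta(days=c["days"])).isoformat(),
            "verify": c["verify"],
        })
```

The point is not the tooling: any format works as long as every row has a named person, a date, and a verification step.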
Step 5 — Schedule your next review
Add a calendar reminder for:
- Monthly: check for new tools or use cases added since last review
- Quarterly: re-score existing risks with updated information
- Annually: full assessment refresh
Common mistakes to avoid
Skipping shadow AI. The tools you don't know about cause the incidents. Spend 30 minutes finding them before you write a single control.
Treating low-likelihood risks as zero. A 1-in-20 chance of a GDPR breach is not zero. Score it, assign a control, and move on.
Making controls too vague. "Train the team" is not a control. "All employees confirm AI policy annually with a signed acknowledgment" is.
Not assigning owners. A control without a named owner does not get implemented.
Output of a good risk assessment
At the end of this process you should have:
- A use-case inventory (4–20 rows for most small teams)
- A risk register with scores and priorities
- A control backlog with owners and deadlines
- A review schedule
That is your AI governance foundation. Everything else — policies, checklists, vendor reviews — plugs into this register.