AI Incident Response Playbook for Small Teams
AI incidents differ from traditional software bugs: they often involve data exposure, incorrect information shipped as fact, or misuse rather than simple downtime. This playbook gives you a repeatable response, sized for a team without a dedicated security function.
What counts as an AI incident?
- Confidential data pasted into an unapproved AI tool
- PII, credentials, or regulated data sent to a consumer-tier service
- AI-generated output containing factual errors or harmful content shipped to customers
- An employee or contractor using a banned tool on company work
- Model output used to make a high-stakes decision without human review
Near-misses count too. Log them. They show where your policy has gaps before something worse happens.
Phase 1 — Contain (first 30 minutes)
- Stop the bleeding. If data is still being exposed, cut off access first: revoke the API key, kill the session, or remove the file.
- Document what you know. Screenshot, log, or write down: what happened, who was involved, what data may have been exposed, and when you found out.
- Don't delete evidence. Preserve logs and conversation exports — you may need them for a regulatory notification or legal review.
- Notify the policy owner. One person needs to own the response from this point.
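The "document what you know" step above can be captured as a simple structured record so nothing gets forgotten in the rush. A minimal Python sketch; the class and field names are suggestions mirroring the bullet list, not part of the playbook:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentNote:
    """First-30-minutes facts: what, who, what data, and when it was found."""
    what_happened: str
    people_involved: list[str]
    data_possibly_exposed: str
    discovered_at: datetime
    # Recorded automatically, so the note itself timestamps your response.
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

note = IncidentNote(
    what_happened="Customer export pasted into a consumer chatbot",
    people_involved=["J. Smith"],
    data_possibly_exposed="Customer name + email",
    discovered_at=datetime(2026, 1, 15, 9, 30, tzinfo=timezone.utc),
)
```

Even a plain text file works; the point is that the four facts are captured before memory fades.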
Phase 2 — Assess (first 2 hours)
Answer these four questions:
| Question | Why it matters |
|---|---|
| What data was exposed? | Determines regulatory obligation |
| How many people are affected? | Scopes customer notification |
| Is the exposure ongoing or closed? | Determines urgency |
| What was the root cause? | Feeds into the control change |
Regulatory check: If personal data was involved, check your GDPR/HIPAA/CCPA obligations now. GDPR gives you 72 hours to notify your supervisory authority if the breach is reportable.
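To avoid missing the window, it helps to compute the notification deadline the moment the incident is logged. An illustrative Python snippet; the 72-hour window is GDPR's, and HIPAA and CCPA timelines differ:

```python
from datetime import datetime, timedelta, timezone

# GDPR Art. 33: notify the supervisory authority within 72 hours of
# becoming aware of a reportable breach.
GDPR_NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(discovered_at: datetime) -> datetime:
    """Latest time a reportable breach must be notified under GDPR."""
    return discovered_at + GDPR_NOTIFICATION_WINDOW

discovered = datetime(2026, 1, 15, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(discovered))  # 2026-01-18 09:30:00+00:00
```

Note the clock starts when you become aware of the breach, not when the breach occurred.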
Phase 3 — Communicate (within 24 hours)
- Internal: Tell relevant stakeholders (legal, leadership, affected team leads) with facts only — avoid speculation.
- Customer notification: If customer data was exposed, draft a plain-language notification. Run it past legal before sending.
- Regulatory: File the notification if required. Document the date and reference number.
Keep communication factual and concise. Do not admit liability or speculate about impact before the assessment is complete.
Phase 4 — Recover and learn (within 1 week)
- Root cause identified and documented
- One concrete control added or changed (not just "train the team")
- Policy updated if a gap was exposed
- Incident logged in your AI incident register (even if it was minor)
- Post-incident review held (15 minutes is enough for small teams)
Incident register template (copy into a spreadsheet)
| Date | Reporter | Tool involved | Data type | Severity (1–3) | Status | Control change |
|---|---|---|---|---|---|---|
| 2026-01-15 | J. Smith | ChatGPT (consumer) | Customer name + email | 2 | Closed | Blocked consumer tier in policy |
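If you prefer a CSV file to a spreadsheet, the same register can be kept with a few lines of Python. A minimal sketch; the file name is arbitrary and the columns follow the template above:

```python
import csv
from pathlib import Path

# Column order matches the register template above.
FIELDS = ["Date", "Reporter", "Tool involved", "Data type",
          "Severity (1-3)", "Status", "Control change"]

def log_incident(register: Path, row: dict) -> None:
    """Append one incident, creating the file with a header row if needed."""
    new_file = not register.exists()
    with register.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_incident(Path("ai_incident_register.csv"), {
    "Date": "2026-01-15",
    "Reporter": "J. Smith",
    "Tool involved": "ChatGPT (consumer)",
    "Data type": "Customer name + email",
    "Severity (1-3)": "2",
    "Status": "Closed",
    "Control change": "Blocked consumer tier in policy",
})
```

Appending rather than rewriting keeps the register a simple, tamper-evident append-only log.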
Severity guide
| Level | Description | Response time |
|---|---|---|
| 1 — Critical | PII, credentials, or regulated data confirmed exposed externally | Immediate — escalate now |
| 2 — Significant | Internal data exposed to unapproved service; no confirmed external exposure | Within 2 hours |
| 3 — Minor | Policy violation, no sensitive data, near-miss | Within 24 hours |
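The severity table can also be encoded as a small triage helper, so whoever is on point doesn't have to reason about levels on a stressful day. A minimal Python sketch of the decision rule above; the function and parameter names are suggestions, not part of the playbook:

```python
def triage(sensitive_data_involved: bool,
           external_exposure_confirmed: bool,
           exposed_to_unapproved_service: bool) -> tuple[int, str]:
    """Map assessment facts to (severity level, target response time)."""
    # Level 1: PII, credentials, or regulated data confirmed exposed externally.
    if sensitive_data_involved and external_exposure_confirmed:
        return 1, "Immediate: escalate now"
    # Level 2: data reached an unapproved service, no confirmed external exposure.
    if exposed_to_unapproved_service and not external_exposure_confirmed:
        return 2, "Within 2 hours"
    # Level 3: policy violation or near-miss, no sensitive data exposed.
    return 3, "Within 24 hours"
```

When in doubt between two levels, pick the higher one; downgrading after assessment is cheap, upgrading late is not.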
Keep the playbook short. The goal is that anyone on the team can follow it on a stressful day without reading a 40-page manual.