SOC 2 Type II audits increasingly include questions about AI tool usage. If your team uses ChatGPT, Claude, Copilot, or any other AI tool, and you have not explicitly scoped that usage into your control environment, you have a gap.
The good news: AI tools do not require a new control framework. They slot into your existing SOC 2 program — primarily through vendor risk management, data transmission controls, and your acceptable use policy. This guide covers exactly which criteria are affected, what evidence you need, and what your AI AUP must say to satisfy auditors.
Why SOC 2 Auditors Are Now Asking About AI
Two years ago, AI tools were a footnote in SOC 2 audits. Now they are a standard line of inquiry. The reason is straightforward: employees routinely send company data — customer information, internal documents, source code, financial data — to external AI providers. From a SOC 2 perspective, this is a third-party data transmission, and it must be controlled and evidenced like any other.
Auditors check for three things:
- Does the organization know which AI tools employees are using?
- Are those tools assessed for vendor risk?
- Is there a policy preventing unauthorized data from being submitted?
If the answer to any of these is "we handle it informally," you have a finding waiting to happen.
Which Trust Service Criteria AI Affects
| Criterion | What It Covers | AI Risk | Evidence Needed |
|---|---|---|---|
| CC6.1 | Logical access controls | Who can access AI tools and with what data | Approved AI tools list, access provisioning records |
| CC6.7 | Transmission and disposal | Data sent to AI providers | DPA or data processing terms with each AI vendor |
| CC6.8 | Unauthorized disclosure | AI output containing confidential data | AI AUP, training records, output review requirements |
| CC9.2 | Vendor monitoring | AI vendors as third parties | Vendor risk assessments, annual review documentation |
| A1.1 | Availability | Dependency on AI tool uptime | Business impact assessment if AI is product-critical |
For most SaaS companies, CC6.7 and CC9.2 are the primary focus areas. CC6.1 matters if AI tool access is role-restricted (e.g., only engineers can use Copilot, not support staff).
The 3-Step Process: Inventory, Risk Tier, Document
Step 1: Build Your AI Tool Inventory
List every AI tool in active use across your organization. Include:
- Approved tools: those your IT or security team has reviewed
- Shadow AI: tools employees use that have not been formally approved (run a survey or check SSO logs)
For each tool, capture: tool name, vendor, plan/tier, what data is processed, and who uses it.
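If you want the inventory to be machine-readable from day one, a simple record structure is enough. A minimal sketch in Python; the field names, tool names, and example entries are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row in the AI tool inventory. Field names are illustrative."""
    tool_name: str                  # e.g., "GitHub Copilot"
    vendor: str                     # e.g., "GitHub"
    plan_tier: str                  # plan/tier, e.g., "Business", "Enterprise"
    data_processed: list[str] = field(default_factory=list)  # data categories sent to the tool
    users: str = ""                 # team or role that uses it
    approved: bool = False          # False = shadow AI until formally reviewed

# Hypothetical entries, mixing an approved tool and a shadow AI finding
inventory = [
    AIToolRecord("GitHub Copilot", "GitHub", "Business",
                 data_processed=["source code"], users="Engineering", approved=True),
    AIToolRecord("Otter.ai", "Otter.ai", "Free",
                 data_processed=["meeting audio"], users="Sales", approved=False),
]
```

Whatever structure you choose, keeping shadow AI entries in the same inventory as approved tools (with an `approved` flag) makes the gap visible instead of hiding it in a separate list.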
Step 2: Assign a Risk Tier
Not all AI tools carry the same risk. Tier them:
Tier 1 (Low risk): AI tools that process no company data — e.g., a coding autocomplete tool with telemetry disabled, an image generation tool used only with placeholder content.
Tier 2 (Medium risk): AI tools that receive internal company data but not customer personal data — e.g., an internal knowledge base AI, a meeting summarizer for internal calls only.
Tier 3 (High risk): AI tools that receive customer personal data, financial data, or regulated data — e.g., a customer support AI that sees ticket content, a sales AI with CRM access.
Tier 3 tools require a signed DPA, documented retention limits, and potentially additional contractual controls. Tier 1 tools may require only a brief risk note. Tier 2 tools sit in between.
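The tiering rules above are mechanical enough to encode. A minimal sketch, assuming the data categories captured in Step 1 use consistent labels; the category names here are examples, not an exhaustive taxonomy:

```python
# Categories that force a given tier. Labels are illustrative and should
# match whatever your Step 1 inventory actually uses.
TIER_3_CATEGORIES = {"customer personal data", "financial data", "regulated data"}
TIER_2_CATEGORIES = {"internal documents", "source code", "meeting audio"}

def assign_risk_tier(data_processed: list[str]) -> int:
    """Map the data categories a tool receives to a risk tier (1 = low, 3 = high)."""
    categories = {c.lower() for c in data_processed}
    if categories & TIER_3_CATEGORIES:
        return 3  # customer personal, financial, or regulated data
    if categories & TIER_2_CATEGORIES:
        return 2  # internal company data, no customer personal data
    return 1      # processes no company data
```

Encoding the rule this way keeps tier assignment consistent across reviewers; choosing the right category label for a new tool still requires human judgment.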
Step 3: Document and Link to Controls
Create a vendor risk record for each Tier 2 and Tier 3 AI tool in your vendor management system (Vanta, Drata, Secureframe, or a spreadsheet). Include:
- Vendor name and tool
- Risk tier
- DPA signed? (Yes/No/Link)
- SOC 2 report available? (Yes/No — most major AI providers publish one)
- Last review date
- Control owner
This documentation is what your auditor reads. It demonstrates CC9.2 compliance directly.
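If you manage this in a spreadsheet rather than Vanta, Drata, or Secureframe, the same fields export cleanly to CSV. A sketch reusing the hypothetical AIToolRecord and assign_risk_tier helpers from the earlier steps; the column names mirror the list above:

```python
import csv
from datetime import date

def write_vendor_risk_register(inventory, path="ai_vendor_risk_register.csv"):
    """Write one vendor risk row per Tier 2/3 tool. Column names are illustrative."""
    fields = ["vendor", "tool_name", "risk_tier", "dpa_signed",
              "soc2_report_available", "last_review_date", "control_owner"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for tool in inventory:
            tier = assign_risk_tier(tool.data_processed)
            if tier < 2:
                continue  # Tier 1 tools need only a brief risk note
            writer.writerow({
                "vendor": tool.vendor,
                "tool_name": tool.tool_name,
                "risk_tier": tier,
                "dpa_signed": "TODO",             # Yes / No / link to signed DPA
                "soc2_report_available": "TODO",  # Yes / No
                "last_review_date": date.today().isoformat(),
                "control_owner": "TODO",          # a role, not a person
            })
```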
What Your AI Acceptable Use Policy Needs to Say
Your existing AUP may reference internet use, email, and software installation. AI tools need a dedicated section. At minimum, for SOC 2 purposes, your AI AUP should cover:
1. Approved tools list. Name the tools employees are authorized to use. "Any AI tool the employee finds useful" is not an answer auditors accept.
2. Data classification rules. Specify which data categories are permitted in each tool tier (see the sketch after this list). Example: "Tier 2 AI tools: internal documents permitted; customer personal data prohibited. Tier 3 AI tools: customer personal data permitted only with a signed DPA and security review."
3. Output review requirements. AI-generated code, customer communications, and compliance documents must be reviewed by a human before use. State this explicitly.
4. Incident reporting. If an employee accidentally submits prohibited data to an AI tool, how should they report it? Link to your incident response process.
5. Control owner. Name the role responsible for AI tool governance (typically Security, IT, or a designated AI lead).
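For the data classification rules in point 2, a policy-as-code sketch can make the written policy testable. The tier-to-category mapping below is illustrative and must mirror whatever your written AUP actually says:

```python
# Illustrative policy-as-code version of the data classification rules in
# point 2. Keep this mapping in sync with the written AUP, not the reverse.
PERMITTED_BY_TIER = {
    1: set(),                                             # no company data
    2: {"internal documents"},                            # internal data only
    3: {"internal documents", "customer personal data"},  # DPA + security review required
}

def is_submission_permitted(tool_tier: int, data_category: str) -> bool:
    """Return True if the AUP permits this data category in this tool tier."""
    return data_category.lower() in PERMITTED_BY_TIER.get(tool_tier, set())

assert is_submission_permitted(2, "internal documents")
assert not is_submission_permitted(2, "customer personal data")
```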
The AI AUP does not need to be long. Two pages covering these five points is enough. What matters is that it exists, is dated, and employees have acknowledged it.
The Evidence Package for Your Auditor
When your SOC 2 audit includes AI tool questions, you should be able to produce:
| Evidence | What It Demonstrates |
|---|---|
| AI tools inventory | You know what tools are in use (CC6.1, CC9.2) |
| Risk tier assessments | Tools are assessed for vendor risk (CC9.2) |
| Signed DPAs (for Tier 3 tools) | Data transmission is governed (CC6.7) |
| AI AUP (signed by employees) | Unauthorized disclosure controls exist (CC6.8) |
| Employee training records | Awareness of AI data handling rules |
| Annual vendor review records | Ongoing monitoring of AI vendors (CC9.2) |
Most of this evidence overlaps with documentation you already maintain. The AI-specific additions are the tools inventory, risk tiers, and the AI AUP section.
Common Mistakes That Create Findings
Not disclosing shadow AI tools. If employees use AI tools that are not in your inventory and an auditor discovers this through interviews, it is a control gap — even if the tool itself is low-risk.
Generic AUP language. "Employees should use AI tools responsibly" does not satisfy CC6.8. Auditors want specific data classification rules.
Treating AI vendors as low-risk by default. Any vendor that receives company data requires a risk assessment, regardless of how well-known it is.
Relying on a vendor's SOC 2 report as your entire control. Your AI vendor's SOC 2 Type II report covers their environment. It does not cover your data handling decisions (what data you choose to send, who has access, your retention practices).
Ready to assess your AI tool risk? Use the AI Risk Assessment to rate each use case from Low to Critical. Then use the Policy Generator to create an AI acceptable use policy that includes the SOC 2-relevant clauses your auditors need.
