Before signing any AI tool contract, run it through this 30-question checklist. Questions are organized by risk area. Each has a pass/fail criterion and a note on how vendors typically respond. The full checklist takes 45–90 minutes per vendor for a medium-risk tool.
How to use this checklist:
- Low-risk tools (no personal data, outputs reviewed before use): Sections 1 and 3 only
- Medium-risk tools (business data, workflow integration): Sections 1–4
- High-risk tools (personal data at scale, consequential decisions, regulated industries): All five sections plus legal review of Section 5
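The tiering rules above map cleanly to a small helper, useful if you track assessments in a script or spreadsheet export. This is a sketch; the function name and tier labels are ours, and the section lists mirror the bullets above.

```python
def sections_for_tier(risk_tier: str) -> list[int]:
    """Return the checklist sections to complete for a given risk tier.

    Mirrors the tiering rules above. Unknown tiers raise, so a typo in a
    vendor record fails loudly instead of silently skipping sections.
    """
    tiers = {
        "low": [1, 3],            # no personal data, outputs reviewed before use
        "medium": [1, 2, 3, 4],   # business data, workflow integration
        "high": [1, 2, 3, 4, 5],  # Section 5 also requires legal review
    }
    tier = risk_tier.strip().lower()
    if tier not in tiers:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")
    return tiers[tier]
```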
Section 1: Security Posture
| # | Question | Pass Criterion | Vendor Deflection Risk |
|---|---|---|---|
| 1 | Does the vendor have a SOC 2 Type II report? | Yes — report issued within 12 months | High — vendors often cite "in progress" indefinitely |
| 2 | Is the SOC 2 report available for customer review (under NDA)? | Yes — full report, not just summary | Medium — some share only attestation letters |
| 3 | What was the last penetration test date? | Within 12 months, by a named third party | High — many vendors won't share pentest reports |
| 4 | Is there a vulnerability disclosure policy? | Yes — public URL exists | Low |
| 5 | How is customer data encrypted at rest and in transit? | AES-256 at rest, TLS 1.2+ in transit minimum | Low |
| 6 | What is the vendor's incident response SLA for breach notification? | 72 hours or less (GDPR standard) | Medium — SaaS contracts often say "reasonable time" |
Key insight: If a vendor cannot provide a SOC 2 Type II report within 2 weeks of request, treat it as a red flag. Type I reports (design of controls) are significantly weaker than Type II (operating effectiveness). Reject Type I alone for any medium or high-risk tool.
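The 12-month recency criterion for SOC 2 reports (Q1) and pentests (Q3) can be made mechanical rather than judgment-based. A minimal sketch, assuming you record the report's issue date and the assessment date; the function and parameter names are ours.

```python
from datetime import date

def report_is_current(issued: date, assessed: date, max_age_days: int = 365) -> bool:
    """True if a SOC 2 report or pentest was issued within the allowed window.

    Rejects future-dated reports as well as stale ones.
    """
    age = (assessed - issued).days
    return 0 <= age <= max_age_days
```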
Section 2: Data Handling and AI Training
This is the section where AI vendor practices differ most significantly from standard SaaS due diligence.
| # | Question | Pass Criterion | Vendor Deflection Risk |
|---|---|---|---|
| 7 | Is customer data used to train or fine-tune the vendor's AI models? | No — with written confirmation | Very High — default answer is often "yes" without an opt-out mechanism |
| 8 | Is there a training opt-out available? | Yes — opt-out confirmed in contract or DPA | High — opt-out may require enterprise tier or explicit request |
| 9 | What is the data retention period for inputs and outputs? | Specified in contract — ideally 30 days or less for prompts | Medium |
| 10 | Who has access to customer data within the vendor organization? | Named roles — support, security, engineering (if any) | Medium |
| 11 | Is customer data isolated from other customers' data? | Yes — logical or physical separation confirmed | Low for cloud services |
| 12 | What happens to customer data on contract termination? | Deleted within 30 days, written confirmation provided | Medium |
| 13 | Does the vendor share customer data with third parties for purposes other than service delivery? | No — with DPA confirming this | Low if DPA exists |
The training data question is the most important. Many vendors — including major AI platforms — default to using user inputs to improve their models unless you actively opt out. This is stated in their terms of service but rarely in their sales conversations. Ask directly: "Will our prompts and outputs be used for model training?" Get the answer in writing in the DPA.
Red flag list for this section:
- "We anonymize your data before using it for training" — anonymization does not prevent disclosure concerns; it also varies widely in quality
- "Our models are trained on public data only" — this refers to the base model, not fine-tuning on customer inputs
- "You can delete your data any time" — this is not the same as a contractual retention limit
Section 3: Legal and Contract Terms
| # | Question | Pass Criterion | Vendor Deflection Risk |
|---|---|---|---|
| 14 | Is a Data Processing Agreement (DPA) available? | Yes — standard DPA available or negotiable | Low for established vendors, high for startups |
| 15 | Does the DPA identify the vendor as a data processor (not controller)? | Yes — controller/processor distinction is explicit | Medium |
| 16 | What is the liability cap for AI-generated errors or hallucinations? | Specified — ideally limited to fees paid, not excluded entirely | High — most SaaS contracts exclude all AI output liability |
| 17 | Does the contract include a subprocessor list? | Yes — with requirement to notify of changes | Medium |
| 18 | What are the terms for using vendor-generated outputs commercially? | Confirmed customer ownership of outputs | Low for most AI tools |
| 19 | Are there restrictions on industries or use cases? | Known restrictions documented | Low — but unread ToS restrictions have caught teams out |
| 20 | What is the contract exit clause? | Data portability and deletion within 30–60 days | Medium |
On AI liability: Almost every standard AI vendor contract excludes liability for incorrect, harmful, or misleading AI outputs. Treat this exclusion as the baseline, not a negotiating failure. What matters is whether you have documented it internally and have appropriate human review processes for AI outputs that affect customers, employees, or compliance. The absence of vendor liability shifts that responsibility to your team.
Section 4: Compliance Documentation
| # | Question | Pass Criterion | Vendor Deflection Risk |
|---|---|---|---|
| 21 | Is the vendor GDPR-compliant if processing EU personal data? | Yes — DPA includes SCCs or adequacy decision where needed | Low for established vendors |
| 22 | Is the vendor CCPA-compliant if processing California resident data? | Yes — service-provider terms in the contract restricting sale or sharing of personal information | Medium |
| 23 | Is a HIPAA Business Associate Agreement (BAA) available if processing PHI? | Yes — BAA offered for healthcare use cases | Medium — many vendors don't offer BAAs at all |
| 24 | Has the vendor performed a Data Protection Impact Assessment (DPIA) for high-risk AI processing? | Yes — available on request | High — most vendors have not done this |
| 25 | Does the vendor's AI system fall under the EU AI Act? If so, what is its risk classification? | Risk classification documented | High — most vendors have not formally classified |
HIPAA note: If your team handles protected health information (PHI), you need a BAA before using any AI tool with that data. The consumer tiers of major AI products (OpenAI's ChatGPT, Anthropic's Claude.ai) do not come with BAAs. OpenAI Enterprise, Microsoft Azure OpenAI, Google Cloud Vertex AI, and AWS Bedrock all offer BAA-eligible configurations. The API tier is often BAA-eligible even when the web interface is not.
Section 5: Operational Reliability
| # | Question | Pass Criterion | Vendor Deflection Risk |
|---|---|---|---|
| 26 | What is the vendor's uptime SLA? | 99.9% or better, with financial remedy for breaches | Low |
| 27 | Does the vendor notify customers of model version changes? | Yes — versioning policy and changelog available | High — many AI vendors change models silently |
| 28 | What is the API rate limit and what happens when it is exceeded? | Documented limit with graceful degradation | Low |
| 29 | What is the vendor's policy on model deprecation? | Minimum 6-month notice before deprecating a model version | High — critical for production workflows |
| 30 | Does the vendor have a disaster recovery plan? | Yes — RTO and RPO documented | Medium |
Model versioning is the underrated risk. AI vendors routinely update the underlying model that powers their API without announcing the change. A model update can change output style, accuracy, or behavior in ways that break your workflows or compliance expectations. Before integrating any AI tool into production processes, confirm the vendor's versioning and notification policy. For critical workflows, pin to a specific model version if the API supports it.
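One way to operationalize version pinning: record the model identifier your integration was validated against, and fail fast whenever the live API reports something else. A minimal guard, assuming the vendor's API response includes a model identifier; the identifier strings and names below are placeholders, so check your vendor's API reference for the real field.

```python
PINNED_MODEL = "vendor-model-2024-06-01"  # hypothetical validated version identifier

def check_model_version(reported_model: str) -> None:
    """Raise if the API reports a model other than the one we validated against.

    Call this on each response (or at startup) before trusting outputs in
    production workflows.
    """
    if reported_model != PINNED_MODEL:
        raise RuntimeError(
            f"model drift detected: expected {PINNED_MODEL!r}, "
            f"got {reported_model!r}; re-validate before routing production traffic"
        )
```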
Scoring and Decision Framework
After completing the relevant sections:
| Score | Recommendation |
|---|---|
| 0 fail answers | Proceed — complete DPA and add to approved tools list |
| 1–2 fail answers | Negotiate — most fails can be resolved in contract language |
| 3–4 fail answers | Escalate — legal review required before proceeding |
| 5+ fail answers | Reject or require major remediation before any use |
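The scoring table translates directly into a lookup, handy if you aggregate fail counts across many vendors. The thresholds are copied from the table above; the function name is ours.

```python
def recommendation(fail_count: int) -> str:
    """Map a checklist fail count to the decision tiers in the scoring table."""
    if fail_count < 0:
        raise ValueError("fail count cannot be negative")
    if fail_count == 0:
        return "Proceed"
    if fail_count <= 2:
        return "Negotiate"
    if fail_count <= 4:
        return "Escalate"
    return "Reject"
```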
The 3 questions that are automatic blockers (no mitigation possible):
- Q7: Data used for training with no opt-out available — reject for any sensitive data use
- Q14: No DPA available and vendor refuses to provide one — reject for any personal data processing
- Q23: No BAA available for healthcare data — reject for any PHI use; no workaround exists
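Because blockers override the fail-count score, it helps to check them separately before applying the scoring table. A sketch, assuming failed questions are tracked by number; the reason strings summarize the bullets above.

```python
AUTOMATIC_BLOCKERS = {
    7: "customer data used for training with no opt-out",
    14: "no DPA available and vendor refuses to provide one",
    23: "no BAA available for healthcare data",
}

def blocker_failures(failed_questions: set[int]) -> list[str]:
    """Return the reasons for any automatic-blocker questions that failed.

    A non-empty result means reject, regardless of the total fail count.
    """
    return [
        reason
        for number, reason in sorted(AUTOMATIC_BLOCKERS.items())
        if number in failed_questions
    ]
```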
Vendor Assessment Record
For each vendor assessment, document:
| Field | Entry |
|---|---|
| Vendor name | |
| Tool / product assessed | |
| Date of assessment | |
| Assessor | |
| Risk tier (Low / Medium / High) | |
| Sections completed | |
| Fail count | |
| DPA signed (Y/N/Pending) | |
| Training opt-out confirmed (Y/N/N/A) | |
| Decision | Approved / Rejected / Conditional |
| Conditions / remediation required | |
| Next review date | |
Store one record per vendor per assessment cycle. For high-risk vendors, reassess quarterly. For medium-risk, annually or when the vendor announces material changes to its terms or data practices.
Ready to assess your current AI tool stack? Use the AI Risk Assessment Tool to rate your existing tools against these criteria. The Vendor Scorecard lets you run side-by-side comparisons across multiple vendors on the questions that matter most.
