CEO's AI Tool Approval Checklist: 10 Questions Before You Say Yes
Your CTO wants to roll out Cursor. A dev is already using ChatGPT for everything. Someone installed Notion AI last week without asking.
You have a decision to make — and not much of a framework for making it.
This checklist is 10 questions. It takes 30 minutes. It covers the things that cause real problems: source code leaving your environment, data used to train competitor models, no way to audit what happened after an incident.
Run it once per tool before you approve. Keep the answers in a doc.
Before you start
If you do not have a list of which tools your team is currently using, start there. Shadow AI — tools people use without approval — is more common than most CEOs think. See Shadow AI: What It Is and How to Prevent It before running this checklist.
Question 1: Is this tool using our data to train their model?
This is the question most people ask first — and rightly so. If a tool trains on your code or prompts, your internal logic, architecture decisions, and proprietary patterns could surface in outputs shown to other users.
How to check:
- Look for "Data Controls," "Privacy Settings," or "Training" in account settings.
- Look for a Data Processing Agreement (DPA) — enterprise plans usually have one.
What each major tool does by default:
| Tool | Training on your data by default | How to opt out |
|---|---|---|
| Cursor (Personal) | ON | Upgrade to Business plan |
| Cursor (Business) | OFF | Already off |
| ChatGPT (Free/Plus) | ON | Settings → Data Controls → toggle off |
| ChatGPT (Team/Enterprise) | OFF | Already off |
| Claude (Consumer) | May be used for safety training | Use API or Team plan |
| Claude (API / Team) | OFF | Already off |
| Notion AI | OFF | Already off per policy |
| NotebookLM | Check current policy | Settings |
Action: Verify, screenshot the setting, and save it in your approved tools doc.
Question 2: Does source code leave our environment?
This is different from question 1. Even if a tool does not train on your data, code may still transit their servers. That matters for two reasons: if their infrastructure is breached, your code is exposed; and if you have NDA obligations with clients, transmitting their code to a third party may violate the agreement.
Reality check by tool:
- Cursor / GitHub Copilot: Yes — code context is sent to servers on every completion request. This is how the product works. There is no offline mode.
- ChatGPT: Only if your dev pastes code into the chat manually.
- Claude via API: Yes, but Anthropic does not retain prompts after the request completes (per current policy).
- Copilot with local models (Ollama, etc.): Code stays local.
Action: Decide your risk tolerance. For most startups without regulated client data, Cursor Business is an acceptable risk. For teams handling client source code under NDA — get explicit written confirmation from the vendor or run a local model.
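If you go the local-model route, one quick sanity check is to verify that the completion endpoint your editor is configured with actually points at the loopback interface, not a cloud URL. A minimal sketch in Python — the URLs shown are illustrative (the first uses Ollama's default local address):

```python
from urllib.parse import urlparse
import ipaddress

def endpoint_is_local(url: str) -> bool:
    """Return True if the URL points at this machine (loopback)."""
    host = urlparse(url).hostname or ""
    if host == "localhost":
        return True
    try:
        # IP literal: check whether it is a loopback address.
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        # Any other hostname: treat as remote until proven otherwise.
        return False

# Default Ollama endpoint stays on-machine:
print(endpoint_is_local("http://127.0.0.1:11434/api/generate"))  # True
# A cloud completion endpoint does not:
print(endpoint_is_local("https://api.example-vendor.com/v1/completions"))  # False
```

This only checks where the traffic is addressed; it does not prove the tool sends nothing elsewhere. Treat it as a first-pass check, not an audit.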
Question 3: How long is our data retained? Can it be deleted?
Even if a tool does not train on your data, it may store prompts and outputs for days or weeks. That data is a liability: it can be subpoenaed, breached, or handed to authorities.
Retention by tool (approximate — verify with current policy):
- Cursor: Up to 30 days, deletion request available.
- ChatGPT: Until you delete the conversation or account.
- Claude API: No retention after request completes.
- Notion AI: Processed but not stored separately per policy.
Action: Find the "Data Retention" section in each tool's Privacy Policy. If you cannot find a clear answer in 5 minutes — that is a red flag. Escalate to vendor support before approving.
Question 4: If there is a breach, how fast do they notify us?
Most CEOs skip this question. It is the most important one after a breach happens.
GDPR requires notification within 72 hours. But that obligation applies only if the vendor processes EU personal data and you are in scope. For a Vietnamese startup with no EU customers, you likely have no legal guarantee of notification timing.
What to look for:
- "Security Incident Notification" in Terms of Service.
- "Breach Notification" in the DPA.
Action: Search the Terms of Service for "incident" or "breach." If there is no clause — document that you have no contractual notification right and decide if that is acceptable.
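The search in the Action step is easy to script: save the vendor's Terms of Service as plain text and flag every line that mentions an incident or breach. A minimal sketch — the sample text and keyword list are illustrative:

```python
KEYWORDS = ("incident", "breach", "notification")

def find_clauses(text: str, keywords=KEYWORDS):
    """Return (line number, line) pairs that mention any keyword."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        lowered = line.lower()
        if any(k in lowered for k in keywords):
            hits.append((lineno, line.strip()))
    return hits

# In practice you would read the saved ToS from a file;
# a short inline sample keeps this self-contained.
sample_tos = """1. Service availability ...
2. Security Incident Notification: we will notify affected
   customers without undue delay after confirming a breach.
3. Governing law ..."""

for lineno, line in find_clauses(sample_tos):
    print(f"line {lineno}: {line}")
```

If the script finds nothing across the ToS and DPA, that is your documented evidence that no contractual notification right exists.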
Question 5: Are your devs using personal accounts for work?
This is the most common problem, and it is easy to miss.
A dev using their personal ChatGPT free account for work tasks means:
- Training is ON by default (you cannot turn it off on their behalf).
- No audit log — you cannot see what was shared.
- If they leave the company, you cannot revoke access or retrieve history.
Action:
- Ask directly: "Is anyone using a personal AI account for work?"
- Mandate company accounts for all work-related AI use.
- Add this to your onboarding checklist.
Question 6: Do we have audit logs?
If something goes wrong — a dev shares a client's credentials in a prompt, or sensitive code ends up somewhere unexpected — can you reconstruct what happened?
Audit log availability:
| Tool | Personal/Free | Team/Business | Enterprise |
|---|---|---|---|
| ChatGPT | No | Limited | Yes |
| Cursor | No | Basic | Yes |
| Claude | No | Via API logs | Yes |
| Notion AI | No | Workspace logs | Yes |
Action: If you are on a personal or free plan — you are operating without a safety net. For any tool used with company data, upgrade to a plan with audit logs. The cost is usually $20–40/user/month. The cost of not having logs during an incident is much higher.
Question 7: Can we fully off-board if we stop using the tool?
Before you commit to a tool, test the exit.
Create a test account. Use it briefly. Then request account deletion and data erasure. See what happens. Some vendors confirm deletion within days. Others take weeks, require a support ticket, or have unclear policies.
Action: Do this test before company-wide rollout. If you cannot get a clear confirmation of data deletion — assume the data stays indefinitely.
Question 8: Does the tool have SOC 2 or ISO 27001 certification?
These certifications mean an independent auditor has reviewed the vendor's security practices. They are not perfect, but they are a meaningful baseline signal.
Status of common tools:
| Tool | SOC 2 Type II | ISO 27001 |
|---|---|---|
| Cursor | Yes | Check current |
| OpenAI (ChatGPT) | Yes | In progress |
| Anthropic (Claude) | Yes | Check current |
| Notion | Yes | Yes |
| Google (NotebookLM) | Yes | Yes |
Action: Check the vendor's Trust or Security page. No certification does not mean the tool is insecure — many good tools are pre-certification. But it means you are trusting their self-assessment, not an external audit.
Question 9: Who owns the output the AI generates?
Your devs are generating code, documents, and analysis using AI. In most cases, the vendor's terms say you own the output. But there are nuances worth knowing.
Current policies:
- Cursor, ChatGPT, Claude: You own the output per current Terms of Service.
- Exception: Do not use AI to generate content that could infringe on third-party IP — for example, output that closely reproduces copyrighted code from the model's training data.
Unresolved question: If two companies independently generate nearly identical code using the same AI tool, who owns it? There is no clear case law yet.
Action: Read the IP ownership clause in Terms of Service before using AI for anything you plan to patent or treat as a core trade secret.
Question 10: Does your team know what you just learned?
The best policy in the world does not help if only the CEO has read it.
Most security incidents involving AI tools come down to a dev not knowing the rules — not malice, just ignorance. They did not know their personal ChatGPT had training ON. They did not know pasting a client's API keys into a prompt was a problem.
Action:
- Run a 30-minute team briefing. Walk through this checklist together.
- Create a one-page "Approved AI Tools" doc: which tools are approved, which are prohibited, what account type to use.
- Add it to your onboarding checklist for new hires.
Template: AI Acceptable Use Policy for Small Teams
After the checklist: what to document
Keep a simple doc with these columns:
| Tool | Approved? | Plan/Tier | Training OFF? | Audit Logs? | Last reviewed |
|---|---|---|---|---|---|
| Cursor | Yes | Business | Yes | Basic | 2026-04-05 |
| ChatGPT | Yes | Team | Yes | Limited | 2026-04-05 |
| Claude | Yes | API | Yes | Via logs | 2026-04-05 |
| Notion AI | Yes | Team | Yes | Workspace | 2026-04-05 |
Review this doc every 6 months, or when a vendor announces a policy change.
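A register like this is also easy to audit automatically. A minimal sketch that flags any tool whose last review is older than the six-month cadence — the entries mirror the table above, and the 183-day cutoff is an approximation of six months:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=183)  # roughly 6 months

register = [
    {"tool": "Cursor",    "last_reviewed": date(2026, 4, 5)},
    {"tool": "ChatGPT",   "last_reviewed": date(2026, 4, 5)},
    {"tool": "Claude",    "last_reviewed": date(2026, 4, 5)},
    {"tool": "Notion AI", "last_reviewed": date(2026, 4, 5)},
]

def overdue_reviews(entries, today: date):
    """Return tool names whose last review is older than the interval."""
    return [e["tool"] for e in entries
            if today - e["last_reviewed"] > REVIEW_INTERVAL]

print(overdue_reviews(register, today=date(2026, 11, 1)))
```

Run it monthly (or wire it into a scheduled job) and you will never silently drift past a review deadline.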
Related resources
- AI Acceptable Use Policy Template — what to tell your team in writing
- Shadow AI: What It Is and How to Prevent It — find tools that are already in use
- AI Vendor Evaluation Checklist — deeper due diligence for paid contracts
- AI Tool Register Template — track all approved tools in one place
