A complete AI acceptable use policy for teams of 5–50. Copy the sections below into your team wiki. Fill in the bracketed fields. Have every employee acknowledge it. That is all that is required to have a documented AI governance policy.
This template covers the minimum documentation typically expected for GDPR compliance, SOC 2 vendor risk assessments, and enterprise customers' AI governance questionnaires. It takes about 30 minutes to customize and 30 minutes to roll out.
How to Use This Template
The policy has ten sections. Customize the bracketed fields. Sections marked [REQUIRED] must be completed before the policy is usable. Sections marked [OPTIONAL] can be kept, removed, or expanded based on your team's situation.
The version and date at the top of the document matter for audit purposes — every time you update the policy, increment the version number and update the date.
THE POLICY (Copy Everything Below This Line)
AI Acceptable Use Policy
Company: [Company Name]
Version: 1.0
Effective date: [Date]
Policy owner: [AI Lead Name], [Title]
Contact: [Email address for AI governance questions]
Review cadence: Annual, or whenever a major AI vendor changes their data handling terms
1. Purpose
This policy establishes how [Company Name] employees may use artificial intelligence tools in their work. It defines which tools are approved, what uses are prohibited, and how to report incidents involving AI tools.
AI tools can accelerate work and improve quality. They also create risks — data exposure, inaccurate outputs, and regulatory non-compliance — when used without clear guidelines. This policy reduces those risks without eliminating the benefits.
2. Scope
This policy applies to all employees, contractors, and consultants of [Company Name] who use AI tools in connection with their work for the company, whether on company devices or personal devices.
3. Approved AI Tools [REQUIRED — customize this list]
The following AI tools are approved for use in the categories described. Use of any AI tool not on this list requires approval from [AI Lead Name] before use.
| Tool | Approved For | Data Restrictions | Notes |
|---|---|---|---|
| ChatGPT (OpenAI, via chat.openai.com) | Drafting, summarizing, research | No customer personal data. No confidential business data unless Business plan with data controls is active. | Free tier retains data for training. Business plan available. |
| Claude (Anthropic, via claude.ai) | Drafting, analysis, coding | No customer personal data on free/Pro plans. Claude Teams plan required for zero-retention. | |
| GitHub Copilot | Code completion, code review | No code containing hardcoded secrets, credentials, or PII | Requires GitHub Copilot Business or Enterprise for enterprise data controls |
| Grammarly | Grammar, spelling, editing | May be used with business documents. Avoid pasting full customer emails or PII-containing text. | |
| [Add additional approved tools] | | | |
Tools requiring separate approval before use: Any AI tool not listed above. Contact [AI Lead Name] with the tool name, intended use case, and data that would be processed. Approval turnaround: 5 business days.
4. Prohibited Uses [REQUIRED]
The following uses of AI tools are prohibited regardless of which tool is used:
4.1 Data prohibitions
- Inputting customer personal data (names, emails, addresses, payment information, health data, or any data subject to GDPR, CCPA, or HIPAA) into any AI tool that does not have a signed Data Processing Agreement (DPA) with [Company Name]
- Inputting employee personal data into AI tools for HR, performance evaluation, or hiring decisions without explicit approval from [AI Lead Name] and legal counsel
- Inputting code containing hardcoded credentials, API keys, passwords, or secrets into any AI tool
- Inputting confidential business information (M&A plans, unreleased product details, financial projections) into consumer-tier AI tools with data retention
4.2 Output prohibitions
- Publishing or sending AI-generated content to external parties (customers, regulators, courts) without human review and verification of accuracy
- Using AI tools to generate professional advice (legal, medical, financial, accounting) for clients without expert human review and sign-off
- Using AI tools to create content that impersonates a real person
- Using AI tools to generate fake reviews, testimonials, or endorsements
- Representing AI-generated content as human-authored when authenticity matters to the recipient
4.3 Decision prohibitions
- Using AI tools to make final hiring decisions, performance ratings, or disciplinary decisions without documented human review
- Using AI tools to make credit or financial decisions affecting customers without documented human oversight and an adverse action process
5. Data Classification Rules [REQUIRED — adjust categories for your data types]
Before using any AI tool with company data, classify the data:
Class 1 — Public: Information that is or can be publicly disclosed. AI tools may process this data without restriction. Examples: Published blog posts, public product documentation, publicly available competitor analysis.
Class 2 — Internal: Non-public business information. Approved AI tools may process this data. Do not use consumer-tier tools (free ChatGPT, etc.) unless they contractually do not retain or train on input data. Examples: Internal meeting notes, draft documents, engineering design docs not containing secrets, sales strategies.
Class 3 — Confidential: Sensitive business information. Only AI tools with signed DPAs and zero-retention commitments may process this data. Requires [AI Lead] approval for each use case. Examples: Unreleased product roadmaps, M&A activity, financial data, vendor contracts, employee performance reviews.
Class 4 — Restricted: Personal data subject to GDPR, CCPA, HIPAA, or similar laws; payment card data; credentials and secrets. AI tools may NOT process this data unless a DPA is in place AND legal counsel has reviewed the use case. Examples: Customer names and emails, employee records, health information, passwords and API keys, payment card numbers.
When in doubt: Treat the data as one class higher than you think it is.
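The classification rules above double as a simple decision procedure. For teams that want to encode them in an internal script or tool-request form, here is a minimal sketch in Python; the `ToolProfile` fields, function name, and return strings are illustrative assumptions for this example, not part of the policy itself.

```python
# Illustrative sketch, not part of the policy text: encodes the Class 1-4 rules
# above as a lookup so a script can answer "may this tool process this data?".
# The ToolProfile fields and return strings are assumptions for this example.

from dataclasses import dataclass

@dataclass
class ToolProfile:
    name: str
    approved: bool       # listed in Section 3 or separately approved by the AI Lead
    no_retention: bool   # vendor contractually does not retain or train on inputs
    dpa_signed: bool     # signed Data Processing Agreement with the company

def may_process(data_class: int, tool: ToolProfile) -> str:
    """Return 'yes', 'no', or 'needs approval' for a data class (1-4) and a tool."""
    if not tool.approved:
        return "no: tool not approved (see Section 3)"
    if data_class == 1:   # Public
        return "yes"
    if data_class == 2:   # Internal; this sketch treats retention as the consumer-tier signal
        return "yes" if tool.no_retention else "no: tool retains or trains on input data"
    if data_class == 3:   # Confidential
        if tool.dpa_signed and tool.no_retention:
            return "needs approval: ask the AI Lead for this use case"
        return "no: requires a DPA and zero retention"
    if data_class == 4:   # Restricted
        if tool.dpa_signed:
            return "needs approval: DPA in place, legal counsel must review the use case"
        return "no: no DPA in place"
    return "no: unknown class, treat as Restricted"

# Example: free-tier ChatGPT with internal (Class 2) data
print(may_process(2, ToolProfile("ChatGPT (free)", approved=True,
                                 no_retention=False, dpa_signed=False)))
```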
6. Incident Reporting [REQUIRED]
An AI incident is any event in which:
- Customer, employee, or other personal data was inputted into an AI tool without authorization or a signed DPA
- An AI tool produced an output that was sent to a customer, regulator, or third party and contained material factual errors
- An AI tool produced content that violates this policy and was published or shared externally
- Credentials, API keys, or secrets were inputted into an AI tool
How to report:
- Stop the activity immediately
- Contact [AI Lead Name] at [email] within 24 hours of discovering the incident
- Do not delete the evidence — preserve the conversation, output, or log
- Complete the incident report form at [link to incident report form or doc]
What happens after you report: [AI Lead Name] will assess the incident within 48 hours. If personal data was exposed to a vendor without a DPA, the legal and compliance team will determine whether breach notification is required under GDPR (72-hour deadline to notify the supervisory authority), CCPA, or applicable state laws. You will not be penalized for good-faith reporting.
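The deadlines in this section stack in a specific order. As a rough illustration, the sketch below computes them from the discovery and report times; the function name is this example's, and it assumes the GDPR 72-hour clock starts when the company becomes aware of the breach (here, the discovery time).

```python
# Illustrative sketch: the deadlines described above, computed from when an
# incident was discovered and when it was reported internally.

from datetime import datetime, timedelta, timezone

def incident_deadlines(discovered_at: datetime, reported_at: datetime) -> dict:
    return {
        # Section 6: contact the AI Lead within 24 hours of discovery
        "report_to_ai_lead_by": discovered_at + timedelta(hours=24),
        # Section 6: the AI Lead assesses within 48 hours (counted here from the report)
        "ai_lead_assessment_by": reported_at + timedelta(hours=48),
        # GDPR Art. 33: notify the supervisory authority within 72 hours of awareness
        "gdpr_notification_window_ends": discovered_at + timedelta(hours=72),
    }

discovered = datetime(2025, 6, 2, 9, 30, tzinfo=timezone.utc)
reported = datetime(2025, 6, 2, 15, 0, tzinfo=timezone.utc)
for label, deadline in incident_deadlines(discovered, reported).items():
    print(f"{label}: {deadline:%Y-%m-%d %H:%M} UTC")
```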
7. AI-Generated Content Disclosure [OPTIONAL — required if you publish AI-generated content externally]
[Company Name] may use AI tools to assist in creating content for external audiences. The following disclosure standards apply:
- Marketing content: AI may be used to draft content; human review and editing are required before publication. AI-assisted marketing content does not require a disclosure label unless it is presented as human-authored expert opinion (for example, a bylined thought-leadership piece).
- Customer communications: AI may assist in drafting responses; a human must review and approve before sending. Automated AI responses to customer queries must disclose that a response was generated or reviewed by AI if the customer explicitly asks.
- Legal or regulatory filings: AI may not generate final content for legal or regulatory filings without explicit attorney review and sign-off.
- Amazon KDP and publishing platforms: Follow the platform's specific AI disclosure requirements. See KDP AI disclosure policy if applicable.
8. Employee Responsibilities
Every employee is responsible for:
- Reading this policy before using any AI tool for work purposes
- Following the data classification rules — when in doubt, ask [AI Lead Name] before proceeding
- Reporting incidents promptly using the procedure in Section 6
- Completing AI governance training as scheduled by [AI Lead Name]
- Not approving AI tools for team use without authorization from [AI Lead Name]
9. Enforcement
Violations of this policy may result in disciplinary action up to and including termination of employment, depending on severity and intent. Unintentional violations that are promptly reported and handled in good faith will be treated as learning opportunities, not disciplinary matters.
10. Employee Acknowledgment [REQUIRED]
By signing below [or clicking the acknowledgment checkbox in [HR system]], I confirm that:
- I have read and understood the [Company Name] AI Acceptable Use Policy
- I understand which AI tools are approved and under what conditions
- I understand the data classification rules and which data types may not be processed by AI tools
- I understand how to report an AI incident
- I agree to comply with this policy
Name: ________________________________
Signature / Acknowledgment date: ________________________________
Department: ________________________________
END OF POLICY TEMPLATE
Customization Guide
Minimum required customizations:
- Company name throughout
- Approved tools list (Section 3) — audit which tools your team actually uses
- AI Lead name and contact email
- Effective date
- Incident report form link (can be a Google Form to start)
- Data classification examples that match your actual data
Optional additions for regulated industries:
- HIPAA: Add a Section 11 covering AI and PHI handling, BAA requirements for AI vendors
- Financial services: Add automated decision disclosure obligations under ECOA and CFPB guidance
- EU operations: Reference the EU AI Act high-risk classification and your conformity assessment status
What to do if you have no AI lead yet: Assign one before publishing the policy. It can be a part-time role. The AI lead does not need to be a compliance specialist — a senior engineer or operations manager with 2–3 hours/week is sufficient for a team under 25.
Use the AI Governance Checklist to verify you have all six governance areas covered beyond just the policy document. For tracking which AI tools your team uses, the AI Tool Register Template gives you a Notion-ready database with the right fields.
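If you want a quick stand-in before setting up that database, here is a minimal sketch of what one register record might hold, modeled on the Section 3 table plus the DPA and review fields auditors usually ask about. The field names are illustrative assumptions, not the actual AI Tool Register Template schema.

```python
# Minimal sketch of a tool register record, modeled on the Section 3 table plus
# DPA and review fields. Field names are illustrative, not the actual
# AI Tool Register Template schema.

from dataclasses import dataclass

@dataclass
class AIToolRecord:
    tool: str                # e.g. "Claude (Anthropic, via claude.ai)"
    vendor: str
    approved_for: list[str]  # mirrors the "Approved For" column
    data_restrictions: str   # mirrors the "Data Restrictions" column
    plan_or_tier: str        # free / Pro / Business / Enterprise
    dpa_signed: bool         # update to reflect your actual contract status
    owner: str               # who requested or administers the tool
    approved_on: str         # ISO date, e.g. "2026-01-15"
    next_review: str         # tie this to the annual policy review

register = [
    AIToolRecord(
        tool="GitHub Copilot",
        vendor="GitHub",
        approved_for=["Code completion", "Code review"],
        data_restrictions="No hardcoded secrets, credentials, or PII in prompts",
        plan_or_tier="Copilot Business",
        dpa_signed=False,    # placeholder: set per your signed agreements
        owner="[AI Lead Name]",
        approved_on="[Date]",
        next_review="[Date + 1 year]",
    ),
]
print(f"{len(register)} approved tool(s) on record")
```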
