Copy this policy. Fill in 5 fields. Done in under 30 minutes.
The five fields to replace are marked with [BRACKETS]:
- [COMPANY NAME]
- [EFFECTIVE DATE]
- [POLICY OWNER NAME AND TITLE]
- [APPROVED TOOLS] — your specific approved AI tools
- [INDUSTRY-SPECIFIC PROHIBITIONS] — uses prohibited in your sector
The Policy Template
[COMPANY NAME] — AI Acceptable Use Policy
Version: 1.0
Effective: [EFFECTIVE DATE]
Owner: [POLICY OWNER NAME AND TITLE]
Review cycle: Annual (or when a significant new AI tool is deployed)
Purpose
This policy defines how employees at [COMPANY NAME] may use AI tools and automated systems in their work. It protects the company, our customers, and our data while enabling the productivity benefits of AI.
This policy applies to all employees, contractors, and third parties who access [COMPANY NAME] systems or handle company data.
Section 1: Approved AI Tools
The following AI tools are approved for business use. All other AI tools require written approval from [POLICY OWNER NAME AND TITLE] before use.
Approved tools and permitted tiers:
[APPROVED TOOLS — copy and customize this table]
| Tool | Approved plan | Permitted uses | Conditions |
|---|---|---|---|
| [Tool name] | [Business/Enterprise/API] | [Drafting, coding, summarizing, etc.] | [e.g., must be logged into company account] |
| [Tool name] | [Business/Enterprise/API] | [Specific use cases] | [Any conditions] |
Not approved without prior review:
- Personal or consumer-tier AI accounts (e.g., free ChatGPT, personal Claude.ai, personal GitHub Copilot) used for company work
- AI tools embedded in third-party SaaS products not on this list
- Browser extensions that send page content to AI services
If you are unsure whether a tool is approved, ask [POLICY OWNER NAME AND TITLE] before using it.
Section 2: Data Prohibitions
The following data must never be entered into any AI tool, including approved tools, unless a specific exception is granted in writing:
Absolute prohibitions:
- Passwords, API keys, credentials, tokens, or secrets — active or expired
- Customer personally identifiable information (PII): names, email addresses, phone numbers, account numbers, payment card data, or any other data that identifies a specific customer
- Employee personal data (beyond what is strictly necessary for an explicitly approved HR use case)
- Proprietary source code, algorithms, or technical specifications covered by an NDA with a client or partner
- Protected health information (PHI) as defined under HIPAA — unless a HIPAA Business Associate Agreement is in place with the AI vendor
- Any data classified as confidential or restricted under [COMPANY NAME]'s data classification policy
Handle with care (minimize before inputting):
- Production configuration files — remove sensitive values before sharing with AI
- Customer support transcripts — remove names and account identifiers before sharing
- Financial projections containing deal-specific data — use anonymized or aggregated versions
- Internal legal communications
The test before sending: Before pasting any document or code into an AI tool, ask: "If this data were exposed in a breach, what would the consequence be?" If the answer is "significant," do not send it.
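The pre-send test can be partially automated as a first line of defense. The sketch below is illustrative only: the function name is hypothetical, and the patterns catch obvious cases, not every prohibited data type. A real deployment would pair a check like this with a dedicated secret scanner and PII classifier.

```python
import re

# Illustrative patterns only -- these catch obvious secrets and PII,
# not every category prohibited by Section 2.
PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def pre_send_check(text: str) -> list[str]:
    """Return the names of prohibited-data patterns found in text.

    An empty list means no obvious hits; it is NOT proof the text is safe.
    """
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

hits = pre_send_check("Contact jane.doe@example.com, key sk_live_abcdef1234567890")
# hits == ["api_key", "email"] -- block the paste and redact before sending
```

A check like this belongs wherever text leaves your boundary: a paste helper, a browser extension gateway, or a proxy in front of the approved tools.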
Section 3: Human Oversight Requirements
AI output requires human review before action in these contexts:
| Use case | Required review level | Reviewer |
|---|---|---|
| Customer-facing communications | Full review and approval | Sender |
| Legal or compliance documents | Review by qualified person before use | [Role] |
| Code merged to production | Standard code review, plus AI-specific security check | Reviewer/approver |
| Financial decisions or projections | Human verification of all numbers | [Role] |
| Hiring or performance decisions | Human decision with AI as input only | [Role] |
| Medical, legal, or safety advice | Not to be provided via AI without professional review | [Role] |
"AI suggested it" is not a defense for errors in any of these categories. The employee who sends the communication, merges the code, or makes the decision is responsible for the output.
Section 4: Prohibited Uses
In addition to the data prohibitions above, the following uses of AI are prohibited regardless of tool or tier:
Always prohibited:
- Using AI to create fake reviews, testimonials, or endorsements attributed to real people who did not provide them
- Using AI to generate or distribute content that discriminates against protected classes in employment, housing, credit, or public accommodation
- Using AI to deceive customers about whether they are interacting with a human or AI system — AI interactions must be disclosed
- Using AI to produce content that violates copyright in ways that create legal liability for [COMPANY NAME]
- Using AI to generate, analyze, or distribute content that is illegal in any jurisdiction where [COMPANY NAME] operates
[INDUSTRY-SPECIFIC PROHIBITIONS] — add uses prohibited in your context:
Examples for healthcare teams:
- Using AI to generate clinical recommendations that are provided to patients without physician review
- Processing patient PHI in any AI tool without a signed HIPAA BAA from that vendor
Examples for financial services teams:
- Using AI to generate credit scores, risk ratings, or investment recommendations without human review and adverse action notice procedures
- Processing data subject to FCRA in AI tools without appropriate data processing agreements
Examples for legal teams:
- Using AI output as the basis for legal advice to clients without attorney review and verification
- Processing privileged communications in third-party AI tools without client consent
Section 5: Privacy and AI Disclosure
Customer disclosure: If customers interact with an AI system at [COMPANY NAME] — via chatbot, automated email, or AI-assisted support — they must be informed that AI is involved. A clear disclosure at the start of the interaction satisfies this requirement: "You are chatting with an AI assistant. [Optional: A human agent is available if you prefer.]"
Employee disclosure: Employees whose work is evaluated, monitored, or screened using AI tools must be informed. If [COMPANY NAME] uses AI in hiring, performance management, or access control, affected employees must receive notice.
Marketing claims: Any AI capability claim in marketing materials (accuracy rates, bias reduction, speed claims) must have documented evidence from [COMPANY NAME]'s own deployment. Claims sourced from a vendor's documentation do not transfer — your claim requires your evidence.
Section 6: Violation Reporting
If you accidentally send prohibited data to an AI tool:
- Stop immediately — close the session if possible
- Report to [POLICY OWNER NAME AND TITLE] within 24 hours
- If the data included active credentials, rotate them immediately
- Document in the AI incident log: date, tool, what was sent, action taken
Deliberate violations of this policy — knowingly sending prohibited data, using unapproved tools for restricted purposes, or generating prohibited content — are subject to disciplinary action up to and including termination.
Reports are confidential. Retaliation against good-faith reporters is prohibited.
Section 7: Policy Updates
This policy is reviewed annually. When significant new AI tools are deployed or when regulatory requirements change, the policy is updated and all affected employees are notified.
The current approved tools list is maintained separately and updated as needed between full policy reviews.
[COMPANY NAME] | [EFFECTIVE DATE] | Owner: [POLICY OWNER NAME AND TITLE]
How to Customize Section 1 (Approved Tools)
The approved tools list is the part of this policy most likely to need customization. Three steps:
Step 1: Inventory what your team actually uses. Run a survey or check expense reports for AI tool subscriptions. Include tools embedded in other products (Notion AI, Grammarly, HubSpot AI features).
Step 2: Check the DPA status for each tool. Use the AI Vendor DPA Tracker to confirm which tools have Data Processing Agreements available. Any tool used with EU personal data must have a DPA — if it does not, it cannot appear in the approved list without an exception.
Step 3: Assign permitted uses per tool. Some tools are appropriate for all uses; others should be limited. A general-purpose writing assistant is different from an AI tool embedded in your customer support system.
After You Deploy the Policy
Three follow-up actions matter:
- Brief your team. A policy that sits in a shared drive is not governance. Run a 30-minute walkthrough covering the data prohibitions and the human oversight requirements — those are the two sections most likely to prevent an incident.
- Create an AI incident log. A simple spreadsheet with date, tool, what happened, and action taken. When the FTC or an auditor asks for evidence of ongoing governance, this is what you produce.
- Set a review reminder. AI tools and regulations change fast. An annual policy review is the minimum; quarterly is better for teams actively deploying new AI capabilities.
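If your team prefers a script over a shared spreadsheet, the incident log described above can be kept as a plain CSV. A minimal sketch; the file name and column names are illustrative, not prescribed by the policy:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_incident_log.csv")  # illustrative location
FIELDS = ["date", "tool", "what_happened", "action_taken"]

def log_incident(tool: str, what_happened: str, action_taken: str) -> None:
    """Append one row to the AI incident log, creating the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "what_happened": what_happened,
            "action_taken": action_taken,
        })

log_incident("ChatGPT", "Pasted customer email addresses",
             "Session closed; reported to policy owner; log updated")
```

The four columns match what Section 6 requires employees to document, so the same file can serve both self-reports and audit requests.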
References
- FTC guidance: Aiming for Truth, Fairness, and Equity in Your Company's Use of AI
- EU AI Act Article 13: Transparency and provision of information to deployers
- EU AI Act Article 26: Obligations of deployers of high-risk AI systems
- GDPR Article 28: Data processor requirements
- Related: AI acceptable use policy — detailed implementation guide
- Related: FTC AI enforcement actions April 2026
- Related: AI Vendor DPA Tracker 2026
