The FTC does not have a dedicated AI enforcement law. It doesn't need one. Section 5 of the FTC Act — which prohibits unfair or deceptive acts or practices — is broad enough to reach most AI-related violations the FTC is currently pursuing.
Small teams often assume FTC enforcement is for Big Tech. It isn't. The FTC has cited companies with under 50 employees. If your company uses AI to make claims about products, automate decisions that affect consumers, or generate content that influences purchasing, you have FTC exposure.
Here is what the FTC is actually enforcing and what reduces your risk.
The Three Active FTC Enforcement Categories
1. Unsubstantiated AI Performance Claims (AI Washing)
The FTC treats unsubstantiated AI claims the same as unsubstantiated claims about any other product feature: a deceptive trade practice under Section 5.
What counts as an AI washing violation:
- Claiming your AI has a specific accuracy rate (e.g., "98% accurate") without documented, reproducible testing methodology
- Claiming AI "eliminates bias" or "removes human error" without bias testing on your specific deployment
- Marketing a product as "AI-powered" or "machine learning-driven" when it uses simple rule-based logic or lookup tables
- Claiming AI capabilities to investors or customers that the product does not yet have
The FTC's 2023 report on AI explicitly flagged AI washing as an enforcement priority. The April 2026 enforcement sweep included consent orders requiring companies to cease specific capability claims, conduct and publish accuracy testing, and submit to FTC monitoring for three years.
Controls that reduce exposure:
- Document the testing basis for every AI performance claim in your marketing
- Review AI claims in product pages, sales materials, investor decks, and job postings
- Get sign-off from a technical owner before any AI performance claim goes public
- Use hedged language: "in internal testing" or "on our benchmark dataset" rather than absolute claims
2. Undisclosed Automated Decision-Making
The FTC's position is that using AI or automated systems to make consequential decisions about consumers — without disclosure — may constitute an unfair or deceptive practice.
Consequential decisions include:
- Hiring and employment screening
- Credit and loan decisions
- Pricing that varies by individual (personalized pricing)
- Content moderation that affects access to services
- Insurance underwriting
- Tenant screening
Several federal laws add specific disclosure requirements on top of the FTC's general authority:
- ECOA / FCRA: Algorithmic credit decisions require adverse action notices explaining the specific factors used
- Fair Housing Act: Algorithmic tenant screening is subject to fair housing requirements
- EEOC guidance: AI-assisted hiring may create disparate impact liability
Controls that reduce exposure:
- Audit every automated decision in your product or operations that affects consumer rights, employment, credit, or access to services
- Add explicit disclosure when automated systems are used in consequential decisions
- For credit or employment decisions, confirm your adverse action notice process covers algorithmic decisions
- Keep a decision logic log: what system made the decision, what inputs it used, when
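The decision logic log in the last control can be as simple as an append-only JSONL file written at decision time. A sketch under assumed field names and file location (nothing here is a regulatory format):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("decision_log.jsonl")  # hypothetical location

def log_decision(system: str, inputs: dict, outcome: str,
                 disclosure_shown: bool) -> dict:
    """Append one automated-decision event: which system decided,
    what inputs it used, what it decided, and when."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "inputs": inputs,
        "outcome": outcome,
        "disclosure_shown": disclosure_shown,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_decision(
    system="tenant-screening-v2",      # hypothetical system name
    inputs={"credit_score_band": "B", "eviction_records": 0},
    outcome="approved",
    disclosure_shown=True,
)
```

An append-only log like this also feeds the adverse action process: when a notice is required, the logged inputs are the factors you explain.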
3. AI-Generated Fake Reviews and Fake Social Proof
The FTC's Endorsement and Testimonial Guides were updated in 2023 specifically to address AI-generated reviews. Using AI to generate fake reviews, fabricate endorsements, or produce synthetic testimonials is a deceptive trade practice.
Violations include:
- Using AI to generate customer reviews posted under fake accounts
- Using AI to write testimonials attributed to real customers without their knowledge
- Using AI-generated social proof (star ratings, review summaries) that misrepresents actual customer sentiment
- Paying for AI-generated review content posted on third-party platforms
Controls that reduce exposure:
- Audit your review and testimonial collection process for any AI-generated content
- If you use AI to help draft review responses, do not use it to generate the reviews themselves
- Review any third-party tools in your stack that touch customer reviews for synthetic content risk
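The audit in the first control is a provenance check: every published review should trace back to a real customer. A minimal sketch, assuming a hypothetical review-record schema with `source` and `customer_id` fields:

```python
# Flag review records whose provenance can't be tied to a real customer.
# The record schema here is an illustrative assumption.

def audit_reviews(reviews: list[dict]) -> list[dict]:
    """Return reviews that need manual verification before publication."""
    flagged = []
    for r in reviews:
        if r.get("source") == "ai_generated":
            flagged.append({**r, "reason": "AI-generated review content"})
        elif not r.get("customer_id"):
            flagged.append({**r, "reason": "no verifiable customer identity"})
    return flagged

reviews = [
    {"id": 1, "customer_id": "c-481", "source": "customer_submission"},
    {"id": 2, "customer_id": None, "source": "ai_generated"},
]
print(audit_reviews(reviews))
```

The same check applied to third-party review tools in your stack surfaces the synthetic-content risk the third control describes.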
What Triggers an FTC Investigation
FTC AI enforcement cases have been initiated through:
- Consumer complaints filed with the FTC or state AGs
- Competitor complaints — particularly for AI washing in B2B sales
- Press coverage of AI-related failures or misleading marketing
- Regulatory referrals from the SEC, CFPB, or state regulators
- FTC's own market sweeps — the FTC periodically sweeps specific sectors
Small teams are not invisible to this process. Consumer complaints reach the FTC regardless of company size. Competitor complaints are common in B2B AI markets where vendors monitor each other's claims.
The Five Controls That Reduce FTC Exposure
| Control | What It Covers |
|---|---|
| AI claims documentation | Documented testing basis for every AI performance claim |
| Automated decision audit | Inventory of AI systems making consequential decisions, with disclosures |
| Adverse action process | Verification that algorithmic credit/employment decisions carry proper notices |
| Review authenticity policy | Written policy prohibiting AI-generated reviews; staff training |
| Privacy notice AI audit | Privacy policy reflects actual AI data use (training, processing, retention) |
None of these require legal staff to implement. They require assigning an owner, doing the work, and documenting it. The documentation is what matters in an FTC investigation — the FTC requests records, and companies without records look worse.
Want to assess your FTC exposure? The AI Risk Assessment includes FTC-relevant risk factors across marketing claims, automated decisions, and data use. The Policy Generator creates an AI acceptable use policy that covers the disclosure and documentation requirements above.
