The Federal Trade Commission applies existing consumer protection law — the FTC Act's prohibition on unfair or deceptive acts — to AI products and services. Operation AI Comply (September 2024) brought simultaneous actions against five companies for deceptive AI product claims, fake AI-generated reviews, and AI-enabled fraud. The FTC has made clear that 'AI washing' — overstating AI capabilities — is an enforcement priority.
If you market AI capabilities to customers, the FTC's existing authority applies to every claim you make about your AI. Calling your AI 'bias-free', '100% accurate', or 'fully autonomous' when it is not constitutes deceptive advertising. Your AI marketing copy, product pages, and sales materials should accurately reflect what your AI can and cannot do. The FTC has specifically warned against using AI to generate fake reviews or testimonials — a practice it treats as per se deceptive.
Up to $51,744 per violation, per day, for knowing violations of FTC orders; significant civil penalties in enforcement actions
The FTC's Operation AI Comply brought simultaneous enforcement actions against five companies for deceptive AI claims. DoNotPay ($193K settlement) falsely claimed its AI was a 'robot lawyer'; Ascend Ecom charged consumers for AI-powered passive-income businesses that did not deliver; Rytr ($50K settlement) sold a service capable of generating fake reviews at scale; NGL Labs collected children's data and used AI to send fake messages; Omni AI made false income claims about AI tools.
Outcome: A total of $2.5M+ in civil penalties and settlements across the five cases. Orders prohibit deceptive marketing of AI capabilities and require clear disclosure of AI limitations.
Rite Aid deployed facial recognition AI in hundreds of stores to flag suspected shoplifters. The system disproportionately misidentified people of color, women, and younger individuals as threats — causing them to be wrongly accused, followed, and publicly embarrassed in stores. The FTC found that Rite Aid failed to ensure the system was accurate and did not take reasonable steps to prevent misidentification harm.
Outcome: The FTC banned Rite Aid from using AI facial recognition in retail settings for five years. The company is required to delete all facial images collected, develop a comprehensive AI governance program, and implement meaningful accuracy testing before deploying any AI surveillance tool.
DoNotPay marketed itself as 'the world's first robot lawyer,' claiming its AI could help consumers fight corporations, protect privacy, and handle legal matters as well as a human attorney. The FTC found these claims were not substantiated — the AI had not been tested against human lawyers, and the company lacked evidence that the AI could perform the legal tasks it claimed.
Outcome: $193,000 settlement. DoNotPay is prohibited from making claims about AI legal capabilities without competent and reliable evidence substantiating them, and must provide refunds to subscribers who signed up based on its legal-capability marketing.