AI Regulation Reference
28 laws tracked across the EU, US federal government, US states, UK, and international bodies — filtered and explained for small teams.
Real-world enforcement actions — regulators worldwide are using existing and new powers against AI misuse. These cases show what violations look like in practice and what the consequences are.
The newly established EU AI Office launched its first formal inquiry into GPAI model providers under the EU AI Act's general-purpose AI provisions. The probe examines whether frontier model providers are complying with the Act's transparency and documentation requirements for general-purpose AI models under Article 53, including copyright compliance summaries and technical documentation. This marks the first enforcement action under the EU AI Act and signals how the AI Office will interpret provider obligations.
Outcome: Ongoing investigation as of Q1 2026. No penalties imposed yet. The inquiry signals that the EU AI Office is actively monitoring GPAI model providers and is prepared to use its Article 101 powers (fines up to 3% of global turnover) for non-compliance with GPAI obligations.
The FTC's 'Operation AI Comply' brought simultaneous enforcement actions against five companies for deceptive AI claims. DoNotPay ($193K settlement) falsely claimed its AI was a 'robot lawyer'; Ascend Ecom charged consumers for AI-powered passive income businesses that did not deliver; Rytr ($50K settlement) sold a service capable of generating fake reviews at scale; NGL Labs collected children's data and used AI to send fake messages; Omni AI made false income claims about AI tools.
Outcome: Total of $2.5M+ in civil penalties and settlements across five cases. Orders prohibit deceptive marketing of AI capabilities and require clear disclosure of AI limitations.
DoNotPay marketed itself as 'the world's first robot lawyer,' claiming its AI could help consumers fight corporations, protect privacy, and handle legal matters as well as a human attorney. The FTC found these claims were not substantiated — the AI had not been tested against human lawyers, and the company lacked evidence that the AI could perform the legal tasks it claimed.
Outcome: $193,000 settlement. DoNotPay is prohibited from making claims about AI legal capabilities without competent and reliable evidence substantiating the claim, and must provide refunds to subscribers who signed up based on the AI's legal capability marketing.
Rite Aid deployed facial recognition AI in hundreds of stores to flag suspected shoplifters. The system disproportionately misidentified people of color, women, and younger individuals as threats — causing them to be wrongly accused, followed, and publicly embarrassed in stores. Rite Aid failed to ensure the AI system was accurate and did not take reasonable steps to prevent misidentification harm.
Outcome: FTC banned Rite Aid from using AI facial recognition in retail settings for 5 years. Company required to delete all facial images collected, develop a comprehensive AI governance program, and implement meaningful accuracy testing before using any AI surveillance tool.
Amazon retained children's voice recordings collected by Alexa indefinitely — even after parents requested deletion — in violation of the Children's Online Privacy Protection Act (COPPA). Amazon used the retained data to improve Alexa's AI models despite being told to delete it. In a separate violation involving Ring doorbell cameras, Amazon allowed employees and contractors to access private customer video footage.
Outcome: $25M civil penalty for the Alexa COPPA violations; $5.8M in disgorgement for the Ring privacy violations. Amazon was required to delete all children's data collected in violation of COPPA, prohibited from using that data for training AI, and required to implement a comprehensive children's data deletion program.
Italy's data protection authority, the Garante, temporarily banned ChatGPT from processing Italian users' data, citing GDPR violations: no lawful basis for mass collection of training data, no age verification to prevent minors from accessing the service, and failure to provide adequate transparency about data collection. OpenAI had 20 days to comply or face a permanent ban.
Outcome: ChatGPT was blocked for Italian users for approximately one month (March 31 – April 28, 2023). OpenAI resolved the ban by implementing an age verification mechanism, adding a GDPR privacy notice, and providing an opt-out mechanism for Italian users' data. The Garante later opened a separate formal investigation.
Clearview AI scraped billions of photos from the internet to build a facial recognition database sold to law enforcement. The UK ICO and French CNIL both found this violated data protection law: no lawful basis for collecting biometric data at scale, individuals had no knowledge their images were being used, and Clearview failed to respond adequately to data subject access requests.
Outcome: ICO fined Clearview £7.5M and ordered deletion of UK residents' data. CNIL fined Clearview €20M. Italy, Australia, Canada, and Greece also took enforcement action. Clearview was effectively banned from operating in Europe.
Unfamiliar terms?
Conformity assessment, GPAI model, high-risk AI — all defined in plain English.
Browse the glossary →