AI Regulation Reference
28 laws tracked across the EU, US federal government, US states, UK, and international bodies — filtered and explained for small teams.
Showing 8 of 28 laws
A voluntary framework from the National Institute of Standards and Technology (NIST) that helps organizations manage AI risks. Organized around four functions: Govern, Map, Measure, and Manage. Widely adopted as the de facto AI governance standard in the US and referenced in multiple state AI laws.
Effective: January 26, 2023
Revoked the Biden administration's AI Executive Order (EO 14110), directed federal agencies to prioritize AI development over safety regulation in the name of US competitiveness, and called for a new national AI action plan focused on removing regulatory barriers.
Effective: January 23, 2025
A non-binding policy paper from the White House outlining legislative recommendations for Congress on federal AI governance. Covers AI competitiveness, safety standards, workforce displacement, and international cooperation. Does not create legal obligations.
The Federal Trade Commission applies existing consumer protection law — the FTC Act's prohibition on unfair or deceptive acts — to AI products and services. Operation AI Comply (September 2024) brought simultaneous actions against five companies for deceptive AI product claims, fake AI-generated reviews, and AI-enabled fraud. The FTC has made clear that 'AI washing' — overstating AI capabilities — is an enforcement priority.
Effective: January 1, 2023
The Equal Employment Opportunity Commission (EEOC) has clarified that Title VII of the Civil Rights Act applies to AI-assisted hiring, promotion, and performance management tools. The 2023 technical assistance document explains that employers can face disparate impact liability if an AI tool screens out protected groups at higher rates than others — even if the employer did not design the tool and did not intend to discriminate.
Effective: May 18, 2023
The Consumer Financial Protection Bureau (CFPB) has issued guidance clarifying that lenders must provide specific, accurate reasons when taking adverse action (denial, unfavorable terms) in credit decisions — even when the decision was made by an AI or machine learning model. 'The model said no' is not a legally sufficient adverse action notice under ECOA and Regulation B.
Effective: May 26, 2022
The FDA regulates AI and machine learning software that meets the definition of a medical device (Software as a Medical Device, or SaMD). The FDA's AI/ML action plan and associated guidance documents establish requirements for pre-market review, post-market monitoring, and predetermined change control plans — a framework that allows AI models to learn and adapt within predefined boundaries without requiring a new 510(k) each time.
Effective: January 13, 2021
The Securities and Exchange Commission has signaled through enforcement actions and staff guidance that public companies must accurately disclose material AI risks and AI capabilities in their filings. 'AI washing' — overstating AI capabilities in investor materials or using AI-related language to boost stock price without substance — is treated as a potential securities fraud issue. The SEC's Division of Corporation Finance has issued guidance on how existing disclosure requirements apply to AI.
Effective: January 1, 2024