AI Regulation Reference
28 laws tracked across the EU, US federal government, US states, UK, and international bodies — filtered and explained for small teams.
The EU AI Act is the world's first comprehensive AI law, classifying AI systems by risk level and imposing obligations that scale with risk. High-risk AI systems must undergo conformity assessment, maintain technical documentation, and implement human oversight. Full enforcement for high-risk systems begins in August 2026.
Effective: August 1, 2024
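For teams inventorying their systems against the Act, the tiered structure can be expressed as a first-pass triage. The sketch below is illustrative only: the tier names follow the Act, but the keyword categories and the `triage` helper are our assumptions, not the Act's legal tests (those live in Article 5 and Annex III).

```python
from enum import Enum

class AIActTier(Enum):
    """Risk tiers under the EU AI Act; obligations scale with the tier."""
    PROHIBITED = "prohibited"       # banned practices (Article 5)
    HIGH_RISK = "high-risk"         # Annex III use cases
    LIMITED_RISK = "limited-risk"   # transparency duties (e.g. chatbots)
    MINIMAL_RISK = "minimal-risk"   # no new obligations

# Assumed, simplified keyword set; real scoping needs legal analysis.
HIGH_RISK_USES = {"hiring", "credit scoring", "education admissions",
                  "essential public services"}

def triage(use_case: str, interacts_with_people: bool) -> AIActTier:
    """First-pass triage of an AI inventory entry, not a legal opinion."""
    if use_case in HIGH_RISK_USES:
        return AIActTier.HIGH_RISK
    if interacts_with_people:
        return AIActTier.LIMITED_RISK
    return AIActTier.MINIMAL_RISK

print(triage("hiring", True).value)  # high-risk
```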
The General Data Protection Regulation (GDPR) is the EU's foundational data privacy law, governing how organizations collect, process, and store the personal data of EU residents. Article 22 restricts solely automated decision-making that produces legal or similarly significant effects on individuals, with direct implications for AI-driven decisions.
Effective: May 25, 2018
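Article 22's restriction is often implemented in practice as a human-review gate on consequential automated decisions. A minimal sketch, assuming invented field names and a simplified notion of "significant effect"; real compliance also covers the Article 22(2) exceptions such as explicit consent or contractual necessity.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str              # e.g. "loan_denied"
    significant_effect: bool  # legal or similarly significant effect

def finalize(decision: AutomatedDecision, human_reviewed: bool) -> AutomatedDecision:
    """Block solely automated, significant decisions until a human with
    authority to change the outcome has meaningfully reviewed them."""
    if decision.significant_effect and not human_reviewed:
        raise RuntimeError(f"{decision.subject_id}: route to human review first")
    return decision

finalize(AutomatedDecision("appl-42", "loan_denied", True), human_reviewed=True)
```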
The revised EU Product Liability Directive extends strict liability to software and AI, allowing consumers to seek compensation when defective AI systems cause harm. It removes the previous cap on damages and introduces a rebuttable presumption of defectiveness in certain cases.
Effective: December 9, 2026
The NIST AI Risk Management Framework is a voluntary framework from the National Institute of Standards and Technology that helps organizations manage AI risks. It is organized around four functions: Govern, Map, Measure, and Manage. Widely adopted as the de facto AI governance standard in the US, it is referenced in multiple state AI laws.
Effective: January 26, 2023
Executive Order 14179 revoked the Biden administration's AI Executive Order (EO 14110) and directed federal agencies to prioritize AI development over safety regulation, emphasizing US competitiveness. It also ordered the development of a new national AI action plan focused on removing regulatory barriers.
Effective: January 23, 2025
A non-binding policy paper from the White House outlining legislative recommendations for Congress on federal AI governance. Covers AI competitiveness, safety standards, workforce displacement, and international cooperation. Does not create legal obligations.
The Colorado AI Act (SB 24-205) is the first comprehensive US state AI law. It requires developers and deployers of 'high-risk' AI systems used to make consequential decisions about Colorado consumers to implement risk management programs, complete impact assessments, and disclose AI use. Covered decisions include those affecting employment, credit, education, healthcare, housing, and insurance.
Effective: February 1, 2026
California AB 2013 requires developers of AI systems trained on datasets of 1 million or more records and offered to California users to publish documentation of their training datasets, including data sources, categories of data, and the steps taken to filter harmful content.
Effective: January 1, 2026
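A documentation obligation like this is easiest to meet when the facts are captured as a structured record at training time. A hedged sketch: the field names below are ours, chosen to mirror the categories in the summary above, not the statute's exact wording.

```python
from dataclasses import dataclass

@dataclass
class TrainingDataDisclosure:
    """Illustrative record of the facts a developer would publish."""
    dataset_name: str
    sources: list[str]               # where the data came from
    data_categories: list[str]       # e.g. text, images, code
    contains_personal_info: bool
    harmful_content_filtering: str   # steps taken, in plain language

disclosure = TrainingDataDisclosure(
    dataset_name="example-corpus-v1",
    sources=["licensed news archive", "public web crawl"],
    data_categories=["text"],
    contains_personal_info=True,
    harmful_content_filtering="keyword blocklists plus classifier filtering",
)
```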
California SB 53, the Transparency in Frontier Artificial Intelligence Act, requires developers of large AI models (those trained using more than 10²⁶ FLOPs of compute) to implement safety testing, whistleblower protections for employees who report safety concerns, and incident reporting for safety-critical failures. It is narrower in scope than the vetoed SB 1047.
Effective: January 1, 2026
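To see whether a model plausibly crosses the 10²⁶ threshold, teams often use the common 6·N·D rule of thumb (roughly 6 FLOPs per parameter per training token). The heuristic is a community estimate, not part of the statute; a sketch:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D heuristic:
    ~6 FLOPs per parameter per training token. An estimate only."""
    return 6 * params * tokens

THRESHOLD = 1e26  # the FLOP threshold referenced above

# Example: a 1-trillion-parameter model trained on 20 trillion tokens.
estimate = training_flops(params=1e12, tokens=20e12)  # 1.2e26
print(estimate >= THRESHOLD)  # True -> potentially in scope
```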
California SB 1047 would have required developers of large AI models (those with over $100 million in training costs) to implement extensive safety testing, 'kill switch' capabilities, and third-party audits. Governor Newsom vetoed the bill in September 2024 on the grounds that it was too broad and could inhibit AI development.
New York City Local Law 144 requires employers and employment agencies in New York City to conduct independent bias audits of automated employment decision tools used in hiring and promotion decisions, publish a summary of the audit results, and notify candidates that such tools are being used.
Effective: July 5, 2023
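The published audits center on impact ratios: each category's selection rate divided by the most-selected category's rate. A minimal sketch of that arithmetic, with invented sample numbers:

```python
def impact_ratios(selected: dict[str, int], applied: dict[str, int]) -> dict[str, float]:
    """Selection rate per category divided by the highest category's
    selection rate, the core metric these bias audits report."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Invented sample data for illustration.
print(impact_ratios(selected={"group_a": 50, "group_b": 30},
                    applied={"group_a": 100, "group_b": 100}))
# {'group_a': 1.0, 'group_b': 0.6}
```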
The Illinois Artificial Intelligence Video Interview Act requires employers that use AI to analyze video interviews to notify applicants, explain how the AI works and what characteristics it evaluates, obtain consent, limit who can access the videos, and destroy the videos within 30 days of an applicant's request.
Effective: January 1, 2020
Signed into law in June 2025, the NY RAISE Act requires developers of advanced AI systems to conduct safety evaluations, implement safeguards against critical harms, establish whistleblower protections, and report to the state on safety practices. New York was the first US state to enact frontier AI safety legislation.
The Texas Responsible AI Governance Act is pending Texas legislation modeled on the Colorado AI Act. It would require developers and deployers of high-risk AI systems used for consequential decisions about Texas consumers to implement risk management programs and transparency requirements. If passed, the expected effective date is mid-2026.
Virginia's proposed High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) would require impact assessments and transparency measures for high-risk AI systems used in consequential decisions affecting Virginia consumers. It is similar in structure to the Colorado AI Act.
The UK has made a deliberate choice not to pass comprehensive AI legislation, instead directing existing regulators (the ICO, CMA, FCA, and others) to apply their existing powers to AI within their sectors. Set out in the March 2023 white paper 'A pro-innovation approach to AI regulation', this approach prioritizes flexibility over prescriptive rules, in explicit contrast to the EU AI Act.
Effective: March 29, 2023
China's Interim Measures for the Management of Generative Artificial Intelligence Services govern generative AI services provided to users in China. The measures require AI-generated content to be labeled, prohibit content that violates socialist core values or undermines state authority, and mandate security assessments before launch. Providers must verify user identities and maintain logs of generated content.
Effective: August 15, 2023
ISO/IEC 42001 is the international standard for AI management systems, published jointly by ISO and IEC. It provides a framework for organizations to establish, implement, maintain, and continually improve their AI governance practices. Certification to ISO/IEC 42001 is emerging as a signal of AI governance maturity in enterprise procurement.
Effective: December 18, 2023
The Artificial Intelligence and Data Act (AIDA) is proposed Canadian legislation that would regulate high-impact AI systems, requiring risk assessments, mitigation measures, and incident reporting. AIDA was part of Bill C-27, Canada's broader digital governance reform package, which died on the order paper when Parliament was prorogued in January 2025; its future under the new government is uncertain.
The Federal Trade Commission applies existing consumer protection law — the FTC Act's prohibition on unfair or deceptive acts — to AI products and services. Operation AI Comply (September 2024) brought simultaneous actions against five companies for deceptive AI product claims, fake AI-generated reviews, and AI-enabled fraud. The FTC has made clear that 'AI washing' — overstating AI capabilities — is an enforcement priority.
Effective: January 1, 2023
The Equal Employment Opportunity Commission (EEOC) has clarified that Title VII of the Civil Rights Act applies to AI-assisted hiring, promotion, and performance management tools. The 2023 technical assistance document explains that employers can face disparate impact liability if an AI tool screens out protected groups at higher rates than others — even if the employer did not design the tool and did not intend to discriminate.
Effective: May 18, 2023
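The EEOC document points to the long-standing four-fifths rule as a rough screen for disparate impact. A minimal sketch of that check, with the caveat that the 80% ratio is a rule of thumb, not a legal conclusion:

```python
def four_fifths_flag(rate_group: float, rate_highest: float) -> bool:
    """Flag possible disparate impact when one group's selection rate
    falls below 80% of the highest group's rate (EEOC rule of thumb)."""
    return rate_group / rate_highest < 0.8

# e.g. an AI screener passes 30% of one group vs 60% of another.
print(four_fifths_flag(0.30, 0.60))  # True -> scrutinize the tool
```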
The Consumer Financial Protection Bureau (CFPB) has issued guidance (Circular 2022-03) clarifying that lenders must provide specific, accurate reasons when taking adverse action (denial, unfavorable terms) in credit decisions, even when the decision was made by an AI or machine learning model. 'The model said no' is not a legally sufficient adverse action notice under ECOA and Regulation B.
Effective: May 26, 2022
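In practice this means mapping the model's actual drivers of a denial to specific, accurate reasons. The sketch below is one illustrative approach, assuming hypothetical feature names, reason wording, and a pre-computed attribution score per feature; Regulation B prescribes the outcome (specific reasons), not this mechanism.

```python
# Hypothetical feature-to-reason mapping; wording and scheme are ours.
REASONS = {
    "debt_to_income": "Income insufficient for amount of credit requested",
    "delinquencies": "Delinquent past or present credit obligations",
    "history_length": "Length of credit history",
}

def adverse_action_reasons(contributions: dict[str, float], top_n: int = 2) -> list[str]:
    """Return specific reasons for the features that contributed most
    to the denial. 'The model said no' is not a sufficient notice."""
    ranked = sorted(contributions, key=contributions.get, reverse=True)
    return [REASONS[f] for f in ranked[:top_n] if f in REASONS]

print(adverse_action_reasons({"debt_to_income": 0.42,
                              "delinquencies": 0.31,
                              "history_length": 0.05}))
```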
The FDA regulates AI and machine learning software that meets the definition of a medical device (Software as a Medical Device, or SaMD). The FDA's AI/ML action plan and associated guidance documents establish requirements for pre-market review, post-market monitoring, and predetermined change control plans — a framework that allows AI models to learn and adapt within predefined boundaries without requiring a new 510(k) each time.
Effective: January 13, 2021
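Operationally, the predetermined change control idea reduces to pre-agreed boundaries that every model update must stay inside. A hedged sketch with invented metrics and thresholds; a real change control plan is negotiated with the FDA and is far more detailed:

```python
# Invented pre-agreed performance floors for illustration only.
PCCP_FLOORS = {"sensitivity": 0.92, "specificity": 0.90}

def update_within_pccp(validation_metrics: dict[str, float]) -> bool:
    """Permit an adaptive model update only if every pre-specified
    metric stays at or above its agreed floor; otherwise the change
    falls outside the plan and needs fresh regulatory review."""
    return all(validation_metrics.get(m, 0.0) >= floor
               for m, floor in PCCP_FLOORS.items())

print(update_within_pccp({"sensitivity": 0.94, "specificity": 0.91}))  # True
```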
The Securities and Exchange Commission has signaled through enforcement actions and staff guidance that public companies must accurately disclose material AI risks and AI capabilities in their filings. 'AI washing' — overstating AI capabilities in investor materials or using AI-related language to boost stock price without substance — is treated as a potential securities fraud issue. The SEC's Division of Corporation Finance has issued guidance on how existing disclosure requirements apply to AI.
Effective: January 1, 2024
Voluntary commitments by leading AI companies (OpenAI, Anthropic, Google, Microsoft, Meta, and others) to the EU and US governments, covering safety testing before model release, transparency about capabilities, red-teaming, and sharing safety information with governments. Not legally binding.
Effective: September 28, 2024
Canada's federal private-sector privacy law, governing how organizations collect, use, and disclose personal information in the course of commercial activity. The Office of the Privacy Commissioner has issued guidance on how PIPEDA applies to AI, including automated decision-making and AI training data. Quebec's Law 25 (in force since 2023) is significantly stricter and acts as the provincial overlay for Quebec residents.
Effective: January 1, 2001
Australia's Department of Industry, Science and Resources published a Voluntary AI Safety Standard in October 2024, outlining 10 guardrails for organizations deploying AI in high-risk contexts. The guardrails cover governance, accountability, transparency, fairness, and safety. While voluntary, the standard is referenced in Australian government AI procurement requirements and signals the direction of future mandatory regulation.
Effective: October 1, 2024
Published by Singapore's Personal Data Protection Commission (PDPC), the Model AI Governance Framework provides detailed, practical guidance for organizations deploying AI. Organized around four pillars: internal governance structures, AI decision-making types, operations management, and stakeholder interaction. Singapore's FEAT principles (Fairness, Ethics, Accountability, Transparency) underpin financial services AI regulation from the Monetary Authority of Singapore.
Effective: January 21, 2020
Unfamiliar terms?
Conformity assessment, GPAI model, high-risk AI — all defined in plain English.
Browse the glossary →