# Federal AI Preemption vs State Laws 2026 — What Actually Applies to Your Business
Short answer: state laws apply. As of May 2026, no federal statute preempts state AI regulations. Build your compliance program on state law requirements now — do not wait for federal preemption that has been rejected twice by Congress.
## Where the preemption debate stands
| Event | Date | Effect |
|---|---|---|
| Trump executive order on AI policy | December 11, 2025 | Directed development of federal framework — no legal preemption |
| White House National Policy Framework | March 20, 2026 | Legislative recommendation to Congress — not law |
| One Big Beautiful Bill Act — preemption rejected | 2026 | Congress declined to include AI preemption |
| NDAA — preemption rejected | 2026 | Second Congressional rejection |
| Status as of May 2026 | — | No federal AI preemption law enacted |
The White House framework explicitly calls for preemption of state AI laws that impose "undue burdens" on AI development. But frameworks are recommendations — only Congress can preempt state law, and Congress has declined to do so twice.
## State AI laws currently in force (May 2026)
| State | Law | Status | Key obligation |
|---|---|---|---|
| Texas | TRAIGA | In force (Jan 1, 2026) | Developer/deployer docs, impact assessments, prohibited uses |
| Illinois | AI Video Interview Act | In force (since 2020) | Disclose AI in hiring video interviews; notify candidates |
| Maryland | Algorithmic pricing ban | Signed 2026 | Prohibits certain AI pricing coordination practices |
| Washington | AI likeness law | In force June 10, 2026 | Consent required for AI voice/likeness of real people |
| Connecticut | SB 5 (AIDA) | Pending governor signature | High-risk AI deployer obligations, bias audits |
| Colorado | SB 205 | Enforcement suspended | Pending SB 189 revisions — check final outcome |
| California | AI Transparency Act | In force | Disclosure requirements for AI-generated political content |
## What the White House framework would do (if enacted)
The March 20, 2026 framework outlined six broad objectives Congress should address:
- Federal preemption of state AI laws deemed unduly burdensome
- Unified liability standard — avoid fragmented state tort liability for AI
- Children's protection — preserve state authority over child safety regardless of preemption
- Consumer fraud — preserve state enforcement of existing consumer protection laws
- Procurement — states retain control over their own AI procurement rules
- National security — federal carve-out from any state oversight
Even if enacted, this framework explicitly preserves significant state power. Laws addressing child safety, consumer fraud, anti-discrimination, and government procurement would survive preemption in most scenarios.
## Why preemption is unlikely to protect you
Reason 1 — Congress has rejected it twice. Bipartisan skepticism of "sweeping federal overrides" of state regulation has blocked preemption in the NDAA and reconciliation bill. There is no legislative vehicle currently moving that includes broad AI preemption.
Reason 2 — Partial preemption is the likely outcome. If Congress does act, the final bill will almost certainly carve out children's safety, hiring discrimination, consumer protection, and healthcare — the areas where most state AI laws focus. You will not escape state obligations on these topics regardless.
Reason 3 — Timing. Legislative cycles move slowly, and state laws are in force today. A compliance gap that creates regulatory exposure now is not fixed by a federal law that might pass in 2027.
## What to build now
A compliance program built on state law requirements will survive any federal preemption scenario:
- Inventory your AI systems against each applicable state — where do your users live?
- Map obligations by state: Texas TRAIGA (developers + deployers), Connecticut AIDA (high-risk AI deployers), Washington (likeness/voice)
- Write a single AI acceptable use policy that covers the union of all applicable state rules — this is usually achievable with one document
- Run impact assessments for any AI touching employment, credit, housing, or healthcare — most of the high-risk state AI laws require this in some form
- Add disclosures to AI-facing interfaces — nearly every state AI law includes some form of disclosure requirement
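The inventory-and-union approach above can be sketched as a simple data structure. This is an illustrative sketch only, not legal advice: the obligation category names and the state-to-obligation mapping below are simplified assumptions for demonstration, not a complete statement of any statute.

```python
# Illustrative only: obligation categories are hypothetical labels,
# and each state's entry is a simplified stand-in for the real statute.
STATE_OBLIGATIONS: dict[str, set[str]] = {
    "TX": {"developer_docs", "impact_assessment", "prohibited_use_review"},
    "IL": {"hiring_disclosure", "candidate_notice"},
    "WA": {"likeness_consent"},
    "CT": {"impact_assessment", "bias_audit", "deployer_docs"},
    "CA": {"content_disclosure"},
}

def required_obligations(user_states: set[str]) -> set[str]:
    """Union of obligations across every state where your users live."""
    obligations: set[str] = set()
    for state in user_states:
        obligations |= STATE_OBLIGATIONS.get(state, set())
    return obligations

# A deployer with users in Texas and Illinois inherits both states' duties.
print(sorted(required_obligations({"TX", "IL"})))
```

The point of the union operation is the one made in the text: a single acceptable use policy that covers `required_obligations(all_user_states)` satisfies every state at once, because each state's obligations are a subset of the union.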
## The patchwork is real — but manageable
Seventeen states have passed AI legislation, with more expected by end of 2026. The laws share a common core:
- Prohibit manipulation, deepfakes without consent, CSAM
- Require disclosure when users interact with AI
- Require impact assessments for high-stakes AI
- Give state attorneys general enforcement authority, typically with an opportunity to cure before penalties
A single compliance program that covers this core satisfies the majority of current state requirements. Build that once, then layer state-specific additions (Texas's NIST safe harbor, Connecticut's 60-day risk assessment timeline, etc.) on top.
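The "core plus state-specific layers" idea can be expressed as a set difference: build the common core once, then compute what each state adds on top. As before, this is a hypothetical sketch; the category labels (e.g. `nist_safe_harbor_docs`, `risk_assessment_60_day`) are illustrative assumptions, not statutory terms.

```python
# Hypothetical model of "common core + per-state extras". Labels are
# illustrative stand-ins for the real statutory requirements.
COMMON_CORE: set[str] = {"disclosure", "impact_assessment", "deepfake_consent"}

STATE_REQUIREMENTS: dict[str, set[str]] = {
    "TX": COMMON_CORE | {"nist_safe_harbor_docs"},
    "CT": COMMON_CORE | {"risk_assessment_60_day"},
    "WA": {"disclosure", "likeness_consent"},
}

def delta_beyond_core(state: str) -> set[str]:
    """What a state requires beyond the shared common-core program."""
    return STATE_REQUIREMENTS.get(state, set()) - COMMON_CORE
```

Under this model, maintaining the compliance program means maintaining one core document plus a short per-state delta — which is why the patchwork, while real, stays manageable.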
## Related reading
- Texas TRAIGA compliance checklist 2026
- Connecticut AI law 2026
- Maryland AI pricing law 2026
- Washington AI likeness law June 2026
- Colorado AI Act SB 189 rewrite 2026
## References
- White House — National Policy Framework for Artificial Intelligence (March 20, 2026)
- Morgan Lewis — White House AI Framework Puts Federal Preemption at the Center of the Debate
- Roll Call — White House AI framework calls for preemption of state laws
- Cooley — White House Releases AI Regulatory Blueprint
