AI Policy Desk · Governance

Federal AI Preemption: How Small Teams Should Manage State Law Risk



Two major federal AI proposals landed in March 2026 within days of each other. The White House released a seven-pillar legislative framework explicitly calling on Congress to override state AI laws deemed to create "undue burdens." Senator Marsha Blackburn released a discussion draft of the TRUMP AMERICA AI Act, which would create a federal duty of care for chatbot developers and require annual third-party audits for high-risk AI systems.

Neither is law. Both matter.

For small teams currently navigating a patchwork of state obligations, the prospect of federal AI preemption sounds like relief. But the transition period — from now until any federal law is signed — introduces its own compliance risk. This article maps what is currently law, what is proposed, and how lean teams should position themselves through the uncertainty.

Summary

The US AI regulatory landscape in April 2026 is a live negotiation between federal ambition and state momentum. A dozen states have enacted AI obligations in the past two years; the White House wants Congress to override most of them; and the Senate is circulating its own framework that would add new federal duties even while preempting some state rules. For small teams, the practical answer is the same regardless of how the federal debate resolves: build governance practices that satisfy the most demanding current obligations, document everything, and stay close to the legislative calendar.

What Is Actually Law Right Now

State laws in effect:

  - Colorado AI Act (SB 24-205) — the most operationally detailed state AI law currently in force

If you are operating under EU jurisdiction, the EU AI Act Digital Omnibus deadline extension introduces its own compliance timeline considerations that run in parallel with the US state law picture.

Federal rules in effect:

  - FTC Act enforcement against unfair or deceptive AI practices
  - EEOC guidance on AI in employment decisions
  - SEC rules, where applicable

The White House National AI Policy Framework

Released March 20, 2026, the White House framework is a set of legislative recommendations to Congress — not an executive order, not a regulation. It organizes around seven pillars:

  1. Child safety
  2. Community protection (fraud, deepfakes, election interference)
  3. Intellectual property protection
  4. Free speech (preventing suppression of political viewpoints by AI systems)
  5. Innovation and competitiveness
  6. Workforce readiness
  7. Federal preemption of state AI laws that impose "undue burdens" on interstate commerce

The preemption pillar is the headline item. The Commerce Department had been tasked with identifying which state laws should be targeted for challenge, a deadline it missed on March 11. The framework explicitly cautions against "vague liability standards" that create unpredictable compliance exposure for developers.

The framework does not define what makes a state law an "undue burden." That determination would fall to Congress in any resulting legislation, creating significant room for negotiation.

The TRUMP AMERICA AI Act Discussion Draft

Senator Blackburn's discussion draft takes a different approach from the White House framework. Rather than broad preemption, it proposes targeted new federal obligations alongside narrower preemption of specific state provisions. Key elements:

New federal duties:

  - A federal duty of care for chatbot developers
  - Annual third-party audits for high-risk AI systems

Preemption scope:

What the draft does not address:

Governance Goals

For a small team navigating this environment, the governance goal is not to optimize for any single regulatory scenario — it is to build practices that are durable across scenarios. A starting point is an AI governance policy template that documents approved tools, permitted data uses, and human oversight requirements. Both the state law regime and the federal proposals share common requirements:

  - Risk assessments for high-risk or consequential AI systems
  - Documentation of how each system is selected, deployed, and monitored
  - Meaningful human review of consequential decisions, with real authority to override
  - Vendor due diligence and contractual protections for tools you did not build
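A governance policy template like the one described above can live as a simple machine-readable document. The sketch below is illustrative only: the tool names, data categories, and rules are hypothetical placeholders, not recommendations, and the structure is one plausible way to encode approved tools, permitted data uses, and oversight requirements.

```python
# Minimal sketch of a governance policy as structured data.
# All tool names, data categories, and rules are illustrative placeholders.
policy = {
    "approved_tools": {
        "support-chatbot": {
            "permitted_data": ["public docs", "ticket text"],
            "prohibited_data": ["SSNs", "health records"],
        },
    },
    "human_oversight": {
        # "Meaningful review" means a named reviewer with authority to override
        "consequential_decisions": {
            "review_required": True,
            "reviewer_can_override": True,
        },
    },
}

def requires_review(policy: dict, decision_class: str) -> bool:
    """Look up whether a class of AI decision needs human review."""
    rule = policy["human_oversight"].get(decision_class, {})
    return rule.get("review_required", False)

print(requires_review(policy, "consequential_decisions"))  # → True
print(requires_review(policy, "administrative"))           # → False
```

Keeping the policy as data rather than prose makes it easy to audit against your actual tool inventory and to port across regulatory regimes if federal preemption reshapes the rules.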

Risks to Watch

Compliance whiplash: If federal preemption passes quickly, teams that built elaborate state-specific compliance structures may need to rebuild around federal standards. Design for portability.

Gap between preemption and new federal standards: If Congress passes a preemption law before establishing clear federal standards, there will be a period in which state obligations are gone but federal ones are not yet in effect. That gap creates both uncertainty and reduced accountability, and betting that it will persist is not a sound compliance strategy for small teams.

Third-party audit requirements: If the TRUMP AMERICA AI Act's annual audit requirement becomes law, the cost and logistics of qualifying third-party auditors would be a real operational constraint for small teams. Begin tracking what "qualified auditor" frameworks look like now (IEEE CertifAIed, ISO 42001 certification, Big Four AI audit practices) so you are not starting from zero.

Copyright liability: The bill's treatment of training data copyright would materially change the risk profile of any AI tool your team uses internally or builds on. If your vendor's model was trained on unlicensed data and the bill passes, the liability exposure chain becomes clearer — and potentially reaches deployers.

Controls: What to Actually Do

This week:

This quarter:

Ongoing:

Checklist (Copy/Paste)

Implementation Steps

  1. Day 1-3: Pull every AI tool used across the organization. Classify each by the type of decision it supports — administrative, customer-facing, operational, consequential.
  2. Week 1: Apply the Colorado AI Act high-risk checklist to each consequential AI system. Flag any that touch covered domains (employment, education, housing, financial services, healthcare).
  3. Week 2: For flagged systems, draft a risk assessment using NIST AI RMF's MAP function as a template. Document what could go wrong, how likely it is, and what you are doing about it.
  4. Week 3: Verify vendor contracts include the documentation and indemnification terms you need. Escalate gaps to legal or procurement.
  5. Month 2: Implement human review mechanisms for any high-risk AI decisions that lack them. Define what "meaningful human review" means operationally — it requires real authority to override, not rubber-stamping.
  6. Ongoing: Subscribe to a reliable AI regulatory tracker (IAPP, Future of Privacy Forum, state attorney general newsletters) and review weekly.
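Steps 1 and 2 above can be sketched as a small inventory script. This is a hypothetical illustration, assuming a hand-maintained tool list; the tool names are invented, and the covered-domain set is the one the article lists for the Colorado AI Act (employment, education, housing, financial services, healthcare).

```python
# Sketch of steps 1-2: inventory AI tools, then flag those that support
# consequential decisions in Colorado AI Act covered domains.
# Tool names and classifications below are illustrative placeholders.
from dataclasses import dataclass, field

# Covered domains per the Colorado AI Act high-risk checklist
COVERED_DOMAINS = {"employment", "education", "housing",
                   "financial services", "healthcare"}

@dataclass
class AITool:
    name: str
    # administrative | customer-facing | operational | consequential
    decision_type: str
    domains: set = field(default_factory=set)

    def is_high_risk(self) -> bool:
        # Consequential decisions touching a covered domain get flagged
        # for the full risk-assessment workflow (steps 3-5)
        return (self.decision_type == "consequential"
                and bool(self.domains & COVERED_DOMAINS))

inventory = [
    AITool("resume-screener", "consequential", {"employment"}),
    AITool("support-chatbot", "customer-facing"),
    AITool("invoice-ocr", "administrative"),
]

flagged = [t.name for t in inventory if t.is_high_risk()]
print(flagged)  # → ['resume-screener']
```

The flagged list is the input to step 3: each flagged system gets a documented risk assessment before anything else.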

Frequently Asked Questions

Q: Should we comply with the Colorado AI Act if we are a small startup? A: If you deploy AI systems that make or materially contribute to consequential decisions affecting Colorado residents, the Colorado AI Act applies regardless of your company size. The statute does not include a small business exemption. That said, enforcement risk scales with materiality and impact — a documented, good-faith compliance effort substantially reduces regulatory exposure.

Q: What counts as a "high-risk AI system" under the TRUMP AMERICA AI Act? A: The discussion draft does not include a final definition. Most current proposals follow the Colorado/EU approach: AI systems making or materially influencing consequential decisions in domains including employment, education, housing, financial services, healthcare, and access to essential services.

Q: Can we be liable for an AI tool we did not build? A: Yes. As a deployer, you take on obligations under both state laws and the proposed federal framework. Using a vendor's AI tool does not transfer liability; it means you share it. Vendor due diligence, contractual protections, and monitoring the AI in your deployment context are all deployer responsibilities.

Q: How quickly could a federal AI law actually pass? A: Historical precedent suggests comprehensive technology regulation takes 2-4 years from first major legislative draft to enactment. The TRUMP AMERICA AI Act is a discussion draft. A floor vote before the 2026 midterms would be unusually fast. Realistic estimate for a signed federal AI law: 2027 at the earliest, more likely 2028.

Q: What if we operate only in states with no AI law? A: You still face federal obligations (FTC Act, EEOC, SEC if applicable), contractual risk with customers who are in regulated states, and growing reputational expectations. The safest approach is to build governance practices that would satisfy the Colorado AI Act as a baseline — it is the most operationally detailed state law currently in force.

References

  1. White House National Policy Framework for Artificial Intelligence (Holland & Knight analysis, March 2026): https://www.hklaw.com/en/insights/publications/2026/03/white-house-releases-a-national-policy-framework-for-artificial
  2. TRUMP AMERICA AI Act discussion draft analysis (Latham & Watkins): https://www.lw.com/en/insights/trump-administration-takes-major-steps-toward-comprehensive-federal-ai-regulation
  3. How the Federal AI Regulation Push Could Impact Your Business (KJK Law, April 1, 2026): https://kjk.com/2026/04/01/how-the-federal-ai-regulation-push-could-impact-your-business/
  4. NIST AI Risk Management Framework 1.0: https://www.nist.gov/system/files/documents/2023/01/26/AI%20RMF%201.0.pdf
  5. Colorado AI Act — SB 24-205 full text: https://leg.colorado.gov/bills/sb24-205