Two major federal AI proposals landed in March 2026 within days of each other. The White House released a seven-pillar legislative framework explicitly calling on Congress to override state AI laws deemed to create "undue burdens." Senator Marsha Blackburn released a discussion draft of the TRUMP AMERICA AI Act, which would create a federal duty of care for chatbot developers and require annual third-party audits for high-risk AI systems.
Neither is law. Both matter.
For small teams currently navigating a patchwork of state obligations, the prospect of federal AI preemption sounds like relief. But the transition period — from now until any federal law is signed — introduces its own compliance risk. This article maps what is currently law, what is proposed, and how lean teams should position themselves through the uncertainty.
Key Takeaways
- No federal AI law has passed. State obligations — Colorado, California, Texas, Connecticut, and others — remain fully in force today.
- The White House framework and TRUMP AMERICA AI Act are proposals, not law. Do not pause state compliance work based on them.
- The TRUMP AMERICA AI Act proposes third-party bias audits for high-risk AI systems and a chatbot developer duty of care — requirements that would be new federal obligations, not a relaxation.
- Build your compliance program around durable practices: risk assessment, human oversight, documentation, and incident response. These appear in every proposal and are future-proof.
- Monitor the legislative calendar: any floor vote before Congress's August recess would be unusually fast for a bill of this complexity.
Summary
The US AI regulatory landscape in April 2026 is a live negotiation between federal ambition and state momentum. A dozen states have enacted AI obligations in the past two years; the White House wants Congress to override most of them; and the Senate is circulating its own framework that would add new federal duties even while preempting some state rules. For small teams, the practical answer is the same regardless of how the federal debate resolves: build governance practices that satisfy the most demanding current obligations, document everything, and stay close to the legislative calendar.
What Is Actually Law Right Now
State laws in effect:
- Colorado AI Act (SB 24-205) — effective February 1, 2026. Applies to developers and deployers of high-risk AI systems affecting Colorado residents. Requires risk assessments, impact disclosures, human appeal mechanisms, and annual conformance statements.
- California training data transparency law (AB 2013) — effective January 1, 2026. Requires developers of generative AI systems to publish documentation summarizing the datasets used to train them. The separate California AI Transparency Act (SB 942) adds content provenance and disclosure obligations for large generative AI providers.
- Texas Responsible AI Governance Act — effective January 1, 2026. Requires developers and deployers of high-risk AI to conduct impact assessments and provide transparency disclosures.
- Connecticut SB 2 — requires notice and human override mechanisms for AI decisions affecting housing, employment, credit, and healthcare.
- New York RAISE Act — targets frontier AI developers (over $500M revenue) with 72-hour incident reporting, safety protocol publication, and registration obligations.
If you are operating under EU jurisdiction, the EU AI Act Digital Omnibus deadline extension introduces its own compliance timeline considerations that run in parallel with the US state law picture.
Federal rules in effect:
- SEC FY2026 examination priorities — not a regulation, but active examination guidance: examiners expect financial services firms to document AI governance, supervisory procedures, and the basis for AI-related investment recommendations.
- EEOC AI guidance — existing Title VII obligations apply to AI-assisted hiring tools; employers cannot shelter behind vendor use.
- FTC Act Section 5 — unfair or deceptive AI practices are an active enforcement risk; the FTC has brought cases against AI companies using misleading capability claims.
The White House National AI Policy Framework
Released March 20, 2026, the White House framework is a set of legislative recommendations to Congress — not an executive order, not a regulation. It organizes around seven pillars:
- Child safety
- Community protection (fraud, deepfakes, election interference)
- Intellectual property protection
- Free speech (preventing suppression of political viewpoints by AI systems)
- Innovation and competitiveness
- Workforce readiness
- Federal preemption of state AI laws that impose "undue burdens" on interstate commerce
The preemption pillar is the headline item. The Commerce Department had been tasked, under an earlier directive, with identifying which state laws should be targeted for challenge — a deadline it missed on March 11, before the framework's release. The framework explicitly cautions against "vague liability standards" that create unpredictable compliance exposure for developers.
The framework does not define what makes a state law an "undue burden." That determination would fall to Congress in any resulting legislation, creating significant room for negotiation.
The TRUMP AMERICA AI Act Discussion Draft
Senator Blackburn's discussion draft takes a different approach from the White House framework. Rather than broad preemption, it proposes targeted new federal obligations alongside narrower preemption of specific state provisions. Key elements:
New federal duties:
- Duty of care for chatbot developers — developers of AI chatbots would owe a duty of care to users, particularly around deceptive practices, addiction by design, and harmful outputs to vulnerable users
- Annual third-party bias audits — high-risk AI systems would require annual audits by qualified independent parties, with results summarized in public reports
- Copyright clarification — the bill's text would declare that unauthorized use of copyrighted works for AI training does not qualify as fair use
Preemption scope:
- The bill does not broadly preempt generally applicable state laws (unlike the White House framework)
- Individual titles contain targeted preemption: the chatbot duty-of-care title preempts conflicting state chatbot regulations; the copyright title preempts conflicting state AI-copyright provisions
What the draft does not address:
- Employment discrimination via AI hiring tools
- Financial services AI oversight (defers to SEC/CFPB)
- Healthcare AI (defers to FDA)
- Public sector AI use
Governance Goals
For a small team navigating this environment, the governance goal is not to optimize for any single regulatory scenario — it is to build practices that are durable across scenarios. A starting point is an AI governance policy template that documents approved tools, permitted data uses, and human oversight requirements. The state law regime and the federal proposals share a common core of requirements (a minimal record sketch follows the list):
- Risk assessment before deployment — every major proposal requires some form of risk or impact assessment for AI systems that make consequential decisions
- Human oversight mechanisms — Colorado, Connecticut, and the White House framework all require meaningful human review or override capability for high-stakes AI decisions
- Documentation — without documentation of your risk assessment, oversight procedures, and incident handling, you cannot demonstrate compliance to any regulator
- Transparency to affected individuals — every enacted state law and every federal proposal requires disclosure when AI is making or materially influencing a consequential decision
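The four requirements above translate naturally into a single register entry per AI system. The sketch below is one minimal way to structure that register in Python; every field name and the `audit_gaps` helper are illustrative assumptions, not terms drawn from any statute.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an AI system register. Field names are illustrative,
    not drawn from any statute."""
    name: str
    vendor: Optional[str]          # None if built in-house
    decision_domain: str           # e.g. "employment", "credit", "internal-ops"
    consequential: bool            # makes or materially influences a consequential decision?
    risk_assessment_date: Optional[date] = None
    risk_assessment_doc: Optional[str] = None  # path or URL to the written assessment
    human_oversight: Optional[str] = None      # who can override, and how
    disclosure_text: Optional[str] = None      # what affected individuals are told
    incident_log: list[str] = field(default_factory=list)

    def audit_gaps(self) -> list[str]:
        """Return the shared requirements this system cannot yet evidence."""
        gaps = []
        if self.consequential:
            if self.risk_assessment_date is None:
                gaps.append("risk assessment")
            if self.human_oversight is None:
                gaps.append("human oversight mechanism")
            if self.disclosure_text is None:
                gaps.append("transparency disclosure")
            if self.risk_assessment_doc is None:
                gaps.append("documentation")
        return gaps

if __name__ == "__main__":
    # Hypothetical entry for a vendor-supplied hiring tool.
    resume_screener = AISystemRecord(
        name="resume-screener",
        vendor="ExampleVendor Inc.",   # hypothetical vendor
        decision_domain="employment",
        consequential=True,
    )
    print(resume_screener.audit_gaps())
    # ['risk assessment', 'human oversight mechanism', 'transparency disclosure', 'documentation']
```

Running `audit_gaps()` across the whole register gives a quick view of which systems could not survive a records request today, which is exactly the gap list your compliance work should be closing.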
Risks to Watch
Compliance whiplash: If federal preemption passes quickly, teams that built elaborate state-specific compliance structures may need to rebuild around federal standards. Design for portability.
Gap between preemption and new federal standards: If Congress passes a preemption law before establishing clear federal standards, there is a period where state obligations are gone but federal ones are not yet in effect. This gap creates both uncertainty and reduced accountability — an unstable situation, not a reprieve small teams should plan around.
Third-party audit requirements: If the TRUMP AMERICA AI Act's annual audit requirement becomes law, the cost and logistics of qualifying third-party auditors would be a real operational constraint for small teams. Begin tracking what "qualified auditor" frameworks look like now (IEEE CertifAIed, ISO 42001 certification, Big Four AI audit practices) so you are not starting from zero.
Copyright liability: The bill's treatment of training data copyright would materially change the risk profile of any AI tool your team uses internally or builds on. If your vendor's model was trained on unlicensed data and the bill passes, the liability exposure chain becomes clearer — and potentially reaches deployers.
Controls: What to Actually Do
This week:
- Audit which state AI laws apply to your organization based on where you operate and where your customers are located. Colorado and California obligations are the most broadly scoped.
- Check whether any of your AI systems make consequential decisions about employment, credit, housing, or essential services — these are the high-risk categories in every enacted state law.
- Review your vendor contracts for AI tools: do they include indemnification for regulatory compliance failures? Do they provide documentation you would need for a state audit? An AI vendor security questions checklist covers the key questions to ask third-party AI providers.
This quarter:
- Complete a risk assessment for any AI system that affects individual outcomes. Document methodology, findings, and mitigation decisions.
- Implement a human review or override pathway for any high-risk AI decision. This does not need to be elaborate — it needs to be real and documented (a minimal logging sketch follows this list).
- Draft a simple AI use policy covering what tools are approved, what data is allowed, and what decisions require human review.
- Assign a person to track federal AI legislative progress. A single paragraph summary reviewed monthly is sufficient.
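What "real and documented" can look like in practice: an append-only log of every human review, recording who looked, what the AI recommended, and whether the person changed the outcome. The sketch below is one minimal way to do this in Python; the log path, field names, and the `record_human_review` helper are all illustrative assumptions, not requirements from any statute.

```python
import json
from datetime import datetime, timezone

REVIEW_LOG = "ai_review_log.jsonl"  # illustrative path; append-only JSON Lines

def record_human_review(system: str, subject_ref: str, ai_recommendation: str,
                        reviewer: str, decision: str, rationale: str) -> dict:
    """Append one human-review event to the log.

    `decision` should be 'accepted' or 'overridden': the point is to show
    that a person with real authority looked at the AI output and could
    change it. Field names are illustrative, not statutory."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "subject_ref": subject_ref,  # internal case ID, never raw personal data
        "ai_recommendation": ai_recommendation,
        "reviewer": reviewer,
        "decision": decision,
        "rationale": rationale,
    }
    with open(REVIEW_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: a reviewer overrides an automated credit decline.
record_human_review(
    system="credit-prescreen",
    subject_ref="case-20260414-017",
    ai_recommendation="decline",
    reviewer="j.doe",
    decision="overridden",
    rationale="Income verification documents not available to the model.",
)
```

An append-only JSON Lines file is deliberately low-tech: it is easy to produce, easy to hand to an auditor, and hard to quietly rewrite.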
Ongoing:
- Do not pause state compliance work pending federal preemption — the transition timeline is unpredictable and enforcement risk under state laws is real today.
- Design your documentation practices to be audit-ready under any framework: risk assessments, oversight procedures, incident logs, and disclosure records.
Checklist (Copy/Paste)
- Map which state AI laws apply based on operational footprint and customer base
- Identify all AI systems that make consequential decisions (employment, credit, housing, services)
- Complete risk assessments for high-risk AI systems; document methodology and findings
- Implement human review or override mechanisms for high-stakes AI decisions
- Review vendor contracts for AI governance, indemnification, and documentation provisions
- Draft or update AI use policy — approved tools, permitted data, human oversight requirements
- Assign legislative tracker for federal AI bill progress
- Do not pause state compliance work pending federal preemption outcome
- Begin tracking third-party AI audit frameworks in case annual audit requirement passes
Implementation Steps
- Day 1-3: Pull every AI tool used across the organization. Classify each by the type of decision it supports — administrative, customer-facing, operational, consequential.
- Week 1: Apply the Colorado AI Act high-risk checklist to each consequential AI system. Flag any that touch covered domains (employment, education, housing, financial services, healthcare); a triage sketch follows this list.
- Week 2: For flagged systems, draft a risk assessment using NIST AI RMF's MAP function as a template. Document what could go wrong, how likely it is, and what you are doing about it.
- Week 3: Verify vendor contracts include the documentation and indemnification terms you need. Escalate gaps to legal or procurement.
- Month 2: Implement human review mechanisms for any high-risk AI decisions that lack them. Define what "meaningful human review" means operationally — it requires real authority to override, not rubber-stamping.
- Ongoing: Subscribe to a reliable AI regulatory tracker (IAPP, Future of Privacy Forum, state attorney general newsletters) and review weekly.
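For the Week 1 triage, a simple set-intersection check is enough to sort an inventory into "flag for legal review" versus "monitor". The sketch below is a rough illustration in Python; the domain list paraphrases the Colorado AI Act's consequential-decision categories, and the `classify` helper and inventory entries are hypothetical.

```python
# Covered domains from the Colorado AI Act's definition of a
# "consequential decision" (paraphrased; consult the statute for exact scope).
COVERED_DOMAINS = {
    "education", "employment", "financial services", "government services",
    "healthcare", "housing", "insurance", "legal services",
}

def classify(tool_name: str, decision_domains: set[str],
             materially_influences_decision: bool) -> str:
    """Rough triage of one AI tool against the high-risk pattern shared by
    current state laws. A flag here means 'do the full legal analysis',
    not 'the statute applies'."""
    if not materially_influences_decision:
        return f"{tool_name}: not consequential; standard review"
    touched = decision_domains & COVERED_DOMAINS
    if touched:
        return f"{tool_name}: FLAG high-risk; covered domains: {sorted(touched)}"
    return f"{tool_name}: consequential but outside covered domains; monitor"

# Hypothetical inventory entries.
print(classify("resume-screener", {"employment"}, True))
print(classify("marketing-copy-assistant", {"marketing"}, False))
```

The output is a work queue, not a legal conclusion; anything flagged still needs the full risk assessment described in Week 2.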
Frequently Asked Questions
Q: Should we comply with the Colorado AI Act if we are a small startup? A: If you deploy AI systems that make or materially contribute to consequential decisions affecting Colorado residents, the Colorado AI Act applies regardless of your company size. The statute does not include a small business exemption. That said, enforcement risk scales with materiality and impact — a documented, good-faith compliance effort substantially reduces regulatory exposure.
Q: What counts as a "high-risk AI system" under the TRUMP AMERICA AI Act? A: The discussion draft has not published its final definition. Most current proposals follow the Colorado/EU approach: AI systems making or materially influencing consequential decisions in domains including employment, education, housing, financial services, healthcare, and access to essential services.
Q: Can we be liable for an AI tool we did not build? A: Yes. As a deployer, you take on obligations under both state laws and the proposed federal framework. Using a vendor's AI tool does not transfer liability; it means you share it. Vendor due diligence, contractual protections, and monitoring the AI in your deployment context are all deployer responsibilities.
Q: How quickly could a federal AI law actually pass? A: Historical precedent suggests comprehensive technology regulation takes 2-4 years from first major legislative draft to enactment. The TRUMP AMERICA AI Act is a discussion draft. A floor vote before the 2026 midterms would be unusually fast. Realistic estimate for a signed federal AI law: 2027 at the earliest, more likely 2028.
Q: What if we operate only in states with no AI law? A: You still face federal obligations (FTC Act, EEOC, SEC if applicable), contractual risk with customers who are in regulated states, and growing reputational expectations. The safest approach is to build governance practices that would satisfy the Colorado AI Act as a baseline — it is the most operationally detailed state law currently in force.
References
- White House National Policy Framework for Artificial Intelligence (Holland & Knight analysis, March 2026): https://www.hklaw.com/en/insights/publications/2026/03/white-house-releases-a-national-policy-framework-for-artificial
- TRUMP AMERICA AI Act discussion draft analysis (Latham & Watkins): https://www.lw.com/en/insights/trump-administration-takes-major-steps-toward-comprehensive-federal-ai-regulation
- How the Federal AI Regulation Push Could Impact Your Business (KJK Law, April 1, 2026): https://kjk.com/2026/04/01/how-the-federal-ai-regulation-push-could-impact-your-business/
- NIST AI Risk Management Framework 1.0: https://www.nist.gov/system/files/documents/2023/01/26/AI%20RMF%201.0.pdf
- Colorado AI Act — SB 24-205 full text: https://leg.colorado.gov/bills/sb24-205