Texas TRAIGA Compliance Checklist 2026 — Responsible AI Governance Act
TRAIGA (the Texas Responsible AI Governance Act, signed June 22, 2025) took effect January 1, 2026. It applies to every developer and deployer of AI systems used by Texas residents, regardless of where your company is based.
TRAIGA at a glance:
| Element | Details |
|---|---|
| Effective date | January 1, 2026 |
| Who it covers | Developers + deployers using AI with Texas residents |
| Liability standard | Intent-based — must prove intentional misconduct |
| Safe harbor | Substantial NIST AI RMF compliance |
| Enforcement | Texas AG only (no private right of action) |
| Notice to cure | Required before enforcement action |
| Penalties | $10,000–$200,000 per violation, tiered by curability |
Step 1: Determine if TRAIGA applies to you
TRAIGA applies if you:
- Develop AI systems deployed to Texas residents
- Deploy AI systems used by Texas residents
- Conduct business in Texas using AI systems
- Market AI systems into Texas
Out of scope: Academic research, national security applications, and AI systems used solely for internal testing without consumer interaction.
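The scope questions above can be screened mechanically before you involve counsel. A minimal sketch in Python — the parameter names and the single `exempt_use` flag are illustrative simplifications, not statutory terms:

```python
def traiga_applies(*, develops_for_texas: bool, deploys_to_texas: bool,
                   conducts_business_in_texas: bool, markets_into_texas: bool,
                   exempt_use: bool = False) -> bool:
    """Rough scope screen mirroring the Step 1 checklist.

    Illustrative only: statutory scope turns on facts a lawyer should
    review. `exempt_use` stands in for the academic-research,
    national-security, and internal-testing carve-outs.
    """
    if exempt_use:
        return False
    # Any one trigger is enough to bring TRAIGA into play.
    return any([develops_for_texas, deploys_to_texas,
                conducts_business_in_texas, markets_into_texas])

print(traiga_applies(develops_for_texas=False, deploys_to_texas=True,
                     conducts_business_in_texas=False, markets_into_texas=False))
# → True
```

Treat a `True` result as "do the full analysis", not as a legal conclusion.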
Step 2: Prohibited uses — absolute prohibitions
TRAIGA prohibits developing or deploying AI systems with intent to:
| Prohibited use | Notes |
|---|---|
| Behavioral manipulation toward self-harm, harm to others, or criminal activity | "Intent" required — accidental misuse is different from designed manipulation |
| Discrimination against protected classes | Unlawful discrimination, not disparate impact per se |
| Infringement of constitutional rights | Applies to state actors; private entities remain bound by their existing anti-discrimination obligations |
| Non-consensual deepfakes of identifiable individuals | Video, audio, image — sexual content and otherwise |
| Child sexual abuse material (CSAM) | Absolute prohibition regardless of intent standard |
| Encouraging or facilitating serious criminal activity | Covers AI-assisted fraud, cyberattacks, and similar |
Red-flag checklist:
- Have you reviewed your system's outputs for manipulation pathways?
- Do your terms of service explicitly prohibit deepfake generation of real individuals?
- Are CSAM filters implemented at the model layer, not just the UI layer?
Step 3: Developer obligations
If you build AI systems and license or sell them to deployers:
- System documentation package: description, intended uses, out-of-scope uses
- Risk disclosure: known failure modes, performance limitations, hallucination rates where measurable
- Data provenance: training data sources and known gaps
- Deployment guidance: recommended safeguards, minimum infrastructure requirements
- Performance benchmarks: accuracy, false positive/negative rates for the intended use case
- Support contact: point of contact for deployer questions and incident reporting
- Update notifications: process for notifying deployers of material changes to the system
This documentation must be sufficient for a deployer to conduct their own impact assessment without specialized ML knowledge.
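The documentation items above lend themselves to a structured record that travels with each release. A minimal sketch in Python — TRAIGA does not prescribe a schema, so every field name here is an illustrative choice:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class SystemDocumentation:
    """Developer documentation package for one AI system release.

    Field names are illustrative, not mandated by the statute; the goal
    is that a deployer can run their own impact assessment from this.
    """
    system_name: str
    version: str
    description: str
    intended_uses: list = field(default_factory=list)
    out_of_scope_uses: list = field(default_factory=list)
    known_failure_modes: list = field(default_factory=list)
    training_data_sources: list = field(default_factory=list)
    benchmarks: dict = field(default_factory=dict)  # e.g. {"accuracy": 0.94}
    support_contact: str = ""

    def missing_fields(self) -> list:
        """Names of fields left empty — usable as a pre-release gate."""
        return [name for name, value in asdict(self).items() if not value]

doc = SystemDocumentation(
    system_name="resume-screener",
    version="2.1.0",
    description="Ranks applicant resumes against a job description.",
    intended_uses=["initial resume triage with human review"],
)
print(doc.missing_fields())
```

A non-empty `missing_fields()` result is a cheap signal that the package is not yet ready to hand to deployers.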
Step 4: Deployer obligations
If you use AI systems in a product or service that reaches Texas residents:
Impact assessment
- Conduct an AI impact assessment before deploying a new AI system or materially changing an existing one
- Document: intended use, affected populations, potential harms (physical, financial, reputational, privacy)
- Assess likelihood and severity of identified harms
- Document mitigations and residual risk acceptance
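The likelihood-and-severity step above can be recorded as a simple risk matrix. A hedged sketch — the 1–5 scales and the "high risk" threshold of 12 are internal policy choices, not statutory requirements:

```python
from dataclasses import dataclass

@dataclass
class Harm:
    description: str
    category: str      # "physical" | "financial" | "reputational" | "privacy"
    likelihood: int    # 1 (rare) .. 5 (near-certain) — illustrative scale
    severity: int      # 1 (minor) .. 5 (severe)
    mitigation: str = ""

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.severity

def residual_high_risks(harms, threshold=12):
    """Harms whose likelihood x severity sits at or above the threshold."""
    return [h for h in harms if h.risk_score >= threshold]

harms = [
    Harm("wrongful loan denial", "financial", likelihood=3, severity=5,
         mitigation="human review of all denials"),
    Harm("chat transcript leakage", "privacy", likelihood=2, severity=3),
]
print([h.description for h in residual_high_risks(harms)])
# → ['wrongful loan denial']  (scores 15 and 6 against a threshold of 12)
```

Anything that survives `residual_high_risks` after mitigation is what the "residual risk acceptance" documentation should address.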
Governance program
- Assign responsibility for AI oversight to a named role
- Document AI systems in an inventory (see AI tool register template)
- Establish a process for receiving and acting on user complaints about AI decisions
- Implement human oversight for high-stakes decisions (credit, employment, healthcare, housing)
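The inventory item above can start as a flat register exported to your policy repository. A minimal sketch — the columns are our suggestion, not a TRAIGA-mandated format:

```python
import csv
import io
from dataclasses import dataclass, fields

@dataclass
class InventoryEntry:
    system: str
    vendor: str
    owner: str             # named role responsible for AI oversight
    high_stakes: bool      # credit / employment / healthcare / housing
    last_assessment: str   # ISO date of the most recent impact assessment

def to_register_csv(entries) -> str:
    """Serialize the AI inventory to CSV for the policy repository."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(f.name for f in fields(InventoryEntry))
    for e in entries:
        writer.writerow([e.system, e.vendor, e.owner,
                         e.high_stakes, e.last_assessment])
    return buf.getvalue()

register = [
    InventoryEntry("chat-support-bot", "Acme AI", "Head of Support",
                   False, "2026-01-15"),
    InventoryEntry("loan-scoring", "in-house", "Chief Risk Officer",
                   True, "2026-02-01"),
]
print(to_register_csv(register))
```

Keeping the `owner` column populated doubles as evidence for the "named role" governance requirement.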
Consumer notices
- Disclose to consumers when they are interacting with an AI system (before or at time of interaction)
- Disclose when AI-generated content is presented as factual (deepfakes, synthetic media)
- For government deployers: disclosure is required even when AI nature would be obvious
Ongoing monitoring
- Monitor deployed systems for drift and unexpected behavior
- Log material incidents and near-misses
- Re-run impact assessments when the system is materially updated
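The monitoring steps above can hang off a simple incident log that also flags when a fresh impact assessment is due. A sketch — the schema and the one-material-incident threshold are policy choices, not statutory rules:

```python
import datetime as dt

class IncidentLog:
    """Minimal in-memory log of material incidents and near-misses.

    Illustrative only: TRAIGA requires monitoring and re-assessment,
    not this particular schema.
    """
    def __init__(self):
        self.entries = []

    def record(self, system: str, summary: str, material: bool):
        self.entries.append({
            "when": dt.datetime.now(dt.timezone.utc).isoformat(),
            "system": system,
            "summary": summary,
            "material": material,
        })

    def needs_reassessment(self, system: str,
                           material_threshold: int = 1) -> bool:
        """Flag a system for a fresh impact assessment once it accumulates
        enough material incidents — the threshold is a policy choice."""
        count = sum(1 for e in self.entries
                    if e["system"] == system and e["material"])
        return count >= material_threshold

log = IncidentLog()
log.record("loan-scoring", "score drift vs. Q4 baseline", material=True)
log.record("chat-support-bot", "minor formatting glitch", material=False)
print(log.needs_reassessment("loan-scoring"))   # → True
```

In practice this would write to durable storage; the point is that near-misses are captured even when they do not trigger a re-assessment.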
Step 5: NIST AI RMF safe harbor
Substantial compliance with the NIST AI Risk Management Framework (AI RMF 1.0) is an affirmative defense against TRAIGA enforcement: an organization that demonstrates good-faith, documented NIST AI RMF compliance can raise it to defeat an AG action.
To qualify for the safe harbor:
- Formally adopt the NIST AI RMF — document the adoption decision
- Complete GOVERN, MAP, MEASURE, and MANAGE functions for material AI systems
- Maintain records of assessments, results, and actions taken
- Make records available to the Texas AG on request
Note: Citing NIST without documented implementation does not qualify. The AG will ask for evidence.
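A minimal way to track the evidence requirement above is a per-function artifact map. The function names come from NIST AI RMF 1.0; the evidence-path convention is our own illustration:

```python
# The four core functions of NIST AI RMF 1.0.
RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

def safe_harbor_gaps(evidence: dict) -> list:
    """Return RMF functions with no recorded evidence for a system.

    `evidence` maps function name -> list of document paths or links.
    An empty result means every function has at least one artifact —
    necessary (not sufficient) for a documented-compliance claim.
    """
    return [fn for fn in RMF_FUNCTIONS if not evidence.get(fn)]

evidence = {
    "GOVERN": ["policies/ai-governance-charter.md"],
    "MAP": ["assessments/loan-scoring-context.md"],
    "MEASURE": [],  # benchmark results not yet filed
    "MANAGE": ["runbooks/loan-scoring-incident-response.md"],
}
print(safe_harbor_gaps(evidence))   # → ['MEASURE']
```

Any function appearing in the gap list is exactly the "citing NIST without documented implementation" failure mode the note warns about.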
Step 6: Enforcement and penalties
| Element | Detail |
|---|---|
| Who enforces | Texas Attorney General (exclusive — no private right of action) |
| Notice requirement | AG must provide written notice and opportunity to cure before filing |
| Cure period | 60 days from written notice |
| Penalty range | $10,000–$12,000 per curable violation; $80,000–$200,000 per uncurable violation; $2,000–$40,000 per day for continuing violations |
| Escalation | Repeat violations after cure notice carry higher penalties |
| Market action | AG can seek injunctions requiring a system to be taken offline |
What triggers enforcement: Complaints from Texas residents or agencies, AG-initiated investigations, and referrals from other state regulators. Intent to harm or discriminate must be demonstrable — accidental harms alone do not trigger liability.
Federal preemption note
The Trump administration's December 2025 executive order directed the development of a federal AI framework intended to preempt inconsistent state laws. As of May 2026, no federal statute implementing preemption has been enacted: TRAIGA is in force and the Texas AG is enforcing it. Consult counsel before assuming federal preemption protects you; it does not yet.
Practical next steps for small teams
- Run the scope check (Step 1) — confirm whether TRAIGA applies at all
- Audit your AI inventory against prohibited uses — document the review
- Download the NIST AI RMF 1.0 and formally adopt it — the safe harbor is worth the paperwork
- Add consumer notices to any AI-facing interfaces — one banner, one disclosure, minimal friction
- Store your impact assessments in your policy repository alongside your vendor contracts
Related reading
- AI risk assessment for small teams
- AI vendor due diligence in 30 minutes
- Connecticut AI law 2026 — another active state AI law
- Colorado AI Act SB 189 rewrite — Colorado's revised framework
References
- Texas Legislature — Responsible Artificial Intelligence Governance Act (HB 149)
- Norton Rose Fulbright — The Texas Responsible AI Governance Act: What your company needs to know
- Baker Botts — Texas Enacts Responsible AI Governance Act: What Companies Need to Know
- IAPP — Texas Responsible AI Governance Act compliance: A sample policy framework
