Published May 14, 2026. The TAKE IT DOWN Act was signed by President Trump on May 19, 2025. Nearly one year in, many platforms still lack a takedown process that meets the 48-hour requirement. This article covers what compliance actually looks like.
The TAKE IT DOWN Act became federal law on May 19, 2025, the first US federal law to directly address AI-generated non-consensual intimate imagery (NCII). Nearly twelve months later, the most common failure mode is not malice. It is platforms that intended to build a takedown process and have not finished it. The 48-hour requirement is operationally harder than it looks: intake, identity verification, content location, removal, confirmation, re-upload prevention, all inside two days, including weekends.
What the Law Does
The TAKE IT DOWN Act has two distinct parts:
| Part | What it covers | Who it applies to |
|---|---|---|
| Platform takedown obligation | Platforms must remove NCII (including AI deepfakes) within 48 hours of verified request | Covered online platforms with US users |
| Publication prohibition | Publishing, distributing, or transmitting NCII or AI-generated CSAM is a federal crime | Individuals and entities; no platform threshold |
The 48-Hour Removal Requirement
This is the core operational challenge for covered platforms.
What triggers the clock: A verified takedown request from the person depicted, or their authorized legal representative, identifying the non-consensual intimate imagery at issue.
What must happen within 48 hours (a tracking sketch follows this list):
- Receive and verify the request
- Locate the reported content
- Remove it from the platform
- Prevent re-upload (reasonable steps)
- Confirm removal to the requester
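To make the timeline concrete, here is a minimal sketch of how a trust-and-safety pipeline might track each request against the statutory clock. The `TakedownRequest` structure, the status values, and the field names are illustrative assumptions, not anything prescribed by the statute or the FTC.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum

SLA = timedelta(hours=48)  # statutory removal window

class Status(Enum):
    RECEIVED = "received"
    VERIFIED = "verified"
    LOCATED = "located"
    REMOVED = "removed"
    CONFIRMED = "confirmed"  # requester notified of removal

@dataclass
class TakedownRequest:
    request_id: str
    received_at: datetime          # clock starts at receipt
    status: Status = Status.RECEIVED

    @property
    def deadline(self) -> datetime:
        # 48 wall-clock hours: no pause for weekends or holidays
        return self.received_at + SLA

    def time_remaining(self) -> timedelta:
        return self.deadline - datetime.now(timezone.utc)

req = TakedownRequest("req-001", received_at=datetime.now(timezone.utc))
print(req.deadline, req.time_remaining())
```

The point of modeling the deadline explicitly is that 48 wall-clock hours leaves no slack for a Friday-evening request sitting in a Monday-morning queue.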
What "verified" means in practice: The law does not specify an exact verification standard. Platforms need a process that confirms the requester is the person depicted or authorized to act for them. This does not require government ID — but a process that accepts anonymous requests without any verification would likely not satisfy the statute.
Who Is a Covered Platform
| Platform type | Coverage |
|---|---|
| Social media platforms | Yes |
| Video hosting platforms | Yes |
| Image hosting and sharing services | Yes |
| AI content generation platforms | Yes — if users can share generated content |
| Adult content platforms | Yes |
| Messaging apps with public content features | Yes |
| Small single-operator websites | Lower risk — consult counsel |
| Private messaging (no public content) | Not covered for takedown obligation |
AI-Specific Obligations
The law explicitly covers AI-generated NCII. Platforms that offer or enable AI content generation have additional exposure:
| Scenario | Obligation |
|---|---|
| Platform allows users to generate AI images/video | Must process takedown requests for AI-generated NCII |
| Platform uses AI to generate content of real people | Publication prohibition applies to the platform as publisher |
| Platform hosts user-uploaded AI-generated content | Must remove on verified request within 48 hours |
| Platform provides AI tools that can produce NCII | Proactive moderation expected (FTC has broad Section 5 authority) |
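What "proactive moderation" means in practice is not spelled out anywhere; a plausible minimum for generation platforms is gating output before delivery. In the sketch below, `classify_intimate_depiction` is a hypothetical stand-in for whatever classifier the platform actually runs; nothing here is a real library call or an FTC-endorsed technique.

```python
# Hypothetical pre-delivery gate for an AI image-generation endpoint.

BLOCK_THRESHOLD = 0.8  # tune against the platform's own evaluation set

def classify_intimate_depiction(image_bytes: bytes) -> float:
    """Stub risk score in [0, 1]; replace with a real model call."""
    return 0.0  # always-pass placeholder

def deliver_generated_image(image_bytes: bytes) -> bytes:
    """Refuse delivery of generated output that scores as likely NCII."""
    if classify_intimate_depiction(image_bytes) >= BLOCK_THRESHOLD:
        # Block and queue for human review instead of returning content
        # that may depict an identifiable real person.
        raise PermissionError("generation blocked pending review")
    return image_bytes
```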
7-Point Compliance Checklist
1. Determine coverage — assess whether your platform hosts user-generated content accessible to US users; consult counsel if coverage is unclear
2. Build a takedown request intake process — a dedicated form, email address, or in-app mechanism for NCII takedown requests; document the identity verification steps
3. Establish a 48-hour SLA with monitoring — map the end-to-end process (intake → verification → content location → removal → confirmation) and measure current latency; the 48-hour window runs in wall-clock time, weekends included
4. Implement re-upload prevention — use perceptual hashing or similar tooling to detect re-uploads of removed NCII; document the technical approach (see the sketch after this checklist)
5. Designate a responsible contact — identify the person or team accountable for NCII takedown compliance; they should be reachable within the 48-hour window, including weekends
6. Update your content policies — explicitly prohibit NCII and AI-generated NCII in your Terms of Service; add both to your prohibited content list
7. Brief legal and trust-and-safety teams — communicate the Act's requirements and the FTC enforcement risk to everyone who handles content policy or content moderation
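Point 4 is the most tooling-heavy item, so here is a minimal sketch of perceptual-hash matching using the third-party `Pillow` and `imagehash` packages; this is one common open-source option, not the only one, and the distance threshold is an assumption to tune on your own data.

```python
# pip install Pillow imagehash
from PIL import Image
import imagehash

# Perceptual hashes of previously removed NCII; persist these in practice.
removed_hashes: set[imagehash.ImageHash] = set()

MAX_DISTANCE = 8  # Hamming-distance threshold; tune on real data

def record_removal(path: str) -> None:
    """Hash removed content so future uploads can be matched."""
    removed_hashes.add(imagehash.phash(Image.open(path)))

def is_likely_reupload(path: str) -> bool:
    """True if an upload is perceptually close to removed content."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in removed_hashes)
```

Perceptual hashes survive re-encoding and light edits but not heavy cropping or mirroring, which is why cross-industry hash-sharing programs such as StopNCII exist alongside in-house matching.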
How This Interacts with State Laws
The TAKE IT DOWN Act is a federal floor, not a ceiling. Several state laws provide parallel or stronger protections:
| Law | What it adds beyond TAKE IT DOWN Act |
|---|---|
| Washington AI likeness law (effective June 10, 2026) | Private right of action — victims can sue creators and distributors directly |
| California NCII law | Existing state law; private right of action |
| Texas deepfake law | Covers electoral deepfakes; criminal penalties |
| New York deepfake law | Covers sexual and electoral deepfakes |
The TAKE IT DOWN Act does not preempt state laws. Platforms need to comply with the federal Act AND applicable state laws.
FTC Enforcement
The FTC enforces the TAKE IT DOWN Act under its Section 5 authority. Platforms that:
- Fail to establish a takedown process
- Fail to remove content within 48 hours after a verified request
- Allow re-uploads of reported content without reasonable prevention
...are exposed to FTC enforcement and civil penalties of up to $53,088 per violation, with each day of a continuing violation potentially counting as a separate violation.
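To put a number on that exposure, a back-of-envelope illustration (the figures are hypothetical, and the per-day accrual assumes the FTC treats each day of continuing noncompliance as a separate violation):

```python
PENALTY_PER_VIOLATION = 53_088  # current inflation-adjusted Section 5 cap

unremoved_items = 10   # hypothetical: requests left unactioned
days_outstanding = 5   # each day can count as a separate violation

print(f"${PENALTY_PER_VIOLATION * unremoved_items * days_outstanding:,}")
# -> $2,654,400 in theoretical maximum exposure
```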
The law gave covered platforms one year to implement compliant notice-and-removal procedures, a deadline that arrives on May 19, 2026, days after this article's publication. Platforms that have not built a compliant process by then face immediate FTC enforcement exposure. The FTC has been actively expanding AI enforcement (Operation AI Comply), and TAKE IT DOWN Act enforcement is expected to be a priority given the law's bipartisan support and high political visibility.
Sources: TAKE IT DOWN Act (signed May 19, 2025), FTC press releases, Congressional Record. This article reflects the law as signed; FTC implementation guidance may add requirements. Check ftc.gov for current enforcement actions and guidance.
