TAKE IT DOWN Act Compliance Checklist 2026 — NCII and Deepfake Removal Requirements
The TAKE IT DOWN Act, signed into law on May 19, 2025, created a federal obligation that did not previously exist: online platforms must remove non-consensual intimate images (NCII), including AI-generated deepfakes, within 48 hours of a valid victim request. Covered platforms had one year from enactment, until May 19, 2026, to stand up the required notice-and-removal process.
At a glance:
| Element | Detail |
|---|---|
| Law signed | May 19, 2025 |
| Enforcer | FTC (Section 5 authority) |
| Removal window | 48 hours from valid request |
| Scope | NCII and AI-generated deepfakes of real identifiable people |
| Penalty | Up to $50,000/violation/day |
| Company size exemption | None |
Step 1: Determine if you are in scope
The TAKE IT DOWN Act applies to "online platforms" — broadly defined as any service that:
- Hosts user-generated content (images, video, audio)
- Allows users to upload, share, or transmit intimate images
- Operates an AI image or video generation tool accessible to the public
- Provides cloud storage or CDN services where intimate content could appear
Definitively in scope:
- Social media platforms (Instagram, X, Facebook, Reddit, etc.)
- Adult content platforms regardless of consent-verification model
- AI image generation tools (Stable Diffusion wrappers, custom model deployments)
- Messaging platforms with image-sharing features
Gray zone — likely in scope:
- Developer API services that could be used to distribute NCII
- Cloud storage with public sharing features
- Search engines that index and cache images
Likely out of scope:
- Internal business tools with no public-facing content
- Email providers (private communication, not public distribution)
- Read-only content platforms (no user uploads)
Step 2: The 48-hour removal obligation
The core obligation: once a valid request is received, the covered content must be:
- Removed from the platform within 48 hours
- Blocked from re-upload — technical measures must prevent the same content from being re-posted
- Preserved in records — removal action documented for potential FTC review
The 48-hour window starts at receipt of a valid request, not at the time the content was posted. A request received Friday at 11pm must be processed by Sunday at 11pm.
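A minimal sketch of the deadline arithmetic in Python (the function name and example timestamps are illustrative, not anything prescribed by the Act):

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(received_at: datetime) -> datetime:
    """Statutory removal deadline, measured from receipt of a valid request."""
    return received_at + REMOVAL_WINDOW

# The clock runs on wall time, not business hours, so keep timestamps in UTC.
received = datetime(2026, 1, 2, 23, 0, tzinfo=timezone.utc)  # a Friday, 11pm UTC
print(removal_deadline(received))  # 2026-01-04 23:00:00+00:00 (Sunday, 11pm UTC)
```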
What counts as a valid request
A valid request must:
- Come from the depicted person (or parent/guardian for minors)
- Identify the specific content (URL, description, or enough information to locate it)
- State that the person did not consent to the image being distributed
- Be submitted through the platform's designated reporting mechanism
Platforms are not required to verify a requester's identity beyond reasonable steps, and the Act shields platforms that remove content in good faith in response to a request. Liability for a knowingly false report rests with the reporter, not the platform.
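A minimal sketch of checking these elements at intake; the field names are illustrative, not statutory terms:

```python
from dataclasses import dataclass

@dataclass
class RemovalRequest:
    requester_relation: str   # "self", or "parent_guardian" for minors
    content_locator: str      # URL or description sufficient to find the content
    states_non_consent: bool  # requester affirms distribution was non-consensual

def facially_valid(req: RemovalRequest) -> list[str]:
    """Check the statutory elements; an empty return means the request is valid."""
    problems = []
    if req.requester_relation not in ("self", "parent_guardian"):
        problems.append("must come from the depicted person or a parent/guardian")
    if not req.content_locator.strip():
        problems.append("must identify the content (URL or enough detail to locate it)")
    if not req.states_non_consent:
        problems.append("must state the depicted person did not consent to distribution")
    return problems
```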
Step 3: Build the required intake mechanism
The FTC expects a designated, functional reporting mechanism. This is not optional — the absence of a mechanism is itself a violation.
Minimum viable intake mechanism:
- Dedicated email address for NCII reports (e.g., [email protected]) — or a web form
- Auto-acknowledgment confirming receipt with a ticket number and 48-hour timeline
- Internal queue that routes to a designated reviewer within 1 hour of receipt
- 24/7 processing capability — "business hours only" does not satisfy the 48-hour window
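A minimal sketch of this intake flow; `queue` and `mailer` are stand-ins for whatever task queue and email service you already run, and the field names are illustrative:

```python
import uuid
from datetime import datetime, timedelta, timezone

def intake_ncii_report(reporter_email: str, content_locator: str, queue, mailer) -> str:
    """Minimal intake flow: assign a ticket, acknowledge, route to reviewers."""
    ticket_id = f"NCII-{uuid.uuid4().hex[:8].upper()}"
    received_at = datetime.now(timezone.utc)
    deadline = received_at + timedelta(hours=48)

    # 1. Auto-acknowledge with a ticket number and the 48-hour timeline.
    mailer.send(
        to=reporter_email,
        subject=f"[{ticket_id}] Your report has been received",
        body=(
            f"We received your report at {received_at:%Y-%m-%d %H:%M} UTC. "
            f"Covered content will be removed by {deadline:%Y-%m-%d %H:%M} UTC."
        ),
    )

    # 2. Route to the reviewer queue so a designated reviewer picks it up within the hour.
    queue.enqueue({
        "ticket_id": ticket_id,
        "content_locator": content_locator,
        "received_at": received_at.isoformat(),
        "deadline": deadline.isoformat(),
    })
    return ticket_id
```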
Recommended additions:
- Identity verification step (government ID upload or trusted ID verification service)
- Escalation path for urgent cases (minors, ongoing harassment)
- Clear instructions in plain language on how to submit a request (accessible from your help center and privacy policy)
Step 4: Technical re-upload prevention
Removing the image is not sufficient — the platform must implement measures to prevent the same content from being re-posted.
| Approach | How it works | Limitation |
|---|---|---|
| Hash-matching (PhotoDNA, PDQ) | Generate perceptual hash of removed image; block future uploads matching the hash | Doesn't catch re-encoded or moderately cropped versions |
| NCMEC CyberTipline integration | Submit CSAM/NCII hashes to NCMEC database; block against their hash set | Primarily for CSAM; NCII coverage is growing |
| Content credentials (C2PA) | Track provenance of AI-generated images | Only works if images carry credentials |
| AI-based detection | Train classifier to detect similar content | High false-positive rate without hash anchor |
Practical minimum: Implement hash-matching on all removed NCII. PhotoDNA is available through Microsoft's PhotoDNA Cloud Service; PDQ is open source (Meta). At minimum, block re-upload of the exact file and obvious hash variants.
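A minimal sketch of a perceptual-hash blocklist, using the open-source `imagehash` library's pHash as a stand-in for PDQ or PhotoDNA; the threshold and class names are illustrative, not production values:

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

MATCH_THRESHOLD = 8  # max Hamming distance treated as a match; tune on your own data

class RemovedContentIndex:
    """Perceptual-hash blocklist for removed NCII."""

    def __init__(self):
        self._hashes: list[imagehash.ImageHash] = []

    def add_removed(self, path: str) -> None:
        """Record the hash of a removed image so future uploads can be checked."""
        self._hashes.append(imagehash.phash(Image.open(path)))

    def blocks_upload(self, path: str) -> bool:
        """Return True if an incoming upload matches previously removed content."""
        candidate = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash values yields their Hamming distance.
        return any(candidate - known <= MATCH_THRESHOLD for known in self._hashes)
```

A pure pHash check shares PhotoDNA's limitation from the table above: heavy re-encoding or cropping can evade it, which is why layered measures are recommended.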
Step 5: Obligations for AI image generation tools
If you operate an AI image generation tool (image-to-image, text-to-image, face-swap, video synthesis), additional obligations apply:
- No-consent generation prohibition: Do not generate intimate images of real identifiable people without documented consent. This is the use case the law is specifically targeting.
- Face detection safeguards: If your tool accepts face images as input, implement checks that flag likely real-person inputs for intimate content generation
- Output watermarking: Embed machine-readable marks in generated content (also required under EU AI Act Article 50 from December 2026)
- Terms of service prohibition: Explicitly prohibit using your tool to generate NCII — this establishes that the platform does not facilitate violations
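A minimal sketch of the face-input safeguard described above; `face_detector` and `intimate_intent_classifier` are hypothetical stand-ins for whatever models the platform runs, and the point is the conjunction check, not any specific model:

```python
from typing import Callable, Optional

def guard_generation(
    prompt: str,
    input_image: Optional[bytes],
    face_detector: Callable[[bytes], bool],
    intimate_intent_classifier: Callable[[str], bool],
) -> bool:
    """Return True if generation may proceed, False if it must be refused."""
    intimate_intent = intimate_intent_classifier(prompt)
    real_face_input = input_image is not None and face_detector(input_image)
    if intimate_intent and real_face_input:
        # Likely real-person input plus intimate-content intent: refuse,
        # and log the refusal (see Step 6 for the audit record).
        return False
    return True
```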
Step 6: Records and documentation
Keep records sufficient to demonstrate compliance to the FTC on request:
- Log of all NCII requests received — timestamp, ticket ID, content identifier
- Log of removal actions — timestamp of removal, URL(s) removed
- Log of re-upload prevention measures activated
- Communication records with reporters (acknowledgment sent, status updates)
- Records of requests denied and reason (e.g., content not located, invalid request)
Retention: Minimum 2 years. Some state laws require longer.
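A minimal sketch of a compliance log schema in SQLite covering the record types above; table and column names are illustrative and should map onto your existing ticketing system:

```python
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS ncii_requests (
    ticket_id        TEXT PRIMARY KEY,
    received_at      TEXT NOT NULL,      -- ISO 8601, UTC
    content_locator  TEXT NOT NULL,
    status           TEXT NOT NULL,      -- received / removed / denied
    removed_at       TEXT,
    denial_reason    TEXT,               -- e.g. content not located
    hash_blocked     INTEGER DEFAULT 0,  -- 1 once re-upload prevention is active
    last_ack_sent_at TEXT                -- last acknowledgment or status update
)
"""

conn = sqlite3.connect("ncii_compliance.db")
conn.execute(DDL)
conn.commit()
```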
Penalties and enforcement
The FTC has stated AI-generated NCII is a priority enforcement area alongside Operation AI Comply. Key enforcement parameters:
| Element | Detail |
|---|---|
| Penalty per violation | Up to $50,000/day |
| Each unremediated image | Separate violation |
| Injunctive relief | FTC can restrict platform operations |
| State enforcement | State AGs can also act under state NCII laws |
| Criminal liability | Individual perpetrators (not platforms) face criminal penalties under the Act |
The FTC can act without a prior private lawsuit. Enforcement typically begins with a victim complaint or an FTC-initiated investigation, and a victim report that goes unanswered past the 48-hour window can itself be enough to trigger one.
State NCII laws — additional obligations
Federal law is a floor. Most states have NCII statutes that may impose stricter requirements:
| State | Law | Key addition |
|---|---|---|
| Texas | Tex. Penal Code § 21.16 | Covers AI-generated deepfakes explicitly; no-consent standard |
| California | Cal. Civ. Code § 1708.85 | Civil right of action (private lawsuits) |
| New York | N.Y. Civ. Rights Law § 52-c | Civil action; $1,000 minimum damages |
| Virginia | Va. Code § 18.2-386.2 | Criminal penalty for distribution |
| Illinois | 720 ILCS 5/11-23.5 | Covers AI-generated content; civil and criminal |
Platforms operating in Texas must also comply with the non-consensual deepfake prohibition in TRAIGA (the Texas Responsible AI Governance Act), which applies regardless of intent.
EU parallel — December 2026 deadline
The EU AI Act Omnibus agreement (May 7, 2026) adds an overlapping obligation for EU-market operators: the nudification ban applies from December 2, 2026 to AI systems whose primary purpose is generating sexually explicit images of real people. If you operate in both markets, align your TAKE IT DOWN Act compliance (48-hour removal) with the EU's December deadline for a unified compliance program.
Practical next steps for small teams
- Audit whether you are in scope — hosting user images = likely in scope
- Stand up an ncii-reports@ inbox today — this is the minimum required mechanism
- Configure hash-matching on your content upload pipeline — Azure Content Safety or open-source PDQ
- Add NCII prohibition to your terms of service — shifts liability and establishes platform policy
- Train your content moderation team on the 48-hour window — SLA must be operational 24/7
- Document everything — the FTC will ask for logs
References
- Congress.gov — TAKE IT DOWN Act (S. 146)
- White House — President Trump Signs the TAKE IT DOWN Act
- FTC — Non-consensual intimate imagery enforcement
- Microsoft — PhotoDNA
- National Center for Missing and Exploited Children — CyberTipline
