Requires developers of large AI models (those trained using more than 10²⁶ FLOPs of compute) to implement safety testing, whistleblower protections for employees who report safety concerns, and incident reporting for safety-critical failures. It is narrower in scope than the vetoed SB 1047.
The 10²⁶ FLOP threshold is extremely high: this law targets frontier model developers like OpenAI, Anthropic, and Google DeepMind, not companies building products on top of their models. If you are using APIs from these providers, you are not directly subject to SB 53. However, the law creates indirect accountability: your AI vendors must now have safety testing and incident reporting programs, which improves transparency into the models you rely on.
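To get a feel for the scale of that threshold, a rough sketch follows. It is not from the bill's text: it uses the common "6 × parameters × training tokens" approximation for training compute, and the model sizes and token counts are hypothetical, chosen only to show which scale of training run crosses 10²⁶ FLOPs.

```python
# Rough illustration (not from the bill): training compute is commonly
# approximated as ~6 * parameters * training tokens (the "6ND" heuristic).
# Model sizes and token counts below are hypothetical examples.

THRESHOLD_FLOPS = 1e26  # SB 53's frontier-model threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs using the 6ND heuristic."""
    return 6 * params * tokens

examples = {
    "70B params, 15T tokens":  estimated_training_flops(70e9, 15e12),   # ~6.3e24
    "400B params, 30T tokens": estimated_training_flops(400e9, 30e12),  # ~7.2e25
    "1T params, 40T tokens":   estimated_training_flops(1e12, 40e12),   # ~2.4e26
}

for name, flops in examples.items():
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: {flops:.1e} FLOPs ({status} the 10^26 threshold)")
```

Under this approximation, even a 400B-parameter model trained on tens of trillions of tokens stays under the threshold; only the largest frontier-scale training runs exceed it, which is why typical application developers fall outside the law's direct scope.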