Voluntary commitments by leading AI companies (OpenAI, Anthropic, Google, Microsoft, Meta, and others) to the EU and US governments, covering safety testing before model release, transparency about capabilities, red-teaming, and sharing safety information with governments. Not legally binding.
These voluntary commitments apply to the AI companies that signed them, not to you as a user of their APIs. They matter because they describe baseline safety practices your AI vendors have pledged to follow, including pre-release safety testing and red-teaming. Use them as a benchmark during vendor due diligence: ask whether a vendor has signed similar commitments and what safety testing it conducts.
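One way to make that due diligence repeatable is to track each commitment area as a checklist item per vendor. The sketch below is purely illustrative — the field names and `VendorDueDiligence` class are hypothetical, not an official schema — but the items map to the commitment areas named above (safety testing, transparency, red-teaming, information sharing):

```python
# Hypothetical vendor due-diligence checklist. Field names are illustrative
# assumptions; each maps to one voluntary-commitment area from the text.
from dataclasses import dataclass


@dataclass
class VendorDueDiligence:
    vendor: str
    signed_voluntary_commitments: bool = False
    pre_release_safety_testing: bool = False
    red_teaming: bool = False
    capability_transparency: bool = False
    shares_safety_info_with_governments: bool = False

    def open_questions(self) -> list[str]:
        """Return the commitment areas the vendor has not yet confirmed."""
        checks = {
            "signed voluntary commitments": self.signed_voluntary_commitments,
            "pre-release safety testing": self.pre_release_safety_testing,
            "red-teaming": self.red_teaming,
            "transparency about capabilities": self.capability_transparency,
            "sharing safety info with governments": self.shares_safety_info_with_governments,
        }
        return [area for area, confirmed in checks.items() if not confirmed]


# Example: a vendor that has so far confirmed only testing and red-teaming.
review = VendorDueDiligence(
    "ExampleVendor",
    pre_release_safety_testing=True,
    red_teaming=True,
)
print(review.open_questions())
# → ['signed voluntary commitments', 'transparency about capabilities',
#    'sharing safety info with governments']
```

The remaining items in `open_questions()` become the questions to put to the vendor before relying on their models in production.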