
San Francisco, CA – Anthropic has unveiled its Frontier Compliance Framework (FCF), aligning with California's Transparency in Frontier AI Act (SB 53), which takes effect January 1, 2026. The framework addresses catastrophic AI risks amid escalating concerns over AI safety.
The FCF outlines Anthropic's assessments of catastrophic risks, including cyber threats, chemical, biological, radiological, and nuclear (CBRN) incidents, AI sabotage, and loss of control over models. It also covers protective practices for model weights and procedures for responding to safety threats. Other large AI labs, including OpenAI and Google DeepMind, will be required to comply with the law as well.
Under SB 53, large frontier AI developers must publish compliance frameworks and update them annually, or within 30 days of material changes. Violations may incur fines of up to $1 million, enforced by the California Attorney General. Critical safety incidents must be reported within 15 days, or within 24 hours if they pose an imminent threat to life.
SB 53's implementation raises questions about the reliability of self-published frameworks. Anthropic's FCF has not been independently verified, and no review by California's Office of Emergency Services was mentioned. The framework's tiered system for model evaluation is also not fully disclosed, possibly to protect trade secrets.
Anthropic is urging the adoption of a national AI transparency framework, advocating federal standards for risk management and whistleblower protection, with the stated aim of improving safety practices across the industry.
