Anthropic Releases AI Compliance Framework under New California Law

California enforces AI transparency with SB 53. Anthropic leads compliance efforts.
Published: January 2, 2026

California Pioneers AI Regulation

San Francisco, CA – Anthropic has unveiled its Frontier Compliance Framework (FCF) to align with California's Transparency in Frontier AI Act (SB 53), which took effect January 1, 2026. The framework addresses catastrophic AI risks amid escalating concerns over AI safety.

FCF Details Announced

The FCF outlines Anthropic's assessments of cyber threats; chemical, biological, radiological, and nuclear (CBRN) incidents; AI sabotage; and loss of control. It also covers protective practices for model weights and responses to safety threats. Other AI labs, such as OpenAI and Google DeepMind, will also be required to comply.

Mandatory Compliance Impacts

All AI developers in California must publish compliance frameworks and update them annually or within 30 days of material changes. Violations of SB 53 may incur fines of up to $1 million, enforced by the California Attorney General. Critical incidents must be reported within 15 days, or within 24 hours if they are life-threatening.

Addressing Uncertainties

SB 53's implementation raises questions about self-published frameworks. Anthropic's FCF lacks independent verification, and no review by California's Office of Emergency Services has been mentioned.

The FCF's tiered system for model evaluation is not fully disclosed, conceivably to protect trade secrets.

The Path Forward

Anthropic is urging adoption of a national AI transparency framework, advocating federal standards for risk management and whistleblower protection. The initiative aims to improve safety practices across the industry.
