Thursday, September 11, 2025

Anthropic Breaks Ranks and Backs California’s AI Safety Bill SB 53

  • Anthropic endorsed SB 53, California’s landmark AI safety bill, despite tech lobbying against it.
  • The bill would require transparency, safety frameworks, and whistleblower protections for big AI companies.
  • Critics argue state-level laws hurt innovation and may violate the Constitution’s Commerce Clause.
  • Governor Gavin Newsom has yet to take a position, after vetoing a prior AI safety bill in 2024.

Anthropic Breaks with Industry Rivals

In a move that could reshape the debate over artificial intelligence regulation, San Francisco–based Anthropic has thrown its support behind SB 53, a California bill designed to hold the largest AI developers accountable for how they deploy their most advanced models.

The endorsement, announced Monday, marks a rare split in the tech industry. While groups like the Consumer Technology Association and the Chamber of Progress have spent months lobbying against the bill, Anthropic called SB 53 “a solid path” toward responsible AI oversight.

“Powerful AI advancements won’t wait for consensus in Washington,” the company said in a blog post. “The question isn’t whether we need AI governance, it’s whether we’ll develop it thoughtfully today or reactively tomorrow.”

The statement highlights a growing divide in Silicon Valley between firms that want to delay regulation and those that see state action as unavoidable.

What SB 53 Would Require

SB 53, authored by state senator Scott Wiener, zeroes in on the world’s biggest AI companies: OpenAI, Anthropic, Google DeepMind, and Elon Musk’s xAI.

The bill would require these developers to:

  • Establish formal safety frameworks before releasing their most powerful models.
  • Publish public safety and security reports outlining potential risks.
  • Protect whistleblowers who raise safety concerns.

Importantly, the legislation targets only extreme “catastrophic risks,” defined as scenarios that could cause at least 50 deaths or more than $1 billion in damage. That includes AI models being misused to develop biological weapons or coordinate cyberattacks.

The bill does not address lower-level risks such as disinformation, deepfakes, or election manipulation. Instead, it focuses narrowly on preventing worst-case scenarios that could cause mass harm.

Pushback from Silicon Valley and Washington

Despite its narrowed scope, SB 53 has attracted strong opposition. Critics argue that state-level AI rules could hamper innovation, drive startups out of California, and risk clashing with federal law.

Andreessen Horowitz, one of Silicon Valley’s most influential venture capital firms, has been among the most vocal opponents. Matt Perault, the firm’s head of AI policy, and Jai Ramaswamy, its chief legal officer, warned last week that bills like SB 53 could run afoul of the Constitution’s Commerce Clause, which courts have interpreted as limiting states’ power to regulate interstate commerce.

The Trump administration has echoed those concerns, with officials repeatedly warning they could block states from imposing their own AI regulations. Their argument is that a patchwork of state laws could weaken U.S. competitiveness in the global race against China.

Lobbying has also come from inside the AI industry itself. In August, OpenAI’s chief global affairs officer, Chris Lehane, sent Governor Newsom a letter urging him to avoid new AI regulations that could push companies out of the state. Although the letter did not mention SB 53 directly, OpenAI has consistently opposed state-level bills.


That position drew backlash from some in the AI policy community. Miles Brundage, OpenAI’s former head of policy research, slammed Lehane’s letter as “filled with misleading garbage about SB 53 and AI policy generally.”

From SB 1047 to SB 53: A Shift in Strategy

This is not Senator Wiener’s first attempt to legislate AI. Last year, his earlier proposal, SB 1047, was vetoed by Governor Newsom following heavy lobbying from investors and companies who argued it went too far.

SB 53 reflects lessons from that defeat. The new bill is narrower, with clearer definitions and fewer sweeping requirements. Earlier drafts included a mandate for third-party audits of AI companies, but lawmakers dropped that provision after pushback from the industry.

Dean Ball, a senior fellow at the Foundation for American Innovation and a former White House AI adviser, said this summer that the bill shows “respect for technical reality” and “a measure of legislative restraint.” Unlike SB 1047, he argued, SB 53 has a real chance of becoming law.

Wiener himself has said the bill was shaped by an expert panel convened by Newsom, which included Stanford professor and AI researcher Fei-Fei Li. That panel recommended guardrails to reduce catastrophic risks from the largest AI models.

Why Anthropic’s Support Matters

Anthropic’s endorsement carries weight because it is one of the very companies that SB 53 would regulate. Alongside OpenAI and Google, it is a leading developer of large-scale frontier models.


Jack Clark, Anthropic’s co-founder, explained the company’s position in a post on X. “We have long said we would prefer a federal standard,” he wrote. “But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored.”

Most major AI labs, including Anthropic, already publish voluntary safety reports for their models. But compliance is self-imposed, and companies often fall short of their own commitments. SB 53 would turn those voluntary practices into legal obligations, with financial penalties for noncompliance.

By endorsing the bill, Anthropic is signaling that it is willing to accept legally binding oversight, even while its competitors warn of the risks. That move may put pressure on Governor Newsom, who has yet to reveal whether he will support or veto the measure.

What Comes Next

California’s Senate has already approved a version of SB 53, and a final vote will determine whether it advances to the governor’s desk. Newsom’s decision could have national consequences: as home to nearly every major AI lab, California is in a unique position to set the tone for regulation across the country.

If signed, SB 53 would be the first state law in the U.S. specifically aimed at frontier AI safety. It could also spark challenges in court from opponents who claim the law intrudes on federal authority or violates the Commerce Clause.

For now, Anthropic’s support has shifted the momentum in Sacramento. After months of heavy lobbying against the bill, Wiener and his allies can point to one of the industry’s biggest names as proof that the legislation is not incompatible with innovation.

Follow TechBSB For More Updates

Rohit Belakud
