As California Governor Gavin Newsom approaches the September 30 deadline to sign or veto SB 1047, the AI safety bill is stirring intense debate. The bill, which would establish the most comprehensive AI safety law in the U.S., has drawn vocal support from advocacy groups while facing fierce opposition from tech giants and investors. Newsom now faces a critical decision that could shape the future of AI governance not just in California but nationwide.
Pressure from Advocacy Groups
The bill has drawn support from high-profile organizations, including SAG-AFTRA, the National Organization for Women (NOW), and Fund Her. These groups have sent letters to Governor Newsom urging him to sign the bill, warning of the potential dangers of unregulated AI.
SAG-AFTRA, the performers’ union, emphasizes the risks AI poses to various sectors, particularly the entertainment industry, where AI could be used to exploit intellectual property or undermine labor rights. Its letter also highlights SB 1047’s central requirement that AI developers test for and mitigate potential disasters, such as AI-triggered cyberattacks or the creation of bioweapons. NOW and Fund Her argue that unregulated AI could disproportionately harm vulnerable populations, amplifying issues such as discrimination and inequality.
Industry Pushback
While advocacy groups are pushing for regulation, the tech industry is mounting a significant counter-campaign. Companies like Google, Meta, and OpenAI, as well as major investors including Y Combinator (YC) and Andreessen Horowitz (a16z), have voiced strong opposition to SB 1047. Their concerns revolve around the bill’s potential to stifle innovation and create a regulatory burden that could drive AI companies out of California.
One of the key points raised by these opponents is the fear that the bill could push AI development to countries with more lenient regulations, such as China, leading to a loss of U.S. leadership in AI. They also argue that SB 1047 could damage the open-source community by imposing strict liability on AI developers, potentially hindering collaboration and progress.
The Stakes of SB 1047
Authored by state Senator Scott Wiener, SB 1047 would hold developers of next-generation AI models accountable for disasters caused by their technologies if they fail to implement proper safeguards. It includes civil liability for developers and whistleblower protections for employees of AI companies, a provision that has garnered support from OpenAI whistleblowers Daniel Kokotajlo and William Saunders.
The bill is seen as a landmark piece of legislation that could set a precedent for AI regulation in the U.S. Currently, the country relies primarily on voluntary commitments and self-regulation from the AI industry. If signed into law, SB 1047 would break new ground by introducing enforceable safety standards, shifting the balance of power between tech companies and regulatory bodies.
Supporters of the bill, including some within the tech community, argue that these safeguards are necessary to prevent AI-enabled catastrophes. Dario Amodei, CEO of AI company Anthropic, has called the threats from companies to leave California “just theater,” pointing out that many AI firms already do business in California, the world’s AI hub.
Newsom’s Dilemma
For Governor Newsom, the decision is a complex one. On one side, he faces mounting pressure from advocacy groups that warn of the risks of unregulated AI. On the other, powerful tech companies and investors argue that the bill could harm the state’s economy and push innovation elsewhere.
Because California is both a center of AI development and a major global economy, its decision on SB 1047 will have ripple effects across the tech industry. If Newsom signs the bill, it could lead to stricter AI regulations nationwide, influencing future federal legislation. If he vetoes it, California will remain reliant on industry self-regulation, leaving the door open for AI companies to operate without formal oversight.
State Senator Scott Wiener remains optimistic, stating, “My experience with Gavin Newsom is — agree or disagree — he makes thoughtful decisions based on what he thinks is best for the state.”
What’s Next?
The future of AI safety in California and beyond now rests in the hands of Governor Newsom. With a decision expected by September 30, the outcome of this debate will likely set the stage for how the U.S. approaches AI regulation in the years to come.
As both sides await Newsom’s decision, one thing is clear: the stakes are high, and the future of AI governance is on the line.