California Gov. Gavin Newsom has vetoed the controversial AI safety bill SB 1047, sending regulation of tech’s hottest industry back to square one.
The legislation opened deep rifts in the tech community—pitting the likes of Elon Musk, Anthropic, and much of Hollywood against Sam Altman, Nancy Pelosi, and large VC firms like Andreessen Horowitz. The move is being celebrated by big tech companies, startups and much of the VC establishment, who feared the bill would stifle innovation.
“I take seriously the responsibility to regulate this industry,” Governor Newsom wrote in a letter to California’s State Senate explaining his decision. “[But] I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
With regulation effectively back to the drawing board, the bill’s opponents are wasting no time laying out their vision of AI legislation. Meanwhile, proponents of the bill vow to fight on.
AI regulation focused on high-risk industries
Bobby Franklin, president and CEO of the National Venture Capital Association, called SB 1047 “a misguided bill that would have had a chilling effect on AI research and development.” He argued that broad regulation like SB 1047 would hinder rather than help companies.
Instead, lawmakers should focus on regulating specific applications in high-risk industries, Franklin said. This includes verticals like healthcare, where privacy and accuracy are vital to an individual’s well-being. AI is also creeping into critical utilities like grid infrastructure management for towns and cities, a sector where any error could be catastrophic.
Sweeping legislation like SB 1047 diverts attention away from specific areas where regulation truly matters and hurts the entire venture ecosystem, according to Franklin.
“Broad policies harm the flywheel effect that drives innovation and growth,” he said.
Protection for smaller players
The bill, proposed by state Sen. Scott Wiener—whose district encompasses San Francisco and some of its outlying areas—would have mandated safety audits and imposed civil liabilities on the developers of large language models, the technology underpinning generative AI applications. SB 1047 also mandated the implementation of an emergency “kill switch” that could turn off a model when necessary.
The legislation applied only to models that cost more than $100 million to train or those fine-tuned at a cost of more than $10 million.
These requirements drew the ire of big tech companies and the open-source community alike.
Both were concerned about the additional costs of compliance, while startups using open-source LLMs were wary of legal liabilities that could have complicated their operations.
Meta, the company behind the popular Llama open-source model, applauded Governor Newsom’s veto. The bill “would have broken the state’s long tradition of fostering open-source development,” a spokesperson told PitchBook in a statement.
Arpan Shah, a partner at Pear VC who had previously voiced opposition to the bill, said that SB 1047 favored larger companies that could afford to comply while disadvantaging smaller startups working with open-source models. Any regulation, he said, has to be fair to model developers big and small.
“This opens the door to more thoughtful discussions and a broader-based consensus that helps enable open-source to flourish alongside closed models... both of which are needed for a healthy ecosystem,” Shah said.
Avoid regulating the unknown
Some opponents argued that SB 1047’s greatest danger was regulating generative AI too early in its lifecycle. Investors like NFX partner James Currier said that much of what animated the bill was fear of the unknown. He suggested that addressing regulatory needs as they emerge is better than legislating out of fear.
“The hardest thing for people to hear is that we don’t know—it’s okay that we don’t know, but we need to keep thinking deeply,” he said. “Let’s not pretend we do know prematurely.”
But proponents of SB 1047 say the bill would have at least provided a starting point for regulatory change that could have been incrementally improved.
According to Peter Guagenti, president of Tabnine, a developer of AI models for code completion, the veto only kicks the can further down the road. The longer the wait, the more harm will come to the entire ecosystem, Guagenti said.
“It’s a lot easier to make smaller regulatory changes early in an industry and iterate,” Guagenti said. “We’re just going to have to deal with some much more dramatic, larger change at some point in the future that hurts your profits and your business way more than smaller changes over time.”
State Sen. Wiener said the veto represents a missed opportunity to take the lead in AI safety. In a statement, he vowed to fight on:
“The veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public,” Wiener wrote. “California will continue to lead in that conversation—we are not going anywhere.”
Featured image by Thomas Winz/Getty Images