California’s AI safety bill has proven the axiom that politics makes strange bedfellows.
Former US House Speaker Nancy Pelosi, US Rep. Ro Khanna (D-Calif.), Andreessen Horowitz and OpenAI, to name a few, are all united in opposition to California’s SB 1047, first-of-its-kind legislation that would impose safety mandates on the large language models that underpin generative AI applications like OpenAI’s ChatGPT and Anthropic’s Claude.
Proponents of the bill include Anthropic, Elon Musk and the Center for AI Safety, as well as academic researchers Geoffrey Hinton and Yoshua Bengio, who have been dubbed the “godfathers of AI.” They contend that the bill offers a “light touch” and that it’s better to get ahead of the technology than wait for something bad to happen.
SB 1047 has laid bare the divisions within the AI community. OpenAI’s CEO and co-founder Sam Altman opposes the bill despite advocating for AI regulation at the federal level. The bill passed shortly before a $1 billion round for Safe Superintelligence, the new AI safety startup founded by former OpenAI researchers Ilya Sutskever and Daniel Levy.
These are the legislation’s major sticking points: a mandate that companies training large language models costing more than $100 million develop and implement safety audits and build a shut-off switch for their models. The mandate also applies to models that cost $10 million or more to fine-tune. Most controversially, the bill imposes civil liability on developers for harm their models cause.
The bill sailed through the state Legislature and now sits on the desk of Gov. Gavin Newsom, who could still veto it amid a mounting pressure campaign.
I spoke to AI investors to understand the opposition to SB 1047, which some venture investors and founders see as premature, unscientific and part of a “doomer” agenda to limit tech.
Too soon
Pear VC partner Arpan Shah told me he’s not opposed to regulating AI, but thinks it’s a mistake to do so now because generative AI is still changing rapidly.
“I feel bad for the regulators. They are well-intentioned,” he said. “But it’s hard to be specific when the ecosystem is evolving. We don’t know all the technologies. We don’t know all the mechanisms.”
Shah, whose firm has invested in startups like the AI code testing company Copybara, likened the current AI environment to a moving target, where innovation is changing the landscape so quickly it’s hard to know where the line should be drawn. He said much of the fear motivating SB 1047 stems from an overestimation of what AI can do and from use cases that don’t exist today.
What does worry him is how the bill will impact the VC community’s ability to invest in the startups developing and tweaking large language models.
“The calculus is not that these startups become not fundable,” Shah said. “The calculus is that only so many of these startups can be funded because the cost to run these businesses will be higher—it leads to more entrenchment of incumbents.”
NFX partner James Currier, whose investments include the AI photo editing startup ImagenAI, also believes the cost of compliance is a burden that larger companies can shoulder, but not startups.
“Everyone feels like regulation is the responsible thing to do, but in fact what it’s doing is solidifying Google and Facebook and Amazon and Microsoft’s ability to dominate this market going forward,” he said.
A better way to regulate AI, according to Sierra Ventures partner Vignesh Ravikumar, whose portfolio includes the AI customer relations platform Balto, is to allow domain-specific governing bodies to set the rules, like the SEC or Treasury Department.
While the law doesn’t change how Ravikumar invests, he is concerned about AI being unleashed in verticals like fintech, where harmful practices like predatory lending could be turbocharged. He said rules that are too broad-based impede startups looking to innovate.
“If you can find a way to protect against the Skynet scenario, great. But I think when you start getting too deep into the nitty gritty, it just starts to add a lot of overhead for companies just trying to leverage and build with this technology.”
The Skynet scenario
The fear of AI becoming a malevolent overlord is a huge force driving SB 1047 forward. It’s also disingenuous in the eyes of many opponents.
Sharon Zhou, co-founder and CEO of the model fine-tuning startup Lamini, said that anxiety surrounding AI is overblown and isn’t a valid reason to regulate the landscape.
“Honestly, I find it a little bit silly that people think this thing can become God. This is technology. We do know how it works,” Zhou said. She told me the way the bill has been framed to lawmakers has been dishonest and part of a “doomer” agenda.
“One-hundred percent from the beginning fearmongering has been a part of this,” she said.
Because Lamini works exclusively with open-source models, fine-tuning them for enterprise clients, Zhou said SB 1047 introduces liability and roadblocks that would make operating her startup more difficult. She called the legislation “an existential threat” to its survival.
According to the bill, that liability rests solely with companies like Meta, which develops and distributes the popular open-source model Llama. Lamini works closely with Meta’s models, yet Zhou still fears punishment for her startup’s role in tweaking and fine-tuning large language models.
The bill’s author, California Sen. Scott Wiener, a Democrat, said at a press conference that startups like Lamini would not be held liable for using open-source models.
Meaningless math
Several investors and founders also told me that the regulatory framework SB 1047 would establish is flawed and unscientific.
Ion Stoica, who co-founded Databricks and the AI infrastructure startup Anyscale, said the bill makes all kinds of incorrect suppositions, particularly by tying development costs to model capability.
“It’s not anchored in science,” he said. “It basically equates to the fact that if you have something dumb, it’s safer, and that makes little sense. In what industry does it cost less to build safer artifacts? I mean, look what happened with Boeing!”
Stoica pointed out that development costs could come down, rendering the bill’s monetary thresholds meaningless. More worrisome to him is what the bill could mean for academic research, which ultimately provides the basis for new technologies and startups.
A professor at the University of California, Berkeley, Stoica cautioned that the bill could trigger a brain drain of talent away from California. The result could be the US giving up the technological advantage it has built.
“If you introduce a little bit of friction, then that’s enough to change the equation. There’s a probability that you are going to lose the edge we have,” he said. “That edge—we are not far away from losing it.”
Correction: An earlier version of this article incorrectly spelled Geoffrey Hinton’s name. (Sept. 9, 2024)
Featured image by Jenna O’Malley/PitchBook News