OpenAI CEO Sam Altman has become a familiar face on Capitol Hill.
He took the spotlight in a highly publicized hearing in front of the Senate Judiciary Committee earlier this month. The evening before, he had hosted a closed-door dinner with some 60 lawmakers, several of whom were quick to praise his demonstrations of generative AI’s capabilities.
Much more quietly, several other AI startups—as well as some major US VCs—are flexing their muscles on Capitol Hill, buoyed by the wave of legislators’ scrutiny of generative AI.
Just last week, two leading generative AI startups—Stability AI, the company behind the text-to-image tool Stable Diffusion, which is reportedly raising a Series A at a $4 billion valuation, and Hugging Face, which is angling to become the “GitHub of AI”—disclosed new contracts with Washington, DC, lobbying firms. Franklin Square is advising Stability AI and Invariant is advising Hugging Face, according to lobbying disclosures.
Anthropic, a prominent competitor of OpenAI, signed with boutique lobbying firm Aquia Group in January of this year and spent $70,000 on lobbying in Q1, according to disclosures filed by Aquia Group and Capitol Hill consultancy Tower 19.
“The founder of Anthropic has been here quite a bit and has done a number of things with the administration,” said Adam Kovacevich, CEO of Chamber of Progress, a center-left tech industry policy and lobbying organization. “I think everyone’s kind of dipping their toes in the water.”
Dissenting opinions
Generative AI companies and their VC backers aren’t all aligned on what they want policy to look like. Some have called for a six-month moratorium on AI development. Others see that development as a moral necessity.
OpenAI and Anthropic both fall into the camp that calls for substantial guardrails on the technology and have been vocal about its existential risks. Anthropic’s chief executive Dario Amodei, as well as Altman and Demis Hassabis, CEO of Google’s DeepMind, recently signed a statement saying mitigating existential threats posed by generative AI “should be a global priority.”
Another camp is much more optimistic on the benefits of generative AI. Andreessen Horowitz founder Marc Andreessen, who wrote recently that large AI companies “should be allowed to build AI as fast and aggressively as they can,” has gone so far as to argue that stifling generative AI technology would go against US national security interests. Some in this camp have accused figures like Altman of being motivated by self-interest, as AI guardrails could make it nearly impossible for new entrants to develop competitive large language models, essentially entrenching the existing players.
Mounting influence
Altman is following a similar strategy to influential tech leaders like Microsoft’s Brad Smith, currying favor by positioning himself as a go-to expert and investing significant face time with policymakers.
VC heavyweights are also increasingly expanding their influence in Washington beyond membership in the National Venture Capital Association, the largest trade association representing the industry. Sequoia and a16z have both built out their in-house policy teams in recent years.
“If they’re trying to do it sort of the Silicon Valley way, they’re trying to probably have more personal conversations and more personal relationships,” said Jason Schniederman, a Perkins Coie attorney, who said he’s noticed more activity from VCs and startups jostling for a seat at the table on Capitol Hill.
Lobbying politicians is already par for the course for AI companies developing defense tech applications. Autonomous drone startup Shield AI and generative AI data firm Scale AI have spent $4 million and $1.5 million, respectively, on lobbying at the federal level through several lobbying firms, according to disclosures.
And VC firms with a stake in cryptocurrency regulation, like a16z, were also already present on the Hill but have added AI regulation as another topic of interest.
The regulatory landscape is still in its early days. On Wednesday, New York Senator Chuck Schumer announced a sweeping plan to scrutinize generative AI, including nine panels to answer tough policy questions ranging from national security to existential human risk. The panels, which would begin holding sessions in September and include industry representatives, are intended to inform senators as they develop a concrete policy framework.
“What [VC firms have] seen,” said Schniederman, “is if they aren’t aggressively proactive on these really revolutionary technologies, it can preclude them from being the thought leader of the space or ruin their portfolio investments in some sense.”
Correction: This article was updated to correct the spelling of Aquia Group.