The fallout from DeepSeek's release of its R1 reasoning model is still roiling the industry, with the Chinese AI company at the forefront of most VC and AI industry conversations.
Chipmaker Nvidia's share price is still around 15% below the all-time high reached in early January, before DeepSeek revealed its model. At the same time, concerns over the valuations of foundational model companies like OpenAI and Mistral AI linger.
So far, much of the narrative around R1 has been focused on how the low-cost, more efficient model threatens to undermine the billions of dollars that investors have plowed into the foundational segment—$15.7 billion globally in the first three quarters of last year alone, according to PitchBook data.
But while DeepSeek’s technology is undoubtedly bringing significant disruption to rival large language models (LLMs), it also unlocks enormous opportunities for the application layer that could catalyze a new wave of AI startups.
Democratizing the AI landscape
AI application startups, which offer products and services built on existing AI models, are raising substantial funding. Offering fast paths to revenue, greater scalability and lower capital requirements, they were already positioned to attract greater VC appetite this year. The arrival of DeepSeek is expected to turbocharge that growth.
Peter Wagner, founding partner of Wing Venture Capital, wrote in a blog post, "Market dynamics are changing, with applications and software layers gaining prominence. Investors may see the potential for higher margins and more sustainable value in the application layer, where AI can be directly integrated into business processes and user-facing products."
One of the main challenges facing AI application startups is the high cost of accessing or running LLMs: they either pay substantial API access fees or must build up massive computing power to train proprietary models. Because it is substantially cheaper, DeepSeek will lower the barriers to entry for new startups.
DeepSeek's models are designed to perform at the same level as other advanced LLMs while requiring less computational power. That efficiency translates into lower operational costs for startups running AI applications at scale, enabling them to bring AI-powered products to market more quickly.
Smaller startups with a fraction of the resources of tech giants and incumbents will be able to compete meaningfully and invest more capital in other areas like product development or customer acquisition.
The benefits are even clearer for startups outside the US. European AI startups, for example, often cite a lack of capital as the main barrier to scaling. Using a cheaper model levels the playing field for those with less access to funding.
Another key advantage of DeepSeek's model is that it is open source, allowing startups to customize it with their own data for specific use cases.
This is particularly beneficial for vertical AI startups—companies building products for specific industries—which can tailor applications with greater precision, making them more relevant to customers.
A boon for returns
DeepSeek’s ability to lower costs and improve efficiency for AI application startups fundamentally shifts the economics of AI entrepreneurship, making it even more attractive for VCs.
With a more affordable AI model, startups can achieve similar performance at a lower cost, meaning VCs can deploy capital more efficiently when they invest. Milestones can be reached with less funding, reducing dilution for early investors and increasing the likelihood of strong exit multiples.
A shorter time to market also means revenue arrives sooner, which in turn should improve investors' overall return potential and increase appetite for investments in AI applications.
Lower barriers to entry also create more investment opportunities. Faster product development cycles and lower burn rates should translate into higher valuations that are advantageous for early investors.
“Startup margins will surge,” Theory Ventures general partner Tomasz Tunguz explains in his blog. “As AI performance per dollar skyrockets, startup economics will fundamentally improve. Products become smarter while costs plummet. Following Jevons Paradox, this cost reduction won’t dampen demand—it’ll explode it.”
Of course, a more democratized landscape could crowd the field, making it difficult for startups to build a defensible business. As Hustle Fund general partner Elizabeth Yin writes on her blog, "If 200 startups are building the same AI-powered tool, it's hard for one to achieve dominance—and hard for a VC to get a 100x return."
There are also concerns about DeepSeek’s security and how much of the data it collects is accessible to the Chinese government. An evaluation by AI compliance company LatticeFlow AI found that the model has significant cyberattack vulnerabilities.
Even with the security risks, startups are considering switching to R1. AI voice generator ElevenLabs announced that it has integrated the model into its products, and AI video communications platform Synthesia is currently experimenting with the technology.
The global AI race has now turned to efficiency, and moving forward, AI models will have to become more cost-effective to compete.
Researchers at Stanford and the University of Washington published a paper showing they were able to train an AI reasoning model for under $50 worth of prepaid cloud compute credits.
While foundational model companies battle it out for dominance, AI application startups are poised to grow ever faster, hopefully with stellar results for their backers.