In Part 1, we explored growth benchmarks for high-liquidity private SaaS companies. This time, let’s shift our focus to the impressive yet complex growth of liquid private AI companies operating at the Model and Application layers.
These five companies exhibit high liquidity in the secondary market, trading at premiums compared to their primary rounds, and demonstrate hyper-growth in valuations.
The revenue growth figures for AI companies are truly remarkable.
Of course, evaluating growth in isolation isn't ideal, but these companies typically don’t disclose much beyond basic numbers. Investors still face a lot of uncertainty when making these bets. Some argue that for strategic investors—particularly Big Tech, which drives these sky-high valuations—specific numbers aren’t as crucial. They operate under the mindset that “overinvesting is better than underinvesting”.
For other investors, Martin Peers likens investments in companies like OpenAI to a gamble. In The Briefing newsletter, he wrote, "OpenAI’s losses aren’t going away anytime soon. And even assuming it eventually makes money, we have no idea what its profit margins will look like or how much OpenAI will need to raise before it gets to the break-even point. So we have no idea what the likely return on investment is."
Indeed, investors face a lack of transparency, rapidly evolving business models, unpredictable costs, shifting consumer behaviour, fierce competition—you name it. This makes assessing the true potential and long-term value of these companies exceptionally challenging.
Moreover, growth in AI doesn't follow traditional linear or exponential trajectories. Instead, AI products tend to grow in S-curves. As Andreas Goeldi explains: "Humans struggle to intuitively grasp S-curves, as they are rare in nature. Overlapping S-curves, which influence each other, are even harder to understand. The AI industry is currently witnessing a chaotic mix of different S-curves that overlap, move at varying speeds, and either amplify or cancel each other out."
Still, we want to make sense of available data and dig deeper into this growth.
Where Projected 2024 Revenue is Headed and the Role of Key Revenue Streams
1. Access to Models
For model builders, access to models remains the largest revenue driver. However, fierce price competition is making this business model increasingly precarious. Aidan Gomez, CEO of Cohere, highlighted this issue, stating that selling access to models is quickly becoming a "zero margin business." On a recent episode of the 20VC podcast, Gomez explained that while demand for AI models is growing, the financial returns aren’t keeping up due to aggressive price cuts.
“If you’re only selling models, for the next little while, it’s gonna be a really tricky game,” said Gomez. OpenAI, Anthropic, Google, and Cohere all sell API access to their AI models, but they face the same challenge: price dumping. "It’s gonna be like a zero-margin business because people are giving away the model for free. It’ll still be a big business... but the margins, at least for now, are gonna be very tight."
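To make the "zero-margin" dynamic concrete, here is a minimal sketch of how gross margin on API access compresses as list prices are cut toward serving cost. The dollar figures are purely hypothetical assumptions for illustration, not any provider's actual pricing or costs:

```python
def api_gross_margin(price_per_1m_tokens: float,
                     serving_cost_per_1m_tokens: float) -> float:
    """Gross margin on model-access revenue, as a fraction of the list price."""
    return (price_per_1m_tokens - serving_cost_per_1m_tokens) / price_per_1m_tokens

# Hypothetical: serving cost stays at $1 per 1M tokens while list prices keep falling
for price in (10.0, 5.0, 2.0, 1.2):
    print(f"${price:>5.2f} per 1M tokens -> {api_gross_margin(price, 1.0):.0%} gross margin")
```

Under these made-up numbers, margin collapses from 90% to under 20% purely through price cuts, which is the squeeze Gomez is describing.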
2. B2C Applications
Consumer-facing AI applications present another tough revenue stream, as most users aren’t willing to pay for the service. A poignant example from a Roon tweet illustrates this well:
“The average price of a Big Mac meal, which includes fries and a drink, is $9.29. For two Big Mac meals a month, you get access to incredibly powerful machine intelligence capable of high-tier programming and PhD-level knowledge.”
This highlights the disparity in value—consumers are getting massive value for minimal cost. As of now, 99% of the value generated by large language models (LLMs) is captured by consumers, while companies struggle to monetize effectively. As Roon points out, “What OpenAI, Anthropic, and Google are doing is as close as you can get to giving away the product for free.” The challenge here is determining how consumer willingness to pay will evolve and how players will reduce the cost structure to make this revenue stream sustainable.
3. Enterprise Applications
Enterprise applications represent the most exciting revenue stream, with immense growth potential. The competition here is fierce as private AI companies contend with established Big Tech players and SaaS incumbents, who already have deep customer relationships. Everyone is eagerly awaiting clearer calculations on the ROI of enterprise AI solutions. This space holds the most promise for significant revenue, as companies look to integrate AI to boost productivity and streamline operations.
Next, let’s turn to adoption and engagement metrics at the application level
The AI sector lacks a standardized set of metrics to measure adoption. Different companies and investors use a variety of engagement metrics to evaluate the traction of their products:
OpenAI, August 2024: ChatGPT now has more than 200 million weekly active users. That's twice as many users as OpenAI reported less than a year ago.
Character.AI: Andreessen Horowitz (a16z), the lead investor, reported in March 2024 that users average 300 sessions per month and spend 2 hours a day in Character.AI (likely measuring engagement for power users, since the figures don’t match Similarweb data)
Glean, September 2024: Glean Assistant users average 5 queries per day, about as often as people typically search the web with Google. Across Glean’s customer base, the average DAU/MAU ratio is ~40% (again, likely for power users)
Perplexity, August 2024: 230M+ queries a month
Microsoft, July 2024: The number of people who use Copilot for Microsoft 365 daily at work nearly doubled quarter over quarter.
We can standardize these metrics, at least for B2C products:
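One way to put these disparate disclosures on a common footing is to normalize them into a small set of shared engagement measures: DAU/MAU stickiness, sessions per user per month, and minutes spent per month. A minimal sketch, using hypothetical placeholder inputs rather than the reported figures above:

```python
from dataclasses import dataclass

@dataclass
class EngagementSnapshot:
    name: str
    mau: float                 # monthly active users
    dau: float                 # daily active users
    sessions_per_month: float  # avg sessions per active user per month
    minutes_per_day: float     # avg minutes per active user per day

    @property
    def stickiness(self) -> float:
        """DAU/MAU ratio: the share of monthly users who show up on a given day."""
        return self.dau / self.mau

    @property
    def monthly_minutes(self) -> float:
        """Rough time spent per active user per month."""
        return self.minutes_per_day * 30

# Hypothetical placeholder numbers, NOT the figures reported above
products = [
    EngagementSnapshot("Chat assistant", mau=200e6, dau=60e6,
                       sessions_per_month=40, minutes_per_day=8),
    EngagementSnapshot("Companion app", mau=20e6, dau=4e6,
                       sessions_per_month=300, minutes_per_day=120),
]

for p in products:
    print(f"{p.name}: DAU/MAU={p.stickiness:.0%}, "
          f"{p.sessions_per_month:.0f} sessions/month, "
          f"~{p.monthly_minutes:.0f} min/user/month")
```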
Indeed, Character.ai has outstanding engagement.
And compare the audiences of Enterprise products:
Monetization metrics
Now this is an exciting one. Players are trying to raise prices dramatically right now, and we will soon see what the real willingness to pay is:
Metrics for now:
Character.ai again shows outstanding performance; this time it appears to be undermonetized.
Enterprise products now cost $30-60 per seat per month.
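To compare monetization across such different models (consumer subscriptions vs. per-seat enterprise pricing), a simple common denominator is revenue per monthly active user, or contract value per seat. A minimal sketch: the per-seat range is the $30-60 cited above, while the consumer revenue and user counts are purely hypothetical assumptions:

```python
def arpu(annualized_revenue: float, mau: float) -> float:
    """Average revenue per monthly active user, per month."""
    return annualized_revenue / 12 / mau

def enterprise_acv(seats: int, price_per_seat_month: float) -> float:
    """Annual contract value for a per-seat enterprise product."""
    return seats * price_per_seat_month * 12

# Hypothetical consumer product: $200M annualized revenue, 50M MAU
print(f"Consumer ARPU: ${arpu(200e6, 50e6):.2f}/user/month")   # ~$0.33

# Enterprise deal at the $30-60 per seat per month range cited above
for price in (30, 60):
    print(f"1,000 seats at ${price}/seat/month -> ${enterprise_acv(1000, price):,.0f} ACV")
```

Even with made-up consumer numbers, the gap between cents per user on the B2C side and hundreds of thousands of dollars per enterprise contract shows why per-seat pricing is where monetization pressure is concentrated.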
Last, but probably most important: spending metrics
Unlike SaaS, where spending is primarily driven by predictable acquisition and labour costs, AI spending is both crucial and highly unpredictable. AI chatbots don’t enjoy the same economies of scale that make traditional software profitable. Each query incurs costs for generating a response, known in AI as inference. The newly released OpenAI o1 model, for instance, is expected to be even more costly to run, as it requires more complex reasoning for each user query.
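Because each query carries an inference cost, per-subscriber gross margin depends directly on usage and on cost per query. A back-of-the-envelope sketch with entirely hypothetical cost and usage assumptions (the $20/month price is roughly the going rate for consumer AI subscriptions; none of the cost figures are disclosed numbers):

```python
def monthly_gross_margin(subscription_price: float,
                         queries_per_day: float,
                         cost_per_query: float) -> float:
    """Gross margin per subscriber per month, ignoring fixed costs.

    All inputs are hypothetical assumptions, not disclosed figures.
    """
    inference_cost = queries_per_day * 30 * cost_per_query
    return subscription_price - inference_cost

# A $20/month subscriber at different usage and per-query cost levels
for queries, cost in [(10, 0.01), (50, 0.01), (50, 0.05)]:
    margin = monthly_gross_margin(20, queries, cost)
    print(f"{queries} queries/day at ${cost}/query -> ${margin:+.2f}/month")
```

The third case illustrates the point about models like o1: if heavier reasoning pushes cost per query up while usage stays high, a flat subscription can flip from profitable to deeply negative for the heaviest users.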
Analysts are trying to estimate AI spending: The Information suggests OpenAI could be spending $5 billion a year and Anthropic around $2.7 billion. However, these are short-term guesses at best.
The real revolution lies in methods for reducing computation costs—through both algorithmic improvements and hardware acceleration. This is where the focus should be moving forward.
As competition intensifies in the AI applications space, we expect companies to develop more transparent metrics for computation costs and make this data more accessible to investors and stakeholders.