Yesterday I decided to dig into CoreWeave’s S-1. When the company called me in 2022, I’d shrugged them off. After a few of my colleagues ended up joining the company over the last few months, I’d grown more curious to learn what they’re about…
I broke down my analysis into the following parts:
- Financial Health
- Market, incl. Opportunity and Growth Drivers
- Operating Plan
- Leadership & Team
- Risks
1. Financial Health
On the positive side, CoreWeave’s (CRWV) business offers a window into the economics of committed multi-year cloud contracts. Upfront and recurring payments from customers can provide a nice, steady cash stream over multiple years. CRWV has used these customer contracts as collateral to raise more capital at lower cost. Lower cost of capital has allowed them to achieve economies of scale to a certain degree. But they’re running out of headroom.
Incumbent hyperscalers used cash flows from their existing in-house businesses to fund their cloud infrastructure build-out. CRWV has used contracts with ‘investment grade’ or ‘IG’ customers (e.g. Microsoft) to raise debt to fund their infrastructure build-out. The company achieved nearly $2B in revenue in 2024 while carrying $8B in debt, with financial covenants that limit diversification of their customer base, because lenders want the customers backing those contracts to be primarily IG. Their leverage ratio is about 5.4x, and can be no higher than 6.0x under certain credit facilities. And each 100bp change in interest rates would increase their interest expense by $31M (about 9% of their net interest expense in 2024). The table below breaks down their current debt. It’s a sight to behold.
| Debt Instrument | Drawn ($M) | Available ($M) | Effective Interest Rate | Covenants |
|---|---|---|---|---|
| DDTL 2.0 Facility | 3,787 | 7,600 | 11% | Coverage ratio of 1.4x or more; non-IG collateral and revenue ratio of 0.35 or less; min liquidity of 2% of loan principal (after IPO: greater of $25M or 1% of loan principal) |
| DDTL 1.0 Facility | 1,976 | 2,300 | 15% | Min liquidity of $19M or 4% of loan principal (after IPO: $25M or more) |
| Term Loan Facility | 985 | 1,000 | 12% | Min $1B in contracted revenue (1x for IG customers, 0.75x for non-IG customers); max 6x leverage ratio |
| Revolving Credit Facility | - | 650 | NA | Min $1B in contracted revenue (1x for IG customers, 0.75x for non-IG customers); max 6x leverage ratio |
| OEM Financing | 1,177 | | 9-11% | |
| Total | 7,925 | 11,550 | | |
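Since the table invites some arithmetic, here’s a rough back-of-envelope sketch in Python. The weighting and run-rate framing are my own assumptions (the S-1 only gives the rates, balances, and the $31M-per-100bp sensitivity), so treat the outputs as illustrative rather than as CRWV’s actual interest expense:

```python
# Back-of-envelope on the debt table above -- my arithmetic and assumptions, not CRWV's.
# Where a rate range is given (OEM financing at 9-11%), I take the midpoint.
drawn = {  # instrument: (drawn $M, effective interest rate)
    "DDTL 2.0 Facility": (3_787, 0.11),
    "DDTL 1.0 Facility": (1_976, 0.15),
    "Term Loan Facility": (985, 0.12),
    "OEM Financing": (1_177, 0.10),
}

total_drawn = sum(amount for amount, _ in drawn.values())
weighted_rate = sum(amount * rate for amount, rate in drawn.values()) / total_drawn

# Treating the current drawn balance as a full-year run rate (2024 actuals were
# lower because the debt was drawn over the course of the year).
annual_interest_run_rate = total_drawn * weighted_rate

# The S-1's stated sensitivity (+100bp ~ +$31M) implies roughly this much
# floating-rate exposure.
implied_floating_exposure = 31 / 0.01

print(f"Total drawn: ${total_drawn:,}M")                       # $7,925M
print(f"Weighted-average effective rate: {weighted_rate:.1%}")  # ~12.0%
print(f"Implied annual interest at full drawn balance: ~${annual_interest_run_rate:,.0f}M")
print(f"Implied floating-rate exposure: ~${implied_floating_exposure:,.0f}M")  # ~$3,100M
```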
As public markets invite more overhead and constraints, IPOs have become a last resort for healthy, growing, profitable companies (exhibits: Stripe, Databricks). High debt levels in uncertain economic times are risky, which makes the timing of this CRWV IPO all the more interesting. If further debt is financially untenable, an IPO would indeed be their next best option. Is it also possible that encumbering the company with further onerous private equity terms might strangle its IPO potential forever?
2. Market
CRWV cites Bloomberg Intelligence in sizing their market opportunity for “AI” workloads at about $400B, across training, fine-tuning, and inference. They see their opportunity across the stack as defined by ‘traditional’ hyperscalers, including Infrastructure, Managed Services, and Application Services. Infrastructure accounts for a majority of their revenue today, and includes compute and networking resources (comparable with Amazon EC2). Managed Services provide K8S-based services (comparable with Amazon EKS/ECS). Application Services include SUNK (Slurm on K8S), Tensorizer, and Inference/Optimization (comparable with Amazon SageMaker and Bedrock).
Opportunity: “Purpose built” for AI is their stated differentiation, citing delivery of more compute cycles for higher performance than ‘generalized’ cloud providers. However, their primary exhibit is from June 2023:
In June 2023, our NVIDIA H100 Tensor Core GPU training cluster completed the MLPerf benchmark test in 11 mins, 29x faster than the next best competitor at the time of the benchmark test
They cite the difference between observed MFU (Model FLOPS Utilization) of 35-45% and the theoretical 100% as a significant opportunity for unlocking AI infrastructure performance and, by extension, improving the quality of AI overall. While general purpose clouds were created over a decade ago and designed for general purpose use cases such as search, e-commerce, generalized web hosting, and databases, they claim to have reimagined traditional cloud infrastructure for AI. Their primary exhibit here is the removal of the hypervisor layer (and other unnecessary managed services that cause performance leakage). They claim this allows them to deliver 20% higher MFU than general purpose clouds. In my experience, most sophisticated customers can achieve 40-50% MFU, as they’re fairly capable of optimizing every dimension of their training environment… but who am I to say?
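For reference, MFU has a fairly standard definition: observed training throughput converted into FLOPS (roughly 6 FLOPs per parameter per token for dense transformers) divided by the cluster’s theoretical peak. Here’s a minimal sketch with hypothetical numbers, just to make that 35-45% band concrete:

```python
# Minimal MFU (Model FLOPS Utilization) sketch, using the common approximation of
# ~6 FLOPs per parameter per token for dense transformer training.
# All numbers below are hypothetical, just to make the 35-45% range concrete.

def mfu(params: float, tokens_per_sec: float, num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Observed training FLOPS as a fraction of the cluster's theoretical peak."""
    observed_flops = 6 * params * tokens_per_sec   # forward + backward passes
    peak_flops = num_gpus * peak_flops_per_gpu
    return observed_flops / peak_flops

# Hypothetical 70B-parameter run on 1,024 H100s (~989 TFLOPS peak dense BF16 each)
print(f"MFU: {mfu(params=70e9, tokens_per_sec=1.0e6, num_gpus=1024, peak_flops_per_gpu=989e12):.1%}")
# -> roughly 41%, i.e. squarely inside the observed 35-45% band
```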
They add that delivering this performant AI infrastructure is immensely challenging, which I totally believe. However, I found little in their description that suggests how it’s different or ‘reimagined’ from traditional cloud infrastructure:
Depending on the configuration of the data center, a single 32,000-GPU cluster may require the deployment of approximately 600 miles of fiber cables and around 80,000 fiber connections. Acquiring the necessary high-performance components requires managing a complex global supply chain, and configuring and deploying those components in data centers requires deep operational experience. The data centers themselves need to be specifically designed for high-performance compute, which requires specialized heat management capabilities such as liquid cooling, with heat exchangers and subfloors to support high density racks and high power supply per rack.
Growth Drivers: Two dimensions have definitely worked in their favor. One is speed, the other is power.
Doing as little as possible is a brilliant way to be fast. They source GPUs exclusively from Nvidia. OEMs assemble the systems for them, and they ‘deploy the newest chips in our infrastructure and provide the compute capacity to customers in as little as two weeks from receipt from our OEM partners such as Dell and Super Micro.’ They also lease their data centers. They do automate provisioning and testing of the infrastructure, ‘ready for consumption within hours of customer acceptance.’
In short, CRWV acquires the financing and project-manages the conversion of Nvidia’s GPUs into a usable product: data center compute with power. There’s plenty of evidence that demand for compute from foundation model builders has far outstripped supply these last couple of years, while hyperscalers have been power constrained. CRWV has helped Nvidia shrink that gap between demand and supply by finding scarce power.
The key question is: Will CRWV build sufficient value faster than the industry closes this gap?
On a slight tangent… in addition to rapidly deploying Nvidia’s GPUs and getting these into the hands of end customers faster, CRWV also helps Nvidia diversify their customer base, presumably beyond the incumbent hyperscalers?
This raises another question: Will CRWV be the undifferentiated cloud providing Nvidia’s GPUs while the other three differentiate by vertically integrating with their own chips?
3. Operating Plan
Business Model: CRWV’s average cash payback period on their GPU infrastructure is expected to be about 2.5 years. Assuming contracts continue to range from 2-5 years as they do today, they expect to profit from the residual value of the GPUs beyond the contract length. This assumes the GPUs are still worth using beyond the contract life and haven’t died from inherent failures and high-pressure workloads. Anyway, the faster the GPUs depreciate, the faster they can be replaced with newer, higher performing ones, and the cycle repeats. It’s not clear that this is a good business model in itself, unless there’s more value for customers. This explains the push for Managed and Application services, especially if these can be designed for AI workloads rather than redesigned to accept them. The strategy makes sense, but their execution story gets thinner further up the stack.
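To make the payback math concrete, here’s a toy single-server model. All inputs are made up (the S-1 only provides the ~2.5-year payback figure), so it’s a sketch of the mechanics, not their actual economics:

```python
# Toy unit economics for a single GPU server. Every number here is invented --
# the S-1 only states the ~2.5-year average cash payback -- so this illustrates
# the shape of the bet, not CRWV's actual economics.
capex = 250_000                    # $ per server, illustrative
annual_contract_revenue = 140_000  # $ per year under a committed contract
annual_operating_cost = 40_000     # power, colocation, ops; illustrative

annual_cash_margin = annual_contract_revenue - annual_operating_cost
payback_years = capex / annual_cash_margin
print(f"Cash payback: {payback_years:.1f} years")  # 2.5 years

# The residual-value bet: a 4-year contract leaves ~1.5 years of 'paid-off'
# capacity -- if the GPUs are still healthy and still worth renting by then.
contract_years = 4
margin_after_payback = max(contract_years - payback_years, 0) * annual_cash_margin
print(f"Margin earned after payback, within the contract: ${margin_after_payback:,.0f}")  # $150,000
```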
More interesting than the seemingly lucrative committed contracts (96% of revenue in 2024) is the quality and nature of their customer base: their top customer, Microsoft, accounted for 62% of revenue in 2024. Payments from Microsoft would definitely qualify as IG. Their second largest customer accounted for another 15%. They mention other customers such as Cohere, Mistral, Meta, IBM, Replicate, and Jane Street (also an investor), but if Microsoft drops their commitment, would they be able to maintain at least $1B in contracted revenue (for the financial covenants)? Not to worry! OpenAI is stepping in with a five-year, $11.9B contract, supposedly taking a stake in the company to boot. Maybe this is a better outcome, if Microsoft was just a front for OpenAI?
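The covenant math is worth writing down, because it shows how the debt terms cap diversification. Below is a hypothetical check with an invented customer mix; the 1.0x/0.75x weighting and the 0.35 non-IG ratio come from the debt table above:

```python
# Hypothetical check of the contracted-revenue covenant from the debt table:
# at least $1B of contracted revenue, counting IG customers at 1.0x and non-IG
# at 0.75x, plus DDTL 2.0's non-IG revenue ratio of 0.35 or less. The customer
# mix below is invented, to show how diversification runs into the covenants.

customers = [
    # (name, annual contracted revenue in $M, investment grade?)
    ("Hyperscaler A", 700, True),
    ("AI lab B",      300, False),
    ("Fintech C",     150, False),
]

weighted = sum(rev * (1.0 if ig else 0.75) for name, rev, ig in customers)
total = sum(rev for name, rev, ig in customers)
non_ig_share = sum(rev for name, rev, ig in customers if not ig) / total

print(f"Covenant-weighted contracted revenue: ${weighted:,.0f}M (needs >= $1,000M)")
print(f"Non-IG revenue share: {non_ig_share:.0%} (DDTL 2.0 wants <= 35%)")
# -> $1,038M clears the floor, but the 39% non-IG share breaches the ratio.
```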
OpenAI expects to make $11.6B in revenue by burning through $14.4B in 2025, some of it presumably going to CRWV now. Such an irrational investment may itself deter others, limiting CRWV’s potential customer base to model builders who feel compelled to lose money for years. Focusing just on OpenAI: if their usage grows per plan, would they take Microsoft’s capacity over CRWV’s? Or does CRWV come into play only if OpenAI grows beyond expectations and Microsoft is unable to meet their demand? Or if Microsoft intentionally wants to diversify its capacity across other model families and hold capacity back from OpenAI?
Combined with the economic uncertainty around interest rates, uncertainty around competitors and customers compounds CRWV’s operating risks. But Nvidia and OpenAI really want this tenuous link to exist, at least for now! The IPO adds some overhead, but likely increases its chances of survival. With the new capital, CRWV plans to:
- Capture more workloads with existing customers, mostly OpenAI’s?
- Expand to more customer segments, though non-IG customers must be limited to 35% of revenue?
- Expand internationally, though it’s unclear if they can repeat their growth playbook in other regions.
- Integrate vertically, though would it make sense for sophisticated customers such as OpenAI to use their vertically integrated services?
CRWV’s biggest advantage is that they have nothing to lose, and no prior services or operations to shut down. They can redefine what cloud infrastructure means for AI workloads. Their vision of specializing around AI could be disruptive, but despite their claims of a unique ‘corporate culture’, it’s unclear if they can realize this advantage in a meaningful way.
4. Leadership & Team
Co-founder, President, and CEO Michael Intrator was previously co-founder and CEO of Hudson Ridge Capital Asset Management LLC, a natural gas hedge fund. Chief Strategy Officer Brian Venturo was also a Partner at the same fund. Both have prior experience overseeing investments in energy products. Their understanding of energy-related risks, their financial engineering, and the hustle to bring it all together in such a short time is quite stunning.
Despite all that financial savviness, the material weaknesses in their financial reporting processes, systems, personnel, and controls don’t inspire confidence.
The material weaknesses identified pertained to the lack of effectively designed, implemented, and maintained IT general controls over applications that support our financial reporting processes, insufficient segregation of duties across financially relevant functions, and lack of sufficient number of qualified personnel within our accounting, finance, and operations functions who possessed an appropriate level of expertise to provide reasonable assurance that transactions were being appropriately recorded and disclosed. We have concluded that these material weaknesses existed because we did not have the necessary business processes, systems, personnel, and related internal controls.
In all fairness, this is fixable. But I suppose this IPO happens now, or there’s a risk it never happens. It’s also convenient for the investors, who would love a timely exit. While the founders have cashed out large amounts over the last couple of years, employees can also get a ride through the IPO, though they’ll be limited by lock-up periods.
At least one investor isn’t limited by lock-up periods: Coatue has a put option that entitles them to receive cash equal to the OIP (Original Issue Price) of their Series C shares plus any accrued dividends. Alternatively, they can choose to sell their converted shares at any time after the IPO without being subject to a lock-up. Their put option expires only if the stock trades at or above 175% of Coatue’s Series C OIP during any consecutive 30-trading-day period, which would allow them to sell their shares at an attractive price. Did I say how convenient this is?
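The expiry condition is mechanical enough to sketch. Here’s how I read it (closing prices, a 30-consecutive-trading-day streak at or above 1.75x the Series C OIP); the OIP and the price path below are placeholders, not real figures:

```python
# Sketch of the put-expiry condition as I read it: the put goes away only if the
# stock stays at or above 175% of Coatue's Series C OIP for 30 consecutive
# trading days. The OIP and price path here are placeholders, not real data.

def put_expires(closing_prices: list[float], oip: float, multiple: float = 1.75, window: int = 30) -> bool:
    threshold = oip * multiple
    streak = 0
    for price in closing_prices:
        streak = streak + 1 if price >= threshold else 0
        if streak >= window:
            return True
    return False

# Hypothetical: an OIP of $40 means the stock must hold $70+ for 30 straight sessions.
print(put_expires(closing_prices=[42.0] * 200, oip=40.0))  # False -- the put stays alive
```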
5. Risks
Not mentioned yet, but CRWV also bears vendor concentration risk, with their top three suppliers accounting for 46%, 16%, and 14% of total purchases. Again, it’s fixable, though they already carry OEM financing liabilities from these existing vendors.
This is already getting long, so let’s set aside what the regulators might throw in.
To summarize: in addition to indebtedness amid economic uncertainty, customer concentration in a fast-evolving market, weak financial controls, and vendor concentration, the company operates in a market with three entrenched, well-capitalized, and fierce competitors. One might say that these traditional hyperscalers see winning AI as an existential risk.
CRWV will certainly be an adventure. Not for the weak-hearted, if you’ll allow me to add…