$11.5B Says The AI Model Isn't The Hard Part
OpenAI and Anthropic just bet $11.5B that the bottleneck for enterprise AI isn't model quality — it's deployment, talent, and the data context layer.
Earlier today, within minutes of each other, OpenAI and Anthropic announced joint ventures with the world's largest private equity and financial institutions. These joint ventures will provide services to deploy AI models by embedding their engineers directly inside the enterprise.
The two most valuable AI labs on the planet have concluded the same thing at the same time: model quality is already good enough, but the capital, talent, processes, and systems needed to make these models useful in the enterprise are severely lacking.
Palantir introduced the Forward Deployed Engineer (FDE); now the AI labs want to industrialize the role. Its model of sending engineers to live inside client organizations, building systems on real data against real workflows, drove Palantir's U.S. revenue up 104% year over year last quarter and led it to raise its 2026 revenue guidance to $7.66 billion.
The FDE model worked for Palantir and its customers. OpenAI and Anthropic are now betting $11.5 billion combined that it works at 100x scale. The question for every enterprise CTO reading this: what does it mean when your AI vendor wants to live inside your org?
OpenAI launched "The Deployment Company" (DeployCo), a $10 billion Delaware-domiciled JV anchored by TPG and backed by 19 investors including Brookfield, Advent International, and Bain Capital. PE investors committed over $4 billion. OpenAI is contributing up to $1.5 billion, but retains strategic control.
The most revealing detail: OpenAI is guaranteeing investors a 17.5% annual return floor over five years. That is OpenAI paying for distribution. It converts a slice of its growth into a fixed-yield instrument that PE firms can underwrite like a credit fund. In return, the PE firms open their portfolio companies as a captive customer base.
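Back-of-the-envelope, the scale of that commitment is easy to sketch. Here is a minimal illustration, assuming the floor compounds annually on the reported committed capital (the actual deal terms, such as compounding method, fees, and waterfall, have not been disclosed):

```python
# Illustrative sketch: what a 17.5% annual return floor implies over five years.
# Assumption: the floor compounds annually on the full committed capital;
# the real terms of the DeployCo deal are not public.

committed = 4.0e9    # PE capital committed to DeployCo (reported: over $4B)
floor_rate = 0.175   # guaranteed annual return floor
years = 5

guaranteed_value = committed * (1 + floor_rate) ** years   # ~= $8.96B
shortfall_exposure = guaranteed_value - committed          # ~= $4.96B

print(f"Guaranteed value after {years} years: ${guaranteed_value / 1e9:.2f}B")
print(f"Maximum top-up exposure:             ${shortfall_exposure / 1e9:.2f}B")
```

Under those assumptions, OpenAI is underwriting roughly $5 billion of potential top-up on $4 billion of committed capital, which is why the floor reads less like an investment term and more like the price of distribution.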
Anthropic announced its own AI-native enterprise services firm, backed by approximately $1.5 billion in committed capital. Founding partners Blackstone, Hellman & Friedman, and Goldman Sachs are each anchoring at $300 million. The firm is a standalone entity with Anthropic engineering resources embedded directly within it, targeting mid-market companies.
OpenAI's structure is a volume play: bigger capital, more aggressive capitalization, a wider PE consortium, and a guaranteed return designed to lock in distribution at scale. Anthropic's is a credibility play: smaller, anchor-investor-led, trading on the prestige of Goldman and Blackstone, and targeting mid-market companies where Claude's enterprise lead is already the strongest.
Let’s start with the numbers and why this is happening now.
Enterprise AI investment tripled in a single year, from $11.5 billion to $37 billion, according to Menlo Ventures' 2025 report. U.S. enterprise AI spending is projected to grow to over $1.75 trillion by 2030, per Oxford Economics. Gartner projects AI-driven spending at $2 trillion in 2026.
Yet, Deloitte's 2026 survey found only 20% of organizations are seeing actual revenue growth from AI, despite 66% reporting productivity gains. And CIO.com's 2026 survey found that 40% of IT teams named lack of in-house talent as the top challenge. IDC estimates that skills shortages may cost the global economy up to $5.5 trillion by 2026.
The gap between spending velocity and value realization is the widest in modern enterprise technology. The bottleneck isn't the models; it's how those models are deployed and put to use inside the enterprise. These JVs are an admission of that reality, and their structure reveals how the AI go-to-market is fundamentally changing.
First, the buyer isn't the enterprise, it's the financial sponsor or PE owner. In a traditional enterprise sale, you are selling to a CTO or CIO who runs a procurement process, evaluates vendors, and picks the best fit. Here, that entire diligence process is bypassed as the PE firm mandates top-down adoption of the AI model across its portfolio.
The labs are selling model + engineers + implementation + ongoing updates as a single package. This is designed to make comparison shopping impossible. When an Anthropic team has spent six months rebuilding your clinical documentation workflow, you don't think of it as "using Claude," instead you think of it as "having an AI team."
OpenAI's 17.5% guarantee means the PE firms are monetizing their distribution leverage while taking some deployment risk off the table. The labs need distribution more than the PE firms need any particular model. If model quality were truly decisive, OpenAI wouldn't need to pay for distribution; the product would pull itself through.
For 20+ years in enterprise software, I watched customers spend six dollars on services for every dollar of software we sold. That ratio held for traditional enterprise software and carried over into SaaS. It built a trillion-dollar IT consulting industry, territory the AI labs are now explicitly claiming.
But there is a problem with this model in AI deployments. When you hired Accenture to deploy SAP, the next version of SAP was at least five years away, and SAP wasn't about to become a commodity. The software was the moat. In AI, the models leapfrog each other every two to three months. In fact, the models are converging toward commodity faster than any enterprise software category in history.
If you're a CEO or CTO at a PE-backed mid-market company, your owner just chose your AI vendor for you. You need to understand what you get in return for giving up model choice.
The model leaderboard shifts every two to three months. Anthropic held 12% of enterprise LLM market share in 2023; by the end of 2025, it held 40%, while OpenAI dropped from 50% to 27% and Google climbed from 7% to 21% in the same time frame. Open-source models are only about six months behind the frontier. This is the fastest market-share reversal in enterprise software history.
The JV structure doesn't protect you against model disruption. If a new entrant leapfrogs the incumbent, whether it's Groq on inference speed, Meta on open-source (could happen), Gemini because you switched to GCP, or a model lab that doesn't exist yet, you are entangled in an implementation built around the wrong future model.
OpenAI's guaranteed return makes this worse for them. If portfolio companies start resisting adoption because a better model exists, DeployCo's revenue underperforms, but OpenAI is still on the hook for the 17.5% floor. The cost of distribution goes up while the value of what's being distributed goes down.
And then there's Google, sitting on custom TPUs, frontier models (Gemini), cloud infrastructure (GCP, which grew a nosebleed 63% last quarter), deep enterprise relationships, and a consulting arm. Google doesn't need a PE consortium or a JV; it already has the model and the distribution channel. If Google decides to offer a similar forward-deployed model at a lower price point, cross-subsidized from search or cloud revenue, both JV distribution models look expensive overnight.
The critical distinction every enterprise leader should internalize: there is a difference between implementation lock-in and model lock-in. The labs are building the former and hoping you confuse it for inevitability.
Every forward-deployed engineer engagement starts with the same problem: the model needs to reach the enterprise's actual data, which sits in distributed ERP systems, data warehouses, log files, CRMs, and HRIS platforms. Access to that data needs to be governed, scoped, and secured, not just for the humans who touch it but also for the AI agents you will build. The labs are selling the model, the engineers, and the implementation, but they are not solving the data connectivity and context layer problem.
Here's what most enterprises miss: the context layer, which defines where your data sits, reflects how decisions get made in your company, and determines how your data is connected, structured, scoped, and made available to AI, is at least as strategic as the data itself.
Context is what tells the model which tables matter, what permissions apply, how your business logic works, and what the relationships between your systems actually are. Without context, a frontier model is just an expensive autocomplete engine guessing at your schema. With context, it becomes an operational tool that runs your business at warp speed.
If that context lives with the model vendor, baked into their custom code, their model harness, and their engineers' institutional knowledge, you are locked in. That context is not static; it continually evolves with changes in your schema, decision matrices, and business processes. Swapping models means ripping out not just the model but everything that connects it to your business.
If context lives with you, in a governed infrastructure layer that you own and control, you can swap models without ripping out your entire implementation. The model becomes a replaceable component. Your data and context layer becomes the durable asset.
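One way to keep the context layer on your side of the wall is to hide the model behind a thin adapter interface, so the governed context (schemas, permissions, business rules) lives in infrastructure you own and the vendor model is a swappable component. A minimal sketch of that pattern, where all class and field names are hypothetical and purely illustrative:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Context:
    """Enterprise-owned context: the durable asset that survives model swaps."""
    tables: dict[str, str]             # table name -> schema description
    permissions: dict[str, list[str]]  # role -> tables that role may see
    business_rules: list[str]          # plain-language business logic

    def scoped_prompt(self, role: str, question: str) -> str:
        """Assemble a prompt from only the tables this role is allowed to see."""
        allowed = self.permissions.get(role, [])
        schema = "\n".join(f"{t}: {self.tables[t]}" for t in allowed)
        rules = "\n".join(self.business_rules)
        return f"Schema:\n{schema}\nRules:\n{rules}\nQuestion: {question}"

class Model(Protocol):
    """Any vendor model that can complete a prompt is interchangeable."""
    def complete(self, prompt: str) -> str: ...

class StubModel:
    """Stand-in; swap in an OpenAI, Anthropic, or Gemini client here."""
    def complete(self, prompt: str) -> str:
        return f"[model saw {len(prompt)} chars of governed context]"

def answer(model: Model, ctx: Context, role: str, question: str) -> str:
    # The model only ever sees what the enterprise's context layer scopes for it.
    return model.complete(ctx.scoped_prompt(role, question))
```

In this arrangement, replacing the vendor means replacing `StubModel`; the `Context` object, and the governance it encodes, stays put.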
Gartner reports that 85% of AI projects fail due to poor data quality or lack of relevant data. The 2026 AI Infrastructure Report found that 98% of organizations cite skills shortages in data infrastructure as a major barrier to scaling AI, and 72% rely on third-party expertise. The data connectivity problem is the central issue. And it's the one that neither JV is designed to solve at the infrastructure level.
Don't get me wrong: I believe these JVs will deliver real AI value at scale and accelerate AI adoption industry-wide, because the talent is real and the speed of execution is real. But it is up to you to ensure that your AI deployments are architected to retain context as a strategic asset and preserve model optionality.
The model is the commodity. Your data and context layer is the asset. Every implementation decision should be evaluated through one lens: when this engagement ends, or when a better model arrives, how much do I have to rebuild?
Ask your AI vendor one question: when this engagement ends, what do I own? If the answer is unclear, if the context, the connectors, the workflow logic all live in the AI Labs' model harness, then you are renting, not building. You'll pay again when the next model cycle comes.
The labs are industrializing the forward-deployed engineer to accelerate AI model adoption, as they should. Your job is to make sure the most strategic layer stays on your side of the wall. Would love to hear your thoughts on how we can collaborate as an industry to build a data and context layer that balances out the risk and reward equation for you.

