Microsoft and OpenAI are no longer exclusive. Google is wiring $40 billion into Anthropic. China just vetoed Meta’s acquisition of Manus. Musk and Altman are in a courtroom deciding OpenAI’s soul.
This wasn’t a normal week in AI. This was the week the entire alliance structure of the industry collapsed simultaneously — and almost nobody is asking the right question.
Not “who wins?” but: who pays for the compute when nobody is exclusive anymore?
The End of the Captive Cloud
Let me be blunt about what the Microsoft-OpenAI divorce really means, because the press coverage is missing the point entirely. You’re reading headlines like “OpenAI gains cloud freedom” as if this is a liberation story. It is not. It is a desperate cash-flow maneuver dressed up as independence.
Here is the math nobody wants to say out loud:
OpenAI was paying Microsoft roughly 20% of its revenue under the exclusivity deal — that’s the famous revenue-share clause that both companies just renegotiated. What the April 27 announcement actually tells us is that Microsoft capped that share and made it independent of AGI milestones. Translation: OpenAI’s burn rate exceeded what the original deal could sustain, and Microsoft wanted off the uncapped liability train.
Think about what this means physically. OpenAI’s training workloads are deeply optimized for Azure’s infrastructure — custom InfiniBand topologies, specialized ND-series GPU clusters, Azure-specific networking. Undoing that to run on AWS Bedrock or GCP isn’t a flip of a switch. It is a multi-year, multi-billion-dollar re-architecture. The OpenAI engineering teams that have spent four years optimizing for Azure will now split their attention across three clouds. Infrastructure efficiency — already terrible in AI — gets worse.
The real story is simpler: OpenAI needs more compute than Microsoft is willing to subsidize, and the revenue-share cap is proof that the financial model for exclusive AI partnerships is broken.
Consider the physical footprint. A GPT-6 class training run plausibly consumes on the order of 100,000 H100-equivalent GPUs running for 60-90 days. At $2-3 per GPU-hour, that’s roughly $300-650 million per training run before a single inference token is served. Now factor in that OpenAI is training multiple generations simultaneously, running inference for hundreds of millions of users, and maintaining research compute for safety teams and fine-tuning pipelines. The total compute burn hits a rate that exhausts any single cloud partner’s willingness to subsidize.
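The back-of-envelope is worth making explicit. A minimal sketch, where every input (GPU count, run length, hourly rate) is an assumption from the scenario above, not measured data:

```python
# Back-of-envelope cost of one frontier training run.
# All inputs are illustrative assumptions, not reported figures.
HOURS_PER_DAY = 24

def run_cost(gpus: int, days: int, dollars_per_gpu_hour: float) -> float:
    """Cost of one run: GPUs x wall-clock hours x hourly rate."""
    return gpus * days * HOURS_PER_DAY * dollars_per_gpu_hour

# Low end: 100,000 GPUs, 60 days, $2/GPU-hour
low = run_cost(100_000, 60, 2.0)
# High end: 100,000 GPUs, 90 days, $3/GPU-hour
high = run_cost(100_000, 90, 3.0)

print(f"${low/1e6:.0f}M - ${high/1e6:.0f}M per run")
```

Note that this is training alone; inference for hundreds of millions of users runs continuously on top of it.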
Microsoft’s calculus is rational: why cap your own cloud’s most demanding customer when the revenue-share return no longer justifies the capacity reservation? The cap protects Microsoft’s balance sheet. But it also signals that even the richest company on earth thinks AI compute costs have crossed into unsupportable territory.
The $40 Billion Question
Then April 24 happened. Google committed up to $40 billion to Anthropic. Put that number in context:
- Amazon invested $5 billion in Anthropic days earlier.
- Anthropic is now valued at $350 billion.
- For context: AMD’s entire market cap is roughly $250 billion. Intel’s is around $90 billion.
Anthropic — a company that didn’t exist five years ago — is now worth more than AMD. Not because of revenue. According to public filings, Anthropic’s 2025 annualized run rate was around $3-4 billion. At a $350 billion valuation, that’s a 100x price-to-sales multiple in an environment where the Fed funds rate is still above 4%.
The infrastructure implication is the part nobody is modeling correctly. Google’s $40 billion isn’t a check Anthropic can spend on anything. Look at the structure: the investment comes in the form of TPU capacity and cloud credits on GCP. Google isn’t giving Anthropic cash — it’s giving them shovels to dig in Google’s own mine. Anthropic will use Google’s TPU 8i chips, run on Google’s data centers, and pay Google back for the privilege.
This is not investment. This is vendor lock-in with extra steps.
Amazon’s $5 billion investment in Anthropic carries the same logic — Trainium chips, AWS capacity, the whole stack. Anthropic now has two major cloud patrons who are direct competitors with each other. The company is simultaneously optimizing inference for AWS Trainium, training on Google TPUs, and… well, what happens when those optimization paths diverge?
This is exactly the same dynamic Microsoft and OpenAI just escaped from, except Anthropic is doubling down on it from both sides. The multi-cloud strategy sounds like independence. In practice, it means being dependent on two landlords instead of one.
The hardware angle deepens the problem. Google’s TPU 8i and Amazon’s Trainium 2 are purpose-built ASICs with completely different software stacks. Optimizing a model for TPU requires JAX expertise and TPU-specific compilation passes. Optimizing for Trainium requires AWS Neuron SDK integration. Doing both simultaneously means maintaining two separate optimization pipelines — doubling engineering overhead for infrastructure that already consumes 60-70% of operational costs at frontier AI labs.
This isn’t just inefficiency. It’s a structural tax on multi-cloud AI that nobody has priced into their financial models. The hyperscalers know this — which is precisely why they structure investments as hardware credits rather than cash. Every dollar of TPU credit Anthropic spends is a dollar that cannot be diverted to AWS. Every Trainium credit from Amazon is a dollar that cannot buy Google TPU time. The investment structure itself creates the lock-in that the headlines pretend doesn’t exist.
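The dual-pipeline overhead can be made concrete with a toy dispatcher. The backend names and compile functions below are hypothetical stand-ins, not real XLA or AWS Neuron SDK calls — the point is only that every additional accelerator family adds a whole compile/test/benchmark path to maintain:

```python
# Toy sketch of the per-backend compile paths a multi-cloud lab maintains.
# Function and artifact names are illustrative, not real SDK APIs.

def compile_for_tpu(model_graph: str) -> dict:
    # Stand-in for a JAX/XLA lowering pipeline with TPU-specific passes.
    return {"backend": "tpu", "artifact": f"xla::{model_graph}"}

def compile_for_trainium(model_graph: str) -> dict:
    # Stand-in for an AWS Neuron compilation pipeline.
    return {"backend": "trainium", "artifact": f"neuron::{model_graph}"}

BACKENDS = {
    "tpu": compile_for_tpu,
    "trainium": compile_for_trainium,
}

def build_artifacts(model_graph: str, targets: list) -> dict:
    # One model graph in, N backend-specific artifacts out — each of
    # which must be separately validated, benchmarked, and tuned.
    return {t: BACKENDS[t](model_graph) for t in targets}

artifacts = build_artifacts("frontier-model", ["tpu", "trainium"])
print(len(artifacts))  # one artifact per backend
```

The engineering surface grows linearly with backends, but the tuning effort behind each artifact — kernel optimization, sharding strategy, numerics debugging — is where the real doubling happens.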
Sovereignty Strikes Back — The Manus Precedent
The most underreported story of the week is April 27: China formally blocked Meta’s acquisition of Manus, the AI agent company founded by Chinese entrepreneurs that Meta bought for $2 billion in December 2025. Chinese regulators spent months investigating, restricted the co-founders from leaving the country, and finally told Meta to unwind the deal on national security grounds.
This is the first major sovereign veto of an AI cross-border acquisition, and it won’t be the last.
Manus is an interesting case because it’s not a model company — it’s an “agentic wrapper” that orchestrates Claude 3.7 Sonnet underneath. The technology itself isn’t cutting-edge AI research. But the capability — an agent that can autonomously browse the web, book travel, create applications, manipulate spreadsheets — is precisely what sovereign governments are now classifying as critical infrastructure.
Here’s what this means for infrastructure planning:
Every hyperscaler building data centers in Europe, Southeast Asia, and the Middle East is about to discover that “AI sovereignty” isn’t just a marketing term. It means physical requirements. Data localization. Model licensing restrictions. Hardware supply chain segmentation.
The Stargate project in the US — $500 billion over four years — assumes a globally fungible AI infrastructure where compute flows to wherever it’s cheapest. The Manus precedent says exactly the opposite: compute is about to become geopolitically balkanized. China won’t let American companies acquire Chinese AI talent. The US is blocking advanced chip exports to China. The EU is forcing Google to open Android to third-party AI assistants.
Every new data center built for AI needs to ask: which sovereign’s rules does this compute obey? Because the answer is no longer “all of them.”
Take the EU’s April 27 decision to force Google to open Android to third-party AI assistants. This sounds like a consumer-choice story. It is not. It is the first regulatory shot across the bow of AI platform lock-in. The EU is signaling that AI distribution channels — the app stores, the operating systems, the cloud marketplaces — will be treated as regulated infrastructure, not free markets. If Google’s Gemini must compete with third-party assistants on Android, then the cost of acquiring users for any AI service goes up, and the value of exclusive platform distribution goes down. That changes the entire unit economics of AI deployment.
The Trial That Decides the Checkbook
And then there’s the Musk v. Altman trial, starting April 27 in Northern California. I’m going to say something unfashionable: the legal arguments about OpenAI’s nonprofit mission are theater. The real question — the one that will determine the next decade of AI infrastructure — is who controls OpenAI’s compute budget.
If Musk wins and OpenAI is forced to remain a nonprofit or restructure, the funding mechanism for its compute collapses. OpenAI currently burns through cash at a rate that would make a small country nervous — estimates suggest $7-10 billion annually in compute costs alone. That cash comes from the for-profit arm that Musk is asking the court to dismantle.
No for-profit arm means no Microsoft investment means no Azure credits means OpenAI needs to find $10 billion a year in compute funding from… where? Venture capital? At $350 billion valuations? In this interest rate environment?
If Altman wins, the mission drift continues and OpenAI becomes a full commercial entity — which means the compute spending only accelerates. More GPUs, more data centers, more capital calls. The “AGI clause” that Microsoft just eliminated from their contract was the last guardrail on unlimited compute spending. With it gone, there is no mechanism to stop OpenAI from spending whatever it takes.
Either outcome means more infrastructure spending. The only question is who writes the check.
And there is a third, undiscussed outcome: the trial triggers an existential crisis that destroys OpenAI’s talent retention. Key researchers have already been jumping ship to Anthropic, Mistral, and independent labs throughout 2025 and early 2026. A messy public trial — with Musk deposing Altman on the stand, internal emails leaked to the press, and a judge parsing the definition of AGI in open court — accelerates that exodus. Talent is the one resource that no amount of compute credits can replace. If OpenAI’s research team fractures, the compute spend becomes irrelevant because there is nobody left to use it effectively.
This is the scenario the market is not pricing. OpenAI at $350 billion with its current team is one thing. OpenAI as a hollowed-out shell paying $10 billion a year for compute it cannot productively use is quite another.
Strategic Implication: The Fragmentation Tax
Let me connect the dots that nobody in the analyst community is connecting.
Seven days ago, AI infrastructure was organized around a simple model: exclusive alliances (Microsoft-OpenAI, Google-DeepMind-Anthropic, Amazon-Anthropic) with clear supply chains and single-cloud optimization.
Today, that model is dead.
OpenAI is multi-cloud. Anthropic is multi-cloud funded by two competing clouds. Sovereign borders are hardening around AI assets. The courts are rewriting corporate structures. The infrastructure assumptions that justified $500 billion in Stargate-level capex are now uncertain.
What replaces the old model? I see three scenarios:
Scenario 1: The Merchant Era (60% probability)
AI companies become cloud-agnostic software layers. Infrastructure becomes a commodity market where the marginal dollar of compute flows to the cheapest provider. This is great for efficiency but terrible for the hyperscalers who built their AI strategies around captive customers. Microsoft’s stock should be under more pressure than it is.
Scenario 2: The Balkanization Trap (25% probability)
Sovereign requirements and hardware specialization fragment the infrastructure market into regional silos. US models can’t run on Chinese chips. European models must use European data centers. Inference costs rise 40-60% due to redundancy requirements. This is the worst outcome for AI progress but the best for infrastructure builders who can operate in multiple regulatory regimes.
Scenario 3: The Winner Consolidates (15% probability)
One model — likely Anthropic given its dual-cloud funding and $350B valuation — achieves genuine breakaway performance and forces everyone else to run on its terms. Google and Amazon fall in line behind Claude. OpenAI becomes a footnote. This requires a level of technological dominance that I’m not convinced exists, but the investments suggest someone believes it does.
The Missing Piece: Hardware Specialization as a Trap
There is a deeper structural problem that all four of this week’s stories point to but never name: hardware specialization is becoming a trap, not a moat.
When Google invests $40 billion in Anthropic via TPU credits, it is making a bet that TPU 8i will remain competitive against NVIDIA’s next-generation Blackwell Ultra, AMD’s MI400, and whatever custom silicon Amazon, Microsoft, Meta, and Tesla are building. That is seven different hardware ecosystems all competing for the same training and inference workloads.
The problem is not that one will win. The problem is that the diversification itself destroys the optimization gains that justified the specialization in the first place.
Custom AI silicon makes economic sense only when you control the full software stack and have guaranteed utilization. Google’s TPU program works because Google controls the compiler (XLA), the framework (JAX), and the deployment (GCP). But if Anthropic is splitting its optimization across TPU and Trainium, neither platform achieves the utilization rates that justify the hardware investment. The fixed cost of chip design — Google reportedly spent over $2 billion on TPU 8i development — gets spread over lower effective volume.
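The amortization logic can be sketched directly. The $2 billion NRE figure is the article’s; the fleet size and service life below are illustrative assumptions:

```python
# Amortized chip-design cost per delivered chip-hour: fixed NRE spread
# over the chip-hours the fleet actually produces. Fleet size and
# lifetime are assumptions; the NRE figure is the article's.

def amortized_cost_per_hour(nre: float, fleet: int,
                            lifetime_hours: int, utilization: float) -> float:
    effective_hours = fleet * lifetime_hours * utilization
    return nre / effective_hours

NRE = 2_000_000_000          # reported TPU 8i development cost
FLEET = 500_000              # assumed chips deployed
LIFETIME = 4 * 365 * 24      # assumed 4-year service life, in hours

full = amortized_cost_per_hour(NRE, FLEET, LIFETIME, utilization=0.8)
split = amortized_cost_per_hour(NRE, FLEET, LIFETIME, utilization=0.4)

# Halving utilization (workloads split across TPU and Trainium)
# doubles the design cost carried by every remaining chip-hour.
print(round(split / full, 2))
```

The absolute numbers are guesses; the ratio is not. Whatever the NRE and fleet size, halved utilization means each productive chip-hour carries twice the design cost — which is exactly the dilution a dual-cloud customer imposes on its patrons.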
This is happening at the worst possible time. The end of Dennard scaling and Moore’s Law means that each new hardware generation delivers smaller performance gains at higher cost. When hardware specialization stops compounding, the only way to improve AI performance is to spend more — on more chips, more power, more data centers. The entire industry is gambling that customized silicon will unlock the next order-of-magnitude efficiency gain. This week’s events suggest the exact opposite: fragmentation will dilute those gains before they arrive.
The Personal Verdict
I’ve been watching AI infrastructure for three years, and I’ve never seen a week this consequential that produced so little honest analysis.
The narrative is “AI is booming, investments are flowing, partnerships are evolving.” The reality is a structural breakdown of the financial model that made AI infrastructure investable in the first place.
Exclusive partnerships were never just about technology — they were about risk allocation. Microsoft took Azure credit risk so OpenAI could focus on models. Google fronted TPU capacity so DeepMind and Anthropic could train without building their own clusters. These arrangements worked because they were exclusive — the hyperscaler knew its capacity investment would be utilized.
Now exclusivity is dead. OpenAI shops around. Anthropic plays Google against Amazon. And the infrastructure builders are left guessing: who will actually pay for the $500 billion in data centers we’re breaking ground on?
The honest answer, as of April 28, 2026, is: we don’t know.
And that’s the most terrifying sentence in AI infrastructure today.
— A Maverick Analyst Perspective
Data sources referenced: Bloomberg (Google-Anthropic $40B), Ars Technica (Microsoft-OpenAI amended agreement, China-Manus blockade, Musk-Altman trial), WSJ (Manus co-founder restrictions), OpenAI/Microsoft joint announcements April 27, 2026.