Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: SoftBank plans to list a new AI and robotics company in the US…AI model’s goblin habit, explained…Putting Google’s AI to the test as a trip planner.
If Big Tech’s AI spending spree were a climb up Mount Everest, the companies would still be ascending toward the summit, getting dizzy from the altitude.
In their latest quarterly earnings, Alphabet, Amazon, Meta, and Microsoft put combined capital expenditures at more than $130 billion for the quarter, driven by buildouts of data centers and other infrastructure. That spending could surpass $700 billion this year, up sharply from about $410 billion last year. While only Alphabet has explicitly pointed to further increases beyond this year, all four companies signaled sustained high levels of investment as demand for AI infrastructure continues to grow.
The market reaction has been mixed. Shares of Meta fell sharply after its earnings report as investors focused on the scale of its AI spending plans, and Microsoft also slipped. By contrast, Alphabet and Amazon rose on strong cloud growth—highlighting a growing divide on Wall Street over whether this buildout is justified or getting ahead of itself.
There’s no doubt that AI companies—from the hyperscalers to startups like OpenAI and Anthropic—are hungry, if not starving, for more computing power. The scale of today’s AI systems, which require far more hardware, energy, and coordination than earlier generations of software, means that more is almost never enough. The result is a surge in spending unlike anything the industry has seen before: McKinsey research from last year projected that by 2030, AI capex will require $6.7 trillion worldwide to keep pace with the demand for compute power.
Spending big on physical infrastructure
It’s important to understand how much of that spending is going directly into the physical infrastructure that supports AI—both training frontier models and running them. But it can be hard to wrap your mind around the scale of this buildout.
It starts with chips—the specialized silicon semiconductors designed to perform the calculations used in AI. A single GPU from Nvidia, for example, can cost up to $40,000. But companies don’t buy them one at a time; they buy systems. An eight-GPU server can cost hundreds of thousands of dollars, and the clusters needed for hyperscale AI data centers—made up of thousands or even hundreds of thousands of GPUs—can run into the billions.
Then there are the data centers that house and power those systems. Pack tens or hundreds of thousands of GPUs into a cluster of buildings spread across hundreds or thousands of acres, and the result starts to look less like a traditional tech investment and more like a utility-scale project—consuming as much electricity as a small city. Last month, I looked closely at Meta’s $27 billion Hyperion data center project in northeast Louisiana, which some estimate will use millions of GPUs.
Another key piece is networking—the cables and switches that connect thousands of chips so they can work together. Training and running modern AI models requires constant, high-speed communication between machines, using specialized switches, fiber optic or ethernet connections, and network cards. Without that, even the most powerful chips can’t do much.
Not everyone agrees spending will keep climbing
Not everyone is convinced the spending will keep climbing. Some investors and analysts see it as a gamble, warning of a potential overbuild in which companies pour money into infrastructure that runs too far ahead of demand. There are still plenty of headlines predicting an AI “reckoning.” And as my colleague Shawn Tully has pointed out, the fast-depreciating nature of AI hardware means that there are even greater costs coming down the pike.
But this AI spending race is now in its third year and still shows no signs of slowing. In 2024, the combined capex of the four biggest hyperscalers was just over $200 billion. Two years later, it’s on track to approach $700 billion.
If this is a climb, there’s still no clear view of the summit.
With that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
FORTUNE ON AI
Microsoft, Meta, and Google just announced billions more in AI spending. Only Google convinced investors it’s paying off – by Amanda Gerut
Half of Google’s and Amazon’s ‘blowout AI profits’ came from a stake in Anthropic—not from their actual business – by Eva Roytburg
AWS CEO Matt Garman sees huge business opportunity for Amazon in AI-powered software: ‘Everything is going to be remade’ – by Alexei Oreskovic
China’s decision to block the $2 billion Meta-Manus deal shows how far Washington and Beijing are drifting apart over AI – by Nicholas Gordon
AI IN THE NEWS
SoftBank plans to list new AI and robotics company in the US. The Financial Times reported that SoftBank Group is preparing to spin out and take public a new AI and robotics company called “Roze,” targeting a valuation of up to $100 billion in what would be one of the largest AI IPOs to date. The venture is expected to focus on the physical buildout of AI infrastructure—using robotics to help construct data centers and bundling together SoftBank’s existing bets in energy, land, and digital infrastructure—as CEO Masayoshi Son doubles down on “physical AI” as the next frontier. The IPO could come as early as the second half of 2026, part of a broader effort to capitalize on surging investor demand for AI while also helping SoftBank manage its massive financial commitments, including tens of billions invested in OpenAI and other large-scale infrastructure projects.
AI model’s goblin habit, explained. After questions arose about the odd tendency of OpenAI models to reference goblins, gremlins, and similar creatures, the company put out a blog post today acknowledging the problem and saying that it wasn’t random but a side effect of how the models were trained. The behavior first appeared after the GPT-5.1 launch, when the reinforcement learning process used to create the model’s “Nerdy” personality mode—one of several distinct personalities OpenAI began offering users with the roll-out of that model—rewarded whimsical metaphors, including those specifically referencing the mythical creatures. Because of how that reinforcement learning process works, the linguistic tic seeped into other personality modes as well. Even after the Nerdy personality was removed, the habit persisted in later models like Codex because training had already baked it in. The episode is a small but telling example of how subtle reward signals can shape model behavior in unpredictable ways.
Putting Google’s AI to the test as a trip planner. I’m always interested in how AI is progressing in its ability to help with travel plans. In a New York Times column, author Brian X. Chen put Google’s Gemini to the test. He found that AI is getting meaningfully better at handling complex, multi-step tasks like trip planning—but still falls short of full autonomy. Gemini’s integration with Google services like Flights, Hotels, Gmail, and Maps allows it to act as a kind of “AI travel agent,” quickly generating itineraries, packing lists, and personalized recommendations that saved significant time and effort. But the system remains inconsistent: it made basic errors (like omitting essentials from packing lists) and struggled with real-time context, such as confusing locations across different legs of a trip. The takeaway remains: AI models are useful, but still require human oversight, particularly when context, timing, and accuracy really matter.
EYE ON AI NUMBERS
75%
That’s the share of tech leaders who agree that their operating models and processes need to change in the next 12 to 18 months in order to drive greater value from AI, according to Deloitte’s new 2026 Global Tech Leadership Study.
But in a sign that there is a widening gap between ambition and capability in scaling AI, the same survey found that 80% of tech leaders are confident in their organization’s ability to deploy and govern AI capabilities at scale. Confidence, Deloitte emphasized, appears to be surging ahead of readiness.
AI CALENDAR
June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.
June 17-20: VivaTech, Paris.
July 6-11: International Conference on Machine Learning (ICML), Seoul, South Korea.
July 7-10: AI for Good Summit, Geneva, Switzerland.