Welcome to Eye on AI. In this edition…President Trump takes aim at state AI regulations with a new executive order…OpenAI unveils a new image generator to catch up with Google’s Nano Banana…Google DeepMind trains a more capable agent for virtual worlds…and an AI safety report card doesn’t provide much reassurance.
Hello. 2025 was supposed to be the year of AI agents. But as the year draws to a close, it is clear such prognostications from tech vendors were overly optimistic. Yes, some companies have started to use AI agents. But most are not yet doing so, especially not in company-wide deployments.
A McKinsey “State of AI” survey from last month found that a majority of businesses had yet to begin using AI agents, while 40% said they were experimenting. Fewer than a quarter said they had deployed AI agents at scale in at least one use case; and when the consulting firm asked whether respondents were using AI agents in specific functions, such as marketing and sales or human resources, the results were even worse. No more than 10% of survey respondents said they had AI agents “fully scaled” or were “in the process of scaling” in any of these areas. The function with the most scaled-agent usage was IT (where agents are often used to automatically resolve service tickets or install software for employees), and even here only 2% reported having agents “fully scaled,” with an additional 8% saying they were “scaling.”
A big part of the problem is that designing workflows for AI agents that will enable them to produce reliable results turns out to be difficult. Even the most capable of today’s AI models sit on a strange boundary—capable of doing certain tasks in a workflow as well as humans, but unable to do others. Complex tasks that involve gathering data from multiple sources and using software tools over many steps represent a particular challenge. The longer the workflow, the greater the risk that an error in an early step will compound, resulting in a failed outcome. Plus, top-tier AI models can be expensive to use at scale, especially if the workflow involves the agent having to do a lot of planning and reasoning.
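To see why length is the enemy, consider some back-of-the-envelope math (the numbers here are illustrative assumptions, not figures from any study). If an agent gets each individual step right 95% of the time, the chance of an error-free run decays exponentially with the number of steps:

```python
# Illustrative only: per-step reliability compounds across a workflow.
# A step that succeeds 95% of the time sounds dependable, but chain
# twenty such steps and the odds of a clean end-to-end run drop to ~36%.
per_step_success = 0.95  # assumed per-step accuracy, for illustration

for steps in (1, 5, 10, 20):
    end_to_end = per_step_success ** steps
    print(f"{steps:>2} steps -> {end_to_end:.0%} chance of an error-free run")
```

That exponential decay is why an agent that looks impressive on a single task can still fail most of the time on a twenty-step workflow.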
Many firms have sought to solve these problems by designing “multi-agent workflows,” in which multiple agents are spun up and each is assigned a single discrete step in the workflow; sometimes one agent is even used to check the work of another. This can improve performance, but it too can wind up being expensive—sometimes too expensive to make the workflow worth automating.
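To make that pattern concrete, here is a minimal sketch of a worker-plus-reviewer loop. The call_model function is a hypothetical, mocked stand-in for a real LLM API call (no particular vendor’s SDK is assumed); the point is the control flow and the extra token spend, not the stubs:

```python
# Sketch of the "one agent checks another" pattern, with mocked calls
# so the control flow runs end to end. call_model() is hypothetical:
# swap in a real provider SDK to use it in practice.

def call_model(role: str, prompt: str) -> tuple[str, int]:
    # Mock: the reviewer approves on sight, and every call "costs" 500 tokens.
    reply = "APPROVED" if role == "reviewer" else f"[{role} output for: {prompt[:40]}...]"
    return reply, 500

def run_with_reviewer(task: str, max_rounds: int = 3) -> tuple[str, int]:
    total_tokens = 0
    draft, used = call_model("worker", f"Complete this task: {task}")
    total_tokens += used
    for _ in range(max_rounds):
        verdict, used = call_model("reviewer", f"Check this work: {draft}")
        total_tokens += used
        if verdict.startswith("APPROVED"):
            break
        draft, used = call_model("worker", f"Revise using this feedback: {verdict}")
        total_tokens += used
    return draft, total_tokens

result, tokens = run_with_reviewer("summarize Q3 expense reports")
print(f"tokens spent: {tokens}")  # 1,000 even on the happy path: double a lone agent's cost
```

Even when the reviewer approves the first draft, every step costs two model calls instead of one, which is how checker agents can price a workflow out of automation.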
Are two AI agents always better than one?
Now a team at Google has conducted research that aims to give businesses a good rubric for deciding when it is better to use a single agent, as opposed to building a multi-agent workflow, and what type of multi-agent workflows might be best for a particular task.
The researchers conducted 180 controlled experiments using AI models from Google, OpenAI, and Anthropic. They tested the models against four different agentic AI benchmarks covering a diverse set of goals: retrieving information from multiple websites; planning in a Minecraft game environment; planning and tool use to accomplish common business tasks such as answering emails, scheduling meetings, and using project management software; and a finance agent benchmark. That finance test requires agents to retrieve information from SEC filings and perform basic analytics, such as comparing actual results to management’s forecasts from the prior quarter, figuring out how revenue derived from a specific product segment has changed over time, or figuring out how much cash a company might have free for M&A activity.
Over the past year, the conventional wisdom has been that multi-agent workflows produce more reliable results. (I’ve previously written about this view, which has been backed up by the experience of some companies, such as Prosus, here in Eye on AI.) But the Google researchers found that whether the conventional wisdom held depended heavily on exactly what the task was.
Single agents do better at sequential steps, worse at parallel ones
If the task was sequential, as was the case for many of the Minecraft benchmark tasks, then so long as a single AI agent could perform the task accurately at least 45% of the time (a pretty low bar, in my opinion), it was better to deploy just one agent. Using multiple agents, in any configuration, reduced overall performance by huge amounts, ranging between 39% and 70%. The reason, according to the researchers, is that if a company had a limited token budget for completing the entire task, the demands of multiple agents trying to figure out how to use different tools would quickly overwhelm that budget.
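The budget squeeze is easy to see with toy numbers (all of these figures are assumptions for illustration, not values from the paper). Each additional agent pays its own fixed overhead to load context and work out the available tools, so a budget that comfortably covers one agent can be exhausted before the real work is done:

```python
# Illustrative token-budget arithmetic; every number here is an assumption.
budget = 100_000               # tokens allotted for the whole task

single_agent = 60_000          # one agent plans once and keeps its context
per_agent_overhead = 35_000    # each sub-agent re-reads tool docs and re-plans
coordination = 15_000          # messages passed between the agents

multi_agent = 3 * per_agent_overhead + coordination  # 120,000 tokens
print(f"single agent: {single_agent:,} tokens (within budget: {single_agent <= budget})")
print(f"three agents: {multi_agent:,} tokens (within budget: {multi_agent <= budget})")
```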
But if a task involved steps that could be performed in parallel, as was true for many of the financial analysis tasks, then multi-agent systems conferred big advantages. What’s more, the researchers found that exactly how the agents are configured to work with one another makes a big difference, too. For the financial-analysis tasks, a centralized multi-agent system, in which a single coordinator agent directs and oversees the activity of multiple sub-agents and all communication flows to and from the coordinator, produced the best result. This system performed 80% better than a single agent. Meanwhile, an independent multi-agent system, in which there is no coordinator and each agent is simply assigned a narrow role to complete in parallel, was only 57% better than a single agent.
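Below is a minimal sketch of that centralized topology, with a mocked run_subagent standing in for LLM-backed workers (an assumption for illustration, not the researchers’ code). Subtasks fan out in parallel, and the coordinator is the only place the results converge:

```python
# Sketch of a centralized multi-agent system: one coordinator fans
# independent subtasks out to sub-agents and synthesizes the results.
# run_subagent() is a hypothetical stand-in for an LLM-backed worker.
from concurrent.futures import ThreadPoolExecutor

def run_subagent(subtask: str) -> str:
    # Mock worker; in practice this would be a scoped LLM call given
    # only the tools and context that its subtask needs.
    return f"findings for {subtask!r}"

def coordinator(goal: str, subtasks: list[str]) -> str:
    # Fan out: independent subtasks run in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_subagent, subtasks))
    # Fan in: sub-agents never talk to each other; only the coordinator
    # sees everything, so it alone assembles the final answer.
    return f"{goal}: " + "; ".join(results)

print(coordinator(
    "Compare actuals to guidance",
    ["pull latest 10-Q revenue", "pull prior-quarter guidance", "compute variance"],
))
```

The structural contrast with the independent configuration is the fan-in step: with no coordinator, each agent produces its slice of the answer and nothing synthesizes the whole.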
Research like this should help companies figure out the best ways to configure AI agents and enable the technology to finally begin to deliver on last year’s promises. For those selling AI agent technology, late is better than never. For the people working in the businesses using AI agents, we’ll have to see what impact these agents have on the labor market. That’s a story we’ll be watching closely as we head into 2026.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
FORTUNE ON AI
A grassroots NIMBY revolt is turning voters in Republican strongholds against the AI data-center boom —by Eva Roytburg
Accenture exec gets real on transformation: ‘The data and AI strategy is not a separate strategy, it is the business strategy’ —by Nick Lichtenberg
AWS CEO says replacing young employees with AI is ‘one of the dumbest ideas’—and bad for business: ‘At some point the whole thing explodes on itself’ —by Sasha Rogelberg
What happens to old AI chips? They’re still put to good use and don’t depreciate that fast, analyst says —by Jason Ma
AI IN THE NEWS
President Trump signs executive order to stop state-level AI regulation. President Trump signed an executive order giving the U.S. Attorney General broad power to challenge and potentially overturn state laws that regulate artificial intelligence, arguing they hinder U.S. “global AI dominance.” The order also allows federal agencies to withhold funding from states that keep such laws. Trump said he wanted to replace what he called a confusing patchwork of state rules with a single federal framework—but the order did not contain any new federal requirements for those building AI models. Tech companies welcomed the move, but the executive order drew bipartisan criticism and is expected to face legal challenges from states and consumer groups who argue that only Congress can pre-empt state laws. Read more here from the New York Times.
Oracle stock hammered on reports of data center delays, huge lease obligations. Oracle denied a Bloomberg report that it had delayed completion of data centers being built for OpenAI, saying all projects remain on track to meet contractual commitments despite labor and materials shortages. The report rattled investors already worried about Oracle’s debt-heavy push into AI infrastructure under its $300 billion OpenAI deal, who pummeled Oracle’s stock price. You can read more on Oracle’s denial from Reuters here. Oracle was also shaken by reports that it faces $248 billion in data center lease obligations, with rental payments commencing between now and 2028. That was covered by Bloomberg here.
OpenAI launches new image generation model. The company debuted a new image generation AI model that it says offers more fine-grained editing control and generates images four times faster than its previous image creators. The move is being widely viewed as an effort by OpenAI to show that it has not lost ground to competitors, in particular Google, whose Nano Banana Pro image generation model has been the talk of the internet since it launched in late November. You can read more from Fortune’s Sharon Goldman here.
OpenAI hires Shopify executive in push to make ChatGPT an ‘operating system.’ The AI company hired Glen Coates, who had been head of “core product” at Shopify, to be its new head of app platform, working under ChatGPT product head Nick Turley. “We’re going to find out what happens if you architect an OS ground-up with a genius at its core that use its apps just like you can,” Coates wrote in a LinkedIn post announcing the move.
EYE ON AI RESEARCH
A Google DeepMind agent that can make complex plans in a virtual world. The AI lab debuted an updated version of its SIMA agent, called SIMA 2, that can navigate complex, 3D digital worlds, including those from different video games. Unlike earlier systems that only followed simple commands, SIMA 2 can understand broader goals, hold short conversations, and figure out multi-step plans on its own. In tests, it performed far better than its predecessor and came close to human players on many tasks, even in games it had never seen before. Notably, SIMA 2 can also teach itself new skills by setting its own challenges and learning from trial and error. The paper shows progress towards AI that can act, adapt, and learn in environments rather than just analyze text or images. The approach, which is based on reinforcement learning—a technique where an agent learns by trial and error to accomplish a goal—should help power more capable virtual assistants and, eventually, real-world robots. You can read the paper here.
AI CALENDAR
Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.
Jan. 19-23: World Economic Forum, Davos, Switzerland.
Feb. 10-11: AI Action Summit, New Delhi, India.
BRAIN FOOD
Is it safe? A few weeks ago, the Future of Life Institute (FLI) released its latest AI Safety Index, a report that grades leading AI labs on a range of safety criteria. A clear gap has emerged between three of the leading AI labs and pretty much everyone else. OpenAI, Google, and Anthropic all received grades in the “C” range. Anthropic and OpenAI both scored a C+, with Anthropic narrowly beating OpenAI on its total safety score. Google DeepMind’s solid C was an improvement from the C- it scored when FLI last graded the field’s safety efforts back in July. But the rest of the pack is doing a pretty poor job. xAI, Meta, and DeepSeek all received Ds, while Alibaba, which makes the popular open-source AI model Qwen, got a D-. (DeepSeek’s grade was actually a step up from the F it received in the summer.)
Despite this somewhat dismal picture, FLI CEO Max Tegmark—ever an optimist—told me he actually sees some good news in the results. Not only did all the labs pull up their raw scores to at least some degree, but more AI companies agreed to submit data to FLI in order to be graded. Tegmark sees this as evidence that the AI Safety Index is starting to have its intended effect of creating “a race to the top” on AI safety. But Tegmark also allows that all three of the top-marked AI labs saw their scores for “current harms” from AI—such as the negative impacts their models can have on mental health—slip since they were assessed in the summer. And when it comes to potential “existential risks” to humanity, none of the labs gets a grade above D. Somehow that doesn’t cheer me.
FORTUNE AIQ: THE YEAR IN AI—AND WHAT’S AHEAD
Businesses took big steps forward on the AI journey in 2025, from hiring Chief AI Officers to experimenting with AI agents. The lessons learned, both good and bad, combined with the technology’s latest innovations will make 2026 another decisive year. Explore all of Fortune AIQ, and read the latest playbook below:
–The 3 trends that dominated companies’ AI rollouts in 2025.
–2025 was the year of agentic AI. How did we do?
–AI coding tools exploded in 2025. The first security exploits show what could go wrong.
–The big AI New Year’s resolution for businesses in 2026: ROI.
–Businesses face a confusing patchwork of AI policy and rules. Is clarity on the horizon?