We live in interesting times, to misquote a non-existent Chinese proverb (Hey, it’s the AI age, who cares about facts anymore?)
But for Taiwanese American Jensen Huang, things are especially interesting. Earlier this week, the NVIDIA President and CEO gave one of his marathon keynotes at the company’s GTC conference in San Jose. It was like techie Shakespeare delivered low-key, minus all the drama and beauty: a tiny grey man center-stage, dwarfed by his creations, rack upon rack of bleak hardware.
During the latest opus, he reeled off no fewer than 18 product launches among other announcements, spoke of trillion-dollar AI chip revenues in a two-year timescale, set out a vision of accelerating the rise of physical AI (robots), and hailed free, open, autonomous agent platform OpenClaw as the next ChatGPT. The project was acquired by OpenAI in February, though it will move to a foundation to protect its ethos. (If you can’t trust Sam Altman’s promise not to commercialise something, then what can you trust?)
And yet NVIDIA’s share price actually fell on the back of the keynote, finishing lower at the end of Huang’s speech than it had been at the beginning. Curse you, implacable storm of fate! Did this titan bestride the vast stage for naught? Granted, NVIDIA remains the world’s most valuable company by far – it is worth over $600 billion more than Apple at number two (a difference equal to the market cap of Exxon Mobil) – but this was still, well, interesting.
Why?
One explanation is that there are growing concerns at the colossal CapEx cost of America’s AI infrastructure, which dwarfs software income. Seventy-six percent of the planet’s projected $1.37 trillion AI hardware spend in 2026 will be in the US, an order of magnitude larger than America’s software revenues.
And that is not to mention the amount of energy that data centers are using. Earlier this week, research from BestBrokers.com found that ChatGPT alone uses enough energy in one year – 22.15 TWh – to power 2.1 million households over the same timescale, or all of France for seventeen days. With war spreading in the Middle East, such considerations are vital, while the conflict is also disrupting the semiconductor supply chain.
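Those figures hang together, as a quick back-of-envelope check shows. (A sketch only: every number below is taken from the figures quoted in this piece, and the per-household and French consumption values are what those figures *imply*, not independent measurements.)

```python
# Back-of-envelope check on the spend and energy figures quoted above.

TWH = 1e12  # watt-hours per terawatt-hour

# 76% of the projected $1.37 trillion global AI hardware spend in 2026:
us_spend = 0.76 * 1.37e12
print(f"US share: ${us_spend / 1e12:.2f} trillion")  # ~ $1.04 trillion

# 22.15 TWh spread across 2.1 million households implies, per household per year:
per_household_mwh = 22.15 * TWH / 2.1e6 / 1e6
print(f"~{per_household_mwh:.1f} MWh per household per year")  # ~ 10.5 MWh

# If 22.15 TWh powers France for 17 days, implied annual French demand:
france_annual_twh = 22.15 * 365 / 17
print(f"~{france_annual_twh:.0f} TWh per year for France")  # ~ 476 TWh
```

Both implied values are in the right ballpark – roughly 10.5 MWh a year is a plausible average household, and France’s annual electricity consumption is in the mid-400s of TWh – so the BestBrokers comparison is at least internally consistent.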
(In the US at this very moment, farmers are being persuaded to hand over their land for tens of millions of dollars by hyperscalers, so that family farms can become data farms. Food security? That’s so 20th Century.)
But is Wall Street starting to fear that tall stories may extend to Huang’s forecast of demand for AI chips? He predicts a $1 trillion order book over the next two years, far exceeding consensus analyst predictions of $835 billion by the end of 2028. With doubts remaining over AI’s enterprise ROI, analysts are right to be sceptical – and NVIDIA’s politely dipping share price expressed that well.
What?
But what if Huang is right? By spooky coincidence, that $1 trillion is roughly equivalent to America’s projected AI hardware spend this year. So, is he implying that all of it will be on NVIDIA chips? Or is Huang talking about the AI industry as a whole, rather than just his company’s order books? That provides some plausible deniability if anyone accuses him of overstating projections. But in an era in which, thanks to AI, nothing we read, see, or hear can ever be trusted again, who can say anything with confidence today?
So, what else did Huang announce, alongside that bold chip revenue forecast? Among other product stories, the company launched the NVIDIA Vera CPU, which is claimed to be the first processor purpose-built for agentic AI and reinforcement learning; plus, NVIDIA Dynamo 1.0 open-source software for generative and agentic inference at scale; and the NVIDIA Space-1 Vera Rubin Module, which is designed for orbital data centers (ODCs) of the kind that SpaceX recently claimed it will launch by the million. (Yes, Elon wants the twinkling firmament to have a Starlink logo on it.)
Meanwhile, NVIDIA Agent Toolkit provides open-source models and software for developers building tools that, the company claims, “scale productivity by autonomously determining how to complete assigned tasks”. (Shh, don’t mention the 2025 research from MIT’s NANDA autonomous agent division saying that 95% of projects produce no measurable value...)
It now includes NVIDIA OpenShell – an open-source runtime that enforces policy-based security, network and privacy guardrails that make autonomous agents, or ‘claws’, safer to deploy, said Huang. That’s an interesting branding choice – if you want people to embrace AI rather than be intimidated by it, call AI agents ‘claws’. (It was originally just an in-joke spin on ‘Claude’, with the joke now firmly on Anthropic, as OpenClaw has jumped into bed with OpenAI instead.) Then again, it’s the perfect description if you assume the gloves are off, and there are claws inside.
Also announced was the NVIDIA NemoClaw stack for the OpenClaw agent platform, which adds privacy and security controls to make self-evolving, autonomous AI agents more trustworthy, scalable, and accessible to the world. (Claws you can trust! Ker-ching!) Huang said:
OpenClaw opened the next frontier of AI to everyone and became the fastest-growing open-source project in history. Mac and Windows are the operating systems for the personal computer. OpenClaw is the operating system for personal AI. This is the moment the industry has been waiting for — the beginning of a new renaissance in software.
Peter Steinberger, self-effacing creator of OpenClaw, added:
OpenClaw brings people closer to AI and helps create a world where everyone has their own agents. With NVIDIA and the broader ecosystem, we’re building the claws and guardrails that let anyone create powerful, secure AI assistants.
Then Huang moved on to AI factories (integrated AI data centers for training and scaling models), plus real factories of the kind that will increasingly be staffed by robots. Huang announced the NVIDIA Physical AI Data Factory Blueprint, an open reference architecture that unifies and automates how robots’ training data can be generated, augmented and evaluated, reducing the costs, time and complexity of training physical AIs.
The blueprint enables developers to use NVIDIA Cosmos open World Foundation Models (WFMs) to transform limited training data into large, diverse datasets that would be expensive and impractical to capture in the real world, said the company. What this means is that capturing 3D data from the physical world is slow and laborious, which is why building robots’ world models and intelligence is, currently, a decades-long task. Innovators such as NVIDIA are exploring the ways in which the 100,000-year Robot Data Gap I described in an earlier diginomica report can be reduced. Huang said:
Physical AI has arrived. Every industrial company will become a robotics company. NVIDIA’s full-stack platform – spanning computing, open models and software frameworks – is the foundation for the robotics industry, uniting a worldwide ecosystem to build the intelligent machines that will power the next generation of factories, logistics, transportation and infrastructure.
In fact, NVIDIA is just one of many companies that are integral to the robotics hardware supply chain. In 2025, analysts at Morgan Stanley produced a Top 25 list of critical robotics companies in a humanoid sector investment note. Nearly all of them were chip makers and hardware component suppliers.
Analysts
So, what did analysts think about all this? The day after his keynote, Huang had the opportunity to find out in a call to Wall Street thinkers who were either at the event or had watched his keynote online.
But first, he rammed home the points he made onstage in San Jose:
There were three inflection points in recent AI. The first one was generative AI. The second was reasoning. And we're at the third inflection point now, and each one builds on the others. Here we are with the third inflection point, which is agentic systems that are able to operate autonomously. They have agency, and you can give them goals. And instead of just answering questions, they can now perform tasks.
Now, when you come to work, they give you a laptop and tokens. And the token budget is now a real thing: every engineer is going to have a token budget. And the idea that you would hire a $300,000 engineer and they spend no tokens in doing their job, you’ve got to ask the question, what are they doing? And so, it is very clear now that every engineer will have a lot of tokens that they would have to consume. And those tokens are going to be produced, they have to be manufactured. And so, a computer used to be just a tool, but a computer of the future is manufacturing equipment. They're producing something that is sold.
In fact, that business model is as old as the first Industrial Revolution, in which some British companies and workhouses paid workers in physical tokens, which could only be spent within the company. What Huang is essentially saying is that, in the future, everyone’s principal task will be providing revenue for AI companies that are already worth more than most nations’ GDPs. The agent - sorry! - of this change will be OpenClaw, he said:
OpenClaw, on first principles, is really the operating system of an AI computer, a personal AI computer. And it has all the properties of an operating system of this new computer. It manages resources, it schedules, does scheduling, it does I/O and it networks, all the properties of a fundamental computer.
At this point he shared a graph in which the upward slanting red line was growth – of the kind that, so far, has failed to emerge from the AI-enabled economy, except in the stock prices of firms like NVIDIA, Microsoft, Amazon, Meta, and Alphabet. Huang continued:
Every company in the world will now need to have OpenClaw strategy. Every single company. Just as we all had our Linux strategy, just as we all had an Internet strategy, just as you had a mobile cloud strategy. Now the question is, 'What's your OpenClaw strategy, okay?'.
(Ah yes, picture it now: in every boardroom on Earth, chino-clad men will be tapping the desk thoughtfully and saying, “So, Brad, how does this track with our claws roadmap?” And somewhere at the table will be a young person – hopefully a woman, though the AI industry is VERY male – thinking “WTAF?!” quietly to themselves while trying not to scream. And if you have any sense, you will put her in charge of your business, because critical thinking will be essential, mark my words.)
But I digress. So, why the bold predictions about $1 trillion in chip revenues? Huang said:
We have to take care of customers who come out of the blue because they're desperate for more compute. Doesn't that make sense? And so, when they're desperate for more compute all of a sudden, on the last day, they say, ‘Goodness gracious, I could use more!’ I would like to be able to say – and we are always in a position to say – we'd be more than happy to help you. We're also working on new customers, new markets, new regions that we haven't put in here yet because we still have, well, about 21 months to go, okay? So, I want you [analysts and investors] to understand what that $1 trillion is. By definition, it’s going to keep growing. By definition, because what I compared it against, it will keep growing and it will be larger than that.
Um… right. Because it’s a trillion, by definition (?) it will keep growing… because people will say – and I quote – “Goodness gracious, I could use more!” And people wonder why analysts are nervous? At this point, Benjamin Reitzes of Melius Research LLC said:
I think the main pushback we get is, is the juice worth the squeeze? And will the hyperscalers have upside to their revenues for API and cloud that justify all that spend? Because I have estimates for the hyperscalers, and for now, the CapEx is 20% above their cloud/API revenue. And I'm wondering what you're seeing.
Twenty percent? That’s doing well. As noted above, America’s hardware spend is an order of magnitude larger than the value of the entire AI software market, but it seems even the hyperscalers are struggling to make the sums add up. Huang said:
I wish those companies were public. And the reason for that is because then you'll see what I see. No company in history has ever grown as a start-up company and increased revenues by $1 billion or $2 billion a week. That's what they're experiencing right now.
He continued:
In the future, those agents will be integrated into the IT industry. This IT industry is $2 trillion of software licenses today. It's probably going to be – let me just pick a random number – $8 trillion that also resells an enormous amount of tokens. One hundred percent of the world's IT industry will become resellers of OpenAI and Anthropic. Are you guys following me? No?
He ploughed on:
Their software is going to be offered directly to enterprises, but it's also going to be integrated and become domain-specific and specialized, governed, secured, easily provisioned, connected to their system of records, and so on. There's going to be a whole agentic system that will be rented to customers, but they still would have to consume tokens through factories.
And so, if it comes down through OpenAI, terrific. And if it comes down through Anthropic, terrific. And if it comes down through open models, terrific. But they all have to have tokens generated. IT companies of the past licensed software, but IT companies of the future will rent tokens, and will generate tokens. Their business models will change. The companies will become bigger. Their gross margins will change. And so, this is exciting for them. Super exciting for them.
Then switching his attention to robots in response to another question, he said:
Let's say the physical AI inflection happens, then the industrial side has to be done on-prem. It has to be done at the edge. It has to be done on location. It has to be done in the factory. Then all of a sudden […] the world's industries that are related to physical AI will be much, much larger than the industries related to digital AI.
Something like $70 trillion of the world's industries, or $50 trillion, or $60 trillion will require physical AI because the world is happening not in our laptop, the world happens out where the world is. And so, there's a lot of atom-related businesses that simply can't be taken care of without physical AI. And the world is going to produce tokens every single day, continuously. It will not stop.
My take
Nothing to see here, just the CEO of the world’s most valuable company picking a random number for a room full of Wall Street analysts.
Make no mistake, Huang comes across as being all about brand, appearance, and perception trumping reality. What matters is that we believe, however credible or otherwise the claims might sound – which is par for the course in AI today, an industry built on credence and circular economics: titan selling product to titan while spouting futurist babble about the infinite largesse we will all receive if we just subscribe to the world’s stolen data.
Welcome to the future, folks: one of claws, relentless token manufacture (with no mention of the energy and carbon cost), and robot-run business. A future that generates so much wealth that a man who wears a $1.5 million watch while affecting not to has to pitch random trillions to a room full of Wall Street money men.
Feeling confident?
And the share price fell...