
CES 2026 - AMD's Lisa Su does the yottaflop math, while NVIDIA's Jensen Huang puts robotaxis on a collision course with Elon Musk

Stuart Lauchlan, January 7, 2026
Summary:
In an age when there's voracious demand for AI compute capacity, the chipmakers rule the roost...as we saw at CES this week.

AMD CEO Lisa Su

It was a case of ‘chips with everything’ as the new AI kingmakers took to center stage at the CES 2026 jamboree over the past 48 hours - where NVIDIA’s Jensen Huang found himself having to share the spotlight with his AMD counterpart, Lisa Su.

First up was Su, who declared that the world is set to enter the 'yottascale' era as demand for AI, and the power to fuel it, continues to accelerate. Su argued that we're going to need up to 10 yottaflops a year by the end of the decade.

OK, hands up out there - what’s a yottaflop in real money? No, me neither.

Actually it’s a one followed by 24 zeros, which Su pointed out is around 10,000 times the amount of global AI compute registered in 2022, which then stood at about one zettaflop (a one followed by 21 zeros). She told CES attendees:

Since the launch of ChatGPT a few years ago, we’ve gone from a million people using AI to now more than a billion active users. This is just an incredible ramp. It took the internet decades to reach that same milestone. Now, what we are projecting is even more amazing. We see the adoption of AI growing to over 5 billion active users as AI truly becomes indispensable to every part of our lives, just like the cell phone and the internet of today.

The foundation of AI is compute power, she reminded them:

With all of that user growth, we have seen a huge surge in demand in the global compute infrastructure, growing from about one Z-flop in 2022 to more than 100 Z-flops in 2025. Now that sounds big, that's actually 100 times in just a few years. But what you're going to hear tonight from everyone is we won't have, we don't have nearly enough compute for everything that we can possibly do. 

We have incredible innovation happening. Models are becoming much more capable. They're thinking and reasoning, they're making better decisions, and that goes even further when we extend that to agents overall. So to enable AI everywhere, we need to increase the world's compute capacity another hundred times over the next few years to more than 10 yottaflops over the next five years.
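For those keeping score, the scaling Su describes is straightforward powers-of-ten arithmetic. A minimal sketch, using the round figures from the keynote rather than precise measurements:

```python
# Round figures from Lisa Su's CES 2026 keynote (orders of magnitude,
# not precise measurements).
ZETTA = 10**21  # one zettaflop: a one followed by 21 zeros
YOTTA = 10**24  # one yottaflop: a one followed by 24 zeros

compute_2022 = 1 * ZETTA     # ~1 zettaflop of global AI compute in 2022
compute_2025 = 100 * ZETTA   # ~100 zettaflops in 2025
compute_2030 = 10 * YOTTA    # Su's projection: 10 yottaflops by decade's end

print(compute_2025 // compute_2022)  # 100  -> 100x growth, 2022 to 2025
print(compute_2030 // compute_2025)  # 100  -> another 100x still needed
print(compute_2030 // compute_2022)  # 10000 -> the ~10,000x figure Su cited
```

Which is to say: the 100x jump already seen since 2022 would have to happen again, on a much larger base, to reach the yottascale target.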

That means that the entire tech ecosystem needs to come together, she urged:

What we like to say is, the real challenge is how do we put AI infrastructure at yottascale? That requires more than just raw performance. It starts with leadership compute, CPUs, GPUs, networking coming together. It takes an open modular rack design that can evolve over product generations. It requires high-speed networking to connect thousands of accelerators into a single unified system. And it has to be really easy to deploy.

AMD had a good AI year in 2025, stealing some of the gilt from the NVIDIA gingerbread in the process and boosting its own share price by 76%, way ahead of NVIDIA’s growth rate of 30%. Little wonder then that Su said:

AI is the most important technology of the last 50 years, and I can say it's absolutely our number one priority at AMD.

Drive in my car

Over at NVIDIA, on the first of what will inevitably be thousands of public appearances on the world stage while the AI bubble holds, CEO Huang pitched:

Every 10 to 15 years, the computer industry resets - a new shift happens, and each time, the world of applications target a new platform. Except this time, there are two simultaneous platform shifts happening at the same time...The entire stack is being changed. Computing has been fundamentally re-shaped as a result of accelerated computing.

And in his soundbite moment, he declared:

The ChatGPT moment for physical AI is almost here — when machines begin to understand, reason and act in the real world.

Now, that might have a certain air of deja vu all over again about it, as it was only 12 months ago on this same stage that Huang introduced NVIDIA's Cosmos world platform and described robotics' "ChatGPT moment" as "around the corner."

We’ve still not turned that corner it seems, despite all the commentary about the speed at which the AI hype cycle is moving. But things take time, suggested Huang:

 The challenge is clear...The physical world is diverse and unpredictable.

The use case he pitched to back up his physical AI vision centered on robotaxis, with the launch of new AI tech that NVIDIA claims will help self-driving cars think like humans to navigate more complex situations.

The new offering, Alpamayo, is designed to help self-driving cars handle situations, such as unexpected roadworks or unpredictable driver behavior on the road, in real time, rather than just being trained to react based on previous patterns. Huang explained:

Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments and explain their driving decision. It’s the foundation for safe, scalable autonomy....Our vision is that someday, every single car, every single truck, will be autonomous.

To demonstrate this in action, he rolled a single-shot video of a Mercedes vehicle kitted out with Alpamayo driving in busy downtown San Francisco traffic. A human driver sat behind the wheel throughout the drive but did not stage any interventions. (Alpamayo is a Level 2 autonomous driving system — similar to Tesla's Full Self-Driving — which requires a human driver to remain attentive behind the wheel at all times.)

The first fleet of Alpamayo-powered robotaxis, built on the 2025 Mercedes-Benz CLA, is due to launch in the US in the first quarter, followed by Europe in the second quarter and Asia later in 2026.

My take

Of course, NVIDIA's taxi ambitions surely put it on a collision course with Elon Musk's Tesla empire? Maybe so, but from the cheap seats, the Presidential on/off BFF offered a blunt comment on X:

Well that's just exactly what Tesla is doing. What they will find is that it's easy to get to 99% and then super hard to solve the long tail of the distribution.

And he concluded:

 I’m not losing any sleep about this.

Oh, such larks from those tech bros, eh?

Onwards!

Image credit - CES feed
