Adobe is holding its annual user event in Las Vegas this week, where it has revealed a big shift to agentic Artificial Intelligence (AI) with the launch of CX Enterprise and the addition of AI assistants to many of its creativity tools.
To support its agentic AI push, CEO Shantanu Narayen brought out his long-time friend and collaborator, NVIDIA CEO Jensen Huang, to join him on the keynote stage.
As Adobe pushes its AI-first product line, Huang outlined how the core interface for software, whether imaging tools, video tools, or SaaS systems, is now all about AI agents. He said:
We used to use tools by point and click and loading files and dragging down menus. I think my entire vocabulary of Photoshop is probably seven percent of its capabilities. Now I have an agentic system that I can put in front of it, and an agentic system that Adobe can create for me, that allows it to understand my intentions and fully realize the capabilities of Photoshop, fully utilize Premiere. The user interface of the future, all the front end of SaaS, is now agentic, it's intelligent, allows us to interact directly to the tool if we like, through an agent if we like, in collaboration if we like.
Huang urged delegates to engage with agentic AI as soon as they can. He noted:
I know that it's continuing to evolve, but I'm almost certain, dead certain, that your organizations are going to find it surprisingly amazing, relative to what's happened last year.
Hard worker
In the last several months, there has been a significant transition from generative AI, which can understand what we say and produce information back to us, to agentic AI that can actually perform work. Huang said:
It's now collaborating with me and collaborating with all of our engineers and supporting us in all these different ways. For the very first time, AI is producing work. That's another way of saying, AI is finally valuable. People love AIs that know it all, but in the final analysis, what you pay for is work done. Not the fact that there's an AI that knows it all. And so we're now getting real work done.
While the thought of AI agents across every piece of software might send shivers down the spine of workers concerned about the technology putting them out of a job, that's not how Huang views the situation. He cited a study into the impact of deep learning and computer vision. Shortly after the technology started being applied in the real world, researchers identified radiology as the first profession that would be wiped out by AI. Nobody would be able to work as a radiologist, the study warned, because every single CT and radiology scan would be studied by AI. He added:
Well, 10 years later, it is completely true. 100% of radiology is now assisted by AI. Computer vision is completely superhuman. And the interesting thing is, the number of radiologists, the demand went up.
This is because the purpose of a job has to be separated from its tasks. The tasks of a radiologist include studying scans, but the purpose of the job is to work with clinicians, doctors, and patients to help diagnose disease. Huang noted:
The fact that these radiologists can now study scans so fast, they order more scans from more modalities. As a result, they're able to onboard patients a lot more quickly. The number of patients in a hospital can go up. The hospitals make more money taking care of more patients. Radiologists, busier than ever, demand more money.
He sees the same thing happening to software engineers. At NVIDIA, all of its software engineers are now supported by agents, yet they're busier than ever because experimentation happens much more quickly and every idea can be expressed in code almost instantaneously. Huang said:
More software engineers are working with each other, coming up with new ideas, new problems that we never even think of solving before because we just didn't have the time to do before. The fact that we're now so productive, we can experiment and iterate so fast, we're going to be busier than ever.
Let’s get physical
Huang also fleshed out his vision around how the next innovation for AI will be on the physical side. He explained:
The vast majority of the world is physical, and we want to be able to apply computing really for the first time to some of the largest industries in the world, whether it's life sciences or logistics and manufacturing or transportation.
But in order to do this, the computer must be able to understand the physical world; otherwise, it's impossible to enhance or modify products. He explained:
We're finally at a time where, if you simply realize that if you can go from language to images, images to language, why can't we go from language to chunks of actions, which is articulation; why can't we go from camera input to action. Vision, language, action models, just like LLMs, these are VLAs. We now have the ability to go directly from one sensory input to direct actioning on the computing system or the physical system. But in order to do that, you have to understand the physical world.
NVIDIA's work on physical AI is dubbed Omniverse, which aims to make autonomous systems like cameras and self-driving cars much smarter. Huang explained the need for a high-fidelity, truthful representation of many of the things we do in life to support these plans:
The reason for that is because many of the people in this room are producing products and they're marketing those products, but those products are very specific. That product needs to be precise, the brand identity has to be precise, the design is precise. It's not an approximate representation of the product. And so many of the things that we do need to have a perfect digital plan. That starting point cannot be negotiated.
From there, generative AI can be used to express the product and put it in all kinds of different environments: if it's in a forest, an approximate forest is fine; if it's on a mountain, an approximate mountain is fine. He added:
But the product has to be precise. And so we need a digital representation of that grounded in its most precise representation, which is through graphics. And so the work that [NVIDIA and Adobe] do together, creating a structured, precise, digital representation of the artifact, it could be a car, it could be a compute model, it could be a person, whatever it is. But that starting point is really vital. From there, we can then integrate it with generative AI and express our creativity to that.
My take
It's always handy having friends in high places, and the decades-long bond between the two CEOs – both immigrants to the US, as Narayen noted – is a boon for Adobe as it makes its agentic AI bid, supported by NVIDIA tech. We'll see that in evidence later in the week in Las Vegas, during a demo of CX Enterprise Coworker, powered by the NVIDIA Agent Toolkit.
Adobe will continue to unveil and demo its agentic AI products through the week, and I'll be covering some of the capabilities, and customer reaction, in more depth. Sneak preview – I was sitting next to and in front of a group of designers at the Monday keynote. When the new Firefly AI assistant was demoed, they were wowed by it.