It’s a rule of thumb in today’s super-heated tech sector that analyst reports will rush in where robots fear to tread. Keen to capitalise on the current marketing and investment trend of re-defining robots as ‘physical AIs’ in the hope that AI hype and heat-seeking capital will cross the chasm into robotics, analysts are falling over themselves to make bold predictions for physical AI’s future.
The problems with analysts’ desire to capitalise on this trend are manifold. First, the last thing the robotics sector needs is hype-driven, impatient capital, spurred on by 'me-too' reports. Instead, what serious robotics needs is investors who are prepared to back the slow, iterative, safe, secure, and standards-driven development of systems that solve complex, real-world problems.
Re-heating robotics in the form of performative brand extensions simply attracts performative investors who will be on the phone by year end, demanding their promised billions in profit, and alarmed at the ever-escalating cost of what are, in reality, high-risk, long-term, speculative investments in this sector.
Bandwagon chasing can only be counter-productive, therefore, and it does little to solve the very real problems that robots may help us with in the future, such as the growing labour gap caused by the postwar baby boom and today’s ageing populations, falling birth rates, and disaffected, debt-burdened youth – many of whom may be put out of work by AI.
And second, analysts are forced to lump together a broad range of often unrelated and sometimes ageing systems under the heading ‘physical AI’, in the hope that this constitutes a sensible analysis of a single new market.
Filling the labour gap
That is the broad context against which I read a report published this week by analysts at Juniper Research, Physical AI in Manufacturing & Logistics 2026-2030. In some ways, it makes a better and more informed stab at the sector than similar documents last year (see diginomica, passim), but it still falls into the trap it sets for itself by making vague, sweeping statements about ‘physical AI’, based on shifting definitions.
Some of the analysis is cogent and reasonable: for example, it accurately describes the emerging labour gap caused by the postwar demographic timebomb. But Juniper Research then directly links that with falling workforce numbers in highly automated countries, such as Japan and South Korea, where manufacturing output (which Juniper measures by value added, rather than in absolute terms) has increased, according to its figures.
In Juniper’s estimation, automation is taking up the slack caused by human absence, therefore, and increasing productivity in the process. But is that a sufficient description of what is happening in the real world? It is just as likely that human jobs have simply been automated and workforces cut as a result. Look at the IT sector itself as an example: tens of thousands of jobs were shed by Big Tech firms and hyperscalers in recent months, as their own AI investments increased.
In the logistics space, big beast Amazon has a workforce of a million robots of various types and 1.5 million people, but with a reported strategy – according to an October 2025 story in the New York Times, sourced from leaked internal communications – of automating up to 75 percent of jobs in its Customer Fulfilment Centers (CFCs).
That has nothing to do with the demographic timebomb facing developed economies: it is entirely about using robots to slash human resources costs and increase profits and productivity. Since last October alone, Amazon has shed at least 30,000 jobs.
So, it seems likely that this broad approach to automation is what is happening in countries like South Korea and Japan, though the demographic timebomb certainly exists. Remember, since the launch of ChatGPT, report after report has told us that the core reason for investing in AI is users’ desire to slash costs and increase productivity, not to make smarter decisions or fill the gaps caused by demographic change.
So, Juniper’s suggestion that current automation is, largely, a response to falling worker numbers is flimsy, though my own interviews with robotics CEOs in recent years have suggested that robots pick up the jobs that humans no longer want. Either way: case not proven.
The report also claims that, with e-commerce volumes surging, companies will need to invest in ‘physical AI’ to keep customers satisfied. But another thing that is far from proven in the real world is that companies can emulate Amazon’s example in e-commerce logistics and fulfilment with a massive investment in robotic automation and ‘physical AI’.
Take the most obvious example: the long-running saga of Ocado and its on/off partners and clients in North America. As Stuart Lauchlan’s ongoing coverage of that story has shown (see diginomica, passim), it is not a model that guarantees success: you can’t simply automate your way to profit, productivity, and market dominance.
That is because AI-enabled, robotic CFCs demand a massive upfront investment, and profits are far from guaranteed for their users – and may take years to arrive (if they do). This is why several Ocado customers have scaled back their plans or abandoned CFC programmes entirely in favour of more local, human-delivered, lower-cost solutions.
Defining its terms?
But back to the report. So, how does the analyst firm define ‘physical AI’ – which, after all, is the subject of this high-end, high price-tag document? This is where things get tricky.
On the surface, its definition appears sound: “AI systems that can perceive, reason and act in a real-world environment”, which suggests it refers to new generations of intelligent robots that gather data about the physical world and act accordingly. The report says:
These AI systems use inputs such as images, video, text, speech, or sensor data, and convert it into actions that an autonomous machine can perform. Such inputs enable these advanced physical systems to gain an understanding of spatial relationships and the physical behaviour of their real-world environments.
Physical AI is enabling machines to autonomously interpret their environment and make decisions in real time; learning continuously via reinforcement learning, imitation learning, and through simulations.
These things are all happening, but the description begins to muddy the waters, as it suggests that physical AI is something other than the robot, when previous definitions have suggested that the robot is the physical AI: the means by which software becomes mobile and interacts with spaces, objects, and people.
So, at this point, Juniper seems to be suggesting that this is really just a report about software getting smarter via the influx of data from industrial and logistics applications, digital twins, simulation, and more. But it is calling it ‘physical AI’, to suggest it is part of some new paradigm for investors and buyers. The report continues:
Breakthroughs that have led to growing interest in physical AI include Vision-Language-Action (VLA) models, high-fidelity simulations, deep reinforcement and imitation learning, advanced sensing, modern actuators, and cloud-to-edge compute for real-time autonomy.
Again, that is fair enough as a description of the training of intelligent robots, though it omits the core technology of physical AI systems: a World Foundation Model, around which those other models and components typically revolve (see my robotics reports earlier this year).
But then Juniper’s report muddies the water still further, by saying that “when referring to systems that can use physical AI in manufacturing and logistics” they include:
• Autonomous Mobile Robots (AMRs), which use sensors, AI, and simultaneous localization and mapping (SLAM) to plan routes, avoid obstacles, and adapt to changes in their environment in real time.
• Automated Guided Vehicles (AGVs), which follow fixed paths or tracks for material transportation, using guidance systems including magnetic tape, markers, or laser guidance.
• Humanoid robots and collaborative robots (cobots), operating alongside human workers.
• Robotic Picking, Sorting, and Packaging systems – typically robotic arms that identify, grab, sort, and package items at speed.
• Automated Inspection and Quality systems.
Humanoids aside, that is essentially a rundown of robots that have been operating in the logistics space for years: as we have seen, Amazon has a fleet of a million such systems already across all those robot categories (and is now experimenting with humanoids, such as Agility Robotics’ Digit, and with new robotic fleet management systems).
And some of the robots in Juniper’s analysis – AGVs, for example – are clearly not intelligent, autonomous physical AIs learning about the real world. They are merely automated systems following fixed paths on warehouse floors. So why are they included in the list?
So, the report has begun falling into the same trap as other bandwagon-jumping analyses in this sector, which have lumped together long-established, ageing, and sometimes unrelated systems to suggest that a single new market has appeared out of nowhere.
No-one doubts that industrial and logistics systems are getting smarter and more autonomous, of course, drawing in data from a host of different sources. That much is clear, and truly general-purpose, generally intelligent humanoids may emerge from that process in the years ahead. My point is the lack of clarity in this and similar reports, which package up complex processes and diverse systems under the single heading of ‘physical AI’.
Headlines and hidden stories
On that point, the report’s headline finding is truly baffling: the company claims that deployments of “physical AI systems” in manufacturing and logistics will reach 400,000 by 2030, up from just 11,000 this year: an increase of over 3,500 percent.
Some statistics have ‘nonsense’ written through them like a stick of rock, and this is one of them. What exactly is Juniper Research referring to here? Precisely what – and where – are those 11,000 systems today? And by what measure will they increase by a staggering 3,500-plus percent, several times the predicted growth of the entire AI software sector over the same timescale?
Are the authors now referring to robots? If so, which ones?
Now contrast its figures with ones from the International Federation of Robotics (IFR), whose 2025 annual report found that a total of 4,664,000 industrial robots were in active service worldwide in 2024, up nine percent year on year. The organisation predicted sustained industrial robot growth of six to seven percent a year from 2025, hitting a forecast high of 708,000 new installations by 2028.
However, even that forecast was in doubt, because the IFR’s own research last year showed that industrial robot growth – i.e. robots used in the manufacturing sector – was falling in four of the world’s top five markets: Japan, the US, South Korea, and Germany. Much of the sector’s growth was, and is, driven by just one nation: China, where demand for industrial robots was up by just seven percent year on year.
Between them, those five countries represented over eighty percent of new robot purchases in 2024-25 and seventy-six percent of the world’s operational stock, so if growth is falling in four of them, that is significant. So, it was hard to see any reason for the IFR’s belief that the market would hit over 700,000 installations by 2028, especially when geopolitical tensions, trade barriers, tariffs, and other costs remain high. But time will tell, of course.
So, Juniper Research’s figures don’t seem to refer to robots as such. But then things get muddier still. The report goes on to say that “physical AI software spend” in 2026 will be $1.3 billion, rising to $8.9 billion in 2030. But that is an increase of roughly 585 percent, not 3,500-plus. So, if that is the software component of the market – however that market and that software are being defined [shrugs to camera] – then what is the segment that will increase by 3,500 percent? Presumably this now refers to hardware, but what hardware? And where are the 11,000 examples of it?
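For what it’s worth, the arithmetic behind those headline percentages is easy to check, taking the report’s own figures at face value (11,000 systems rising to 400,000, and software spend of $1.3 billion rising to $8.9 billion):

```python
def pct_increase(start, end):
    """Percentage increase from start to end."""
    return (end - start) / start * 100

# Deployment figures: 11,000 'physical AI systems' rising to 400,000 by 2030
print(round(pct_increase(11_000, 400_000)))  # ~3,536 percent - matches the '3,500-plus' claim

# Software-spend figures: $1.3 billion rising to $8.9 billion by 2030
print(round(pct_increase(1.3, 8.9)))  # ~585 percent
```

So the 3,500-plus percent figure can only describe the deployment count, not the software spend – which makes it all the more important to know what those 11,000 deployed systems actually are.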
In short, what are they talking about?
This is truly baffling. That aside, Molly Gatford, Senior Research Analyst at Juniper Research, explains the firm’s vision:
Multiple technological advancements are converging to accelerate physical AI adoption. Reduced latency from improved real-time processing is enabling more reliable real-world operation, while more advanced models allow systems to respond to a broader range of inputs, including tactile data; improving how physical AI interacts with its environment.
The report itself continues:
With key technical barriers now being addressed, vendors must move beyond development towards large-scale deployment of physical AI systems. To do this, vendors must partner with connectivity providers; as reliable, low-latency connectivity is essential to support real-time decision-making, which is vital for physical AI deployment.
Gatford adds:
Vendors must prioritise connectivity partners that offer edge-enabled connectivity architectures; allowing physical AI systems to process data locally and reduce latency constraints. This becomes essential with advanced physical AI models now requiring processing of data from multiple different sensor inputs.
My take
Well, that clears that up. Or not, as the case may be.
This is precisely what happens when you spin up a catch-all term like ‘physical AI’ to galvanize buyers and investors but haven’t decided what it actually refers to.
I honestly don’t know what this report is actually about. Is it about logistics AI? Logistics robots? A combination of the two – with digital twins, sensors, simulation, VLAs, and other technologies thrown in? If so, where are those 11,000 systems today and who makes them, and will there really be 400,000 of them in the future?
As the saying goes, “You pays your money and you makes your (own) choice”. (A four-figure sum, in the case of this report...)