Robot futures – why robophobia is a bigger problem than technologists realise
- Summary:
- As billions of dollars pour into the robotics sector on the back of the AI Spring, research suggests that many people’s fear of robots is being overlooked.
Why should robots look like human beings? It’s a good question as the humanoid hype machine kicks into gear, and billions of dollars of impatient capital flood into a sector that, for years, has been about slow, iterative development and safety.
The answer is that there are several reasons to build a human-like machine. Among them are: our world is designed for people, so sharing spaces with robots demands they can use them in the same way that we do; a humanoid form gets the robot’s hands to where they need to be; it’s a great engineering challenge; and it may inspire people to program them.
But another reason is that people may instinctively understand how to interact with machines that resemble us. They will know how to talk to them, hand them things, and give them instructions, simply because they look human – at least in theory.
But it is not as straightforward as that. For one thing, our intuitive response to humanoid robots is – currently – an obstacle to finding viable applications for them in our complex, unpredictable, and emotional worlds. This is because our natural response to encountering a humanoid is informed by a century of exposure to its fictional counterparts. So, when the robot fails to respond as intelligently as we expect, we are confused and disappointed by the experience.
Put simply: we want more from these robots than most can currently deliver. This – combined with their expense – is one of many reasons for the poor uptake of the handful of high-end humanoids that have been commercially available this century. But in isolation, expense is less of a barrier to robots’ progress than the lack of a clear and viable purpose.
In a 2025 research note on humanoid robotics, multinational bank Morgan Stanley noted that social acceptance will be a key factor in humanoids’ adoption:
Robots with human-like proportions often feel more approachable and easier for people to interact with, which can help reduce friction during workplace adoption. Over time, these social dynamics may ease deployment and increase the viability of humanoids in customer-facing or collaborative roles.
But is that true?
Robophobia: fear of robots
While many people are fascinated or entertained by humanoids, others are intimidated or scared by them, and the machines’ negative or sinister depictions across a century of speculative fiction have done little to help.
While countless surveys have shown that people worry about robots and AIs stealing their jobs, the fear of robots themselves – robophobia – is less frequently measured. But it is not unknown: in 2024, United Robotics surveyed 8,000 people in the US, Canada, France, Italy, and Germany. It found that while 80% of the public are nervous about robots taking their jobs, 60% don’t want robots to resemble people at all, including having arms, legs, heads, and faces.
This tells us something important: if a human-like form is intrinsic to the design of a useful, generally intelligent robot, as many technologists believe, then people’s fear of humanoid machines needs urgent consideration. Failure to do so may damage robots’ commercial potential at the time they are most needed.
In March 2026, another significant player in the market, Hexagon Robotics, surveyed attitudes to robots among 18,000 adults and children in nine countries: China, India, Brazil, South Korea, Japan, the US, the UK, Germany, and Switzerland, making it the most extensive such study carried out to date.
Hexagon’s Robot Generation report found that Britons are most worried about robots, with over half of respondents (52%) expressing their fear of the machines, versus 45% in the US, 44% in China, and just 29% in South Korea, which remains the world’s most highly automated nation. The risk of a robot being hacked and compromised was the most frequently expressed fear among all nations – rather than concerns about lost jobs and human replacement.
By contrast, 81% of Chinese respondents expressed their excitement about a robot future – despite over half of that number also being fearful – versus 54% in Japan, 49% in the US, and 47% in the UK. Switzerland was the least excited about the robotics revolution, with just 25% of the populace expressing their enthusiasm.
So, why the discrepancy between the fearful UK and the optimistic China? One reason may be Britain’s endless tabloid scare stories about robots, but another is clear from the data itself: 75% of Chinese citizens reported having seen or used a robot in real life, versus just 30% in the UK, so robophobia is partly rooted in the unknown.
Despite its long industrial heritage, Britain’s factories have largely failed to modernise and adopt new technologies, with a robot density that is far lower than the OECD average and behind most other developed economies. So, the fact that robots represent the unknown to most Brits is hardly a surprise.
Hexagon Robotics – maker of the AEON industrial humanoid, which is being trialled in BMW’s factories – says:
Our global study finds robot anxiety is context dependent. It’s highest where robots are least visible and falls when people can see robots working safely alongside humans.
China and South Korea have some of the highest robot densities in the world, placing them further along the adoption curve. In these markets, greater exposure translates into noticeably lower levels of worry and much greater excitement. This pattern points to a clear relationship: as robot density rises, so does familiarity and with it, more positive sentiment. Exposure and attitude reinforce and accelerate each other.
In China, 66% of respondents said they would be comfortable interacting with a robot in a factory or industrial setting, versus just 53% of British workers, who are more worried about their jobs than their counterparts in China, where economic growth has been four times higher in recent years.
So, what other lessons did the Robot Generation study offer? Visibility, familiarity, and a clear purpose for robots all help build trust and acceptance, suggests Hexagon. Acceptance is highest in industrial settings and lowest in personal spaces, while governance, accountability, and control will shape public confidence in the long term. It explains:
Anxiety falls when people can see robots working safely alongside humans, doing clearly defined jobs, with strong safeguards around data and decision-making.
That sounds like common sense, and it tells us that those vendors that are backing a message of ‘robots in every home, doing every job’ – with an anti-regulation drive thrown in for good measure – are pursuing exactly the wrong strategy to make people trust and accept them.
Burkhard Boeckem is Chief Technology Officer at Hexagon. He says:
People are not having a single abstract debate about robotics. They are making practical judgments about where robots, in all their form factors, belong, what they should do, and how securely they are governed. Anxiety grows where robots feel invisible, poorly understood, or out of human control.
He adds:
Trust is built through experience and clear boundaries. When people understand what robots are for, and what they are not, confidence follows.
Commenting on the findings, Michael Szollosy, Research Fellow at Sheffield Robotics and in the Department of Computer Science at Sheffield University, says:
When people actually meet a robot, especially a small, friendly one, the fear often disappears. You can almost hear them think, ‘Oh, that’s not going to take over the world.’ If scientists and engineers want people to come with them on this journey, they have a responsibility to explain why these technologies exist and what they're actually for. If you don't take people with you, the counter-narrative sticks and once that happens, it's very hard to undo.
Tesla CEO Elon Musk recently suggested that the Optimus robot represented an “infinite money glitch” for his company and would – by some unspecified means – eradicate poverty and make every job into an optional pastime. Yet Hexagon’s findings suggest that a robot army of faceless, six-foot humanoids that are, quite consciously, positioned to replace every human job is unlikely to usher in universal acceptance and the billions of sales that Musk forecasts.
Tesla is also pitching Optimus at the domestic market, which is high risk and completely unproven – rather than at industry, where the likes of Boston Dynamics, Hexagon Robotics, Apptronik, Agility Robotics, Sanctuary AI, and others, are already making inroads.
Will better communication help?
That aside, the more we can instruct and sometimes converse with robots using natural language, the easier it may be to integrate them with the human world and begin to trust them – as we have done with our digital assistants and smart devices. If linked to a robot’s world model and general intelligence, a natural language ability would bring the century-old vision of a general-purpose robot closer to reality.
Also, being able to tell a humanoid ‘cobot’, “Go to Bay 1, pick up those boxes, and stack them in the big truck”, or to instruct a robot in a disaster zone, “Go into that building, look for survivors, and get them to follow you”, would allow human operators to set aside their bulky laptops, joysticks, headsets, and haptic interfaces.
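To make the idea concrete, here is a minimal, purely illustrative sketch of what sits behind such an interface: a spoken command decomposed into a structured action plan that a robot controller could execute. Every name here (`Action`, `parse_command`) is hypothetical, and the keyword matching is a toy stand-in for the language model and world model a real system would use.

```python
# Illustrative only: a natural-language command broken into structured steps.
# A production system would use an LLM grounded in the robot's world model,
# not keyword matching. All names here are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Action:
    verb: str    # e.g. "goto", "pick", "stack"
    target: str  # the object or location the verb applies to

def parse_command(command: str) -> list[Action]:
    """Naive clause-by-clause parser standing in for a language model."""
    plan = []
    for clause in command.lower().split(","):
        clause = clause.strip()
        if clause.startswith("go to"):
            plan.append(Action("goto", clause.removeprefix("go to").strip()))
        elif clause.startswith("pick up"):
            plan.append(Action("pick", clause.removeprefix("pick up").strip()))
        elif clause.startswith("stack them in"):
            plan.append(Action("stack", clause.removeprefix("stack them in").strip()))
    return plan

plan = parse_command("Go to Bay 1, pick up those boxes, stack them in the big truck")
for step in plan:
    print(step.verb, "->", step.target)
```

The point of the sketch is the shape of the pipeline – free-form language in, verifiable actions out – which is exactly where a robot’s world model has to do the heavy lifting that keyword matching cannot.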
That said, a natural language interface would be little more than an entertaining distraction if the robot fails to provide any useful function.
My take
Yet despite all this, many people still find the experience of interacting with humanoids sinister, creepy, or intimidating, which leads us into another area of debate.
Should robot designs accentuate their machine natures – like Boston Dynamics’ Atlas, Figure AI’s robots (currently at version 3), Optimus Gen 2 and its successors, Unitree Robotics’ G1, Agility Robotics’ box-carrying Digit, Apptronik’s Apollo, NEURA Robotics’ 4NE-1, 1X’s NEO, or Sanctuary AI’s Phoenix? Or should they be made to look more like artificial humans, like Engineered Arts’ Ameca or the lifelike Sophia and Erica androids?
The United Robotics research suggests that the more human our machine counterparts appear, the less people will accept them. This may be a challenge in the years ahead as our need for robot workers grows.
Meanwhile, the Hexagon Robotics research surfaced another useful finding: children and young adults are far more accepting of robots than their elders. As populations age and birth rates fall, this may present a further problem for technologists who are seeking to fill the labour gap: we are designing solutions for the elderly, in some cases, but it is their grandchildren who are more likely to embrace them.