Robot Futures #2 – a safe robot is one that talks to other robots? Discuss!
Summary: If you want safe robots, then they need to talk to each other, explains a robotics innovator. What’s behind this counter-intuitive idea?
In a world informed by sci-fi dystopias, the idea that a safe robot is one that talks to other robots might be hard for many people to grasp. But aside from the challenges of creating a robot brain, a world model, visual intelligence, and systems for translating verbal commands into actions – all of which were explored in my previous report – there are two further foundational blocks in the race to build intelligent machines.
The first of those is safety, not just of humanoid robots, but also of every kind of cyber-physical system and autonomous device. And the second is the ability for robots to communicate with each other – not in the sinister way imagined by movies, but simply so that different robots can interoperate in shared physical spaces without colliding or causing accidents.
Dr Andrew Singletary is a key figure in both areas. As CEO of software company 3Laws – named in honour of Asimov’s robot stories – he is in the vanguard of safe autonomy for all cyber-physical systems, whether they are humanoids, industrial machines, service robots, autonomous vehicles, or aerial and seaborne platforms, such as drones and self-piloting boats. In each of these cases, safe autonomy is critical to limiting the machines’ ability to cause accidents or harm, he says.
But for different types of robots to work safely and effectively together, they also need a common way to communicate – to tell other robots where they are, and what they are trying to do. This week, I spoke to serial entrepreneur Singletary about both initiatives.
The safe robot
Despite its name, 3Laws’ technology covers four main areas of robot safety. These are:
- Collision avoidance, in which the internal 3Laws Supervisor adjusts motion commands to prevent accidents and damage in real time when the robot detects obstacles, including people. The module sits between the autonomy stack and the robot, explains the company, intercepting and modulating commands to ensure that only safe actions are passed down to the robot’s actuators.
- Geo-fencing, commonplace in drones, which restricts a robot’s movement to designated areas of a facility, such as a factory or warehouse, where it has been authorised to work. Control Barrier Functions ensure the robot stays within a pre-defined safe region without impacting productivity (it still needs to do the work!) – see the sketch after this list.
- Instability safeguarding, which monitors a robot’s balance to prevent it tipping over in unstable conditions – the risk of a humanoid robot falling on a human is identified in multiple assessments of market risk, such as the IEEE’s recent report on the need to develop humanoid standards.
- Fault management, which automatically detects and responds to system errors, minimising robot downtime and operational risk.
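To make the Supervisor idea concrete, here is a minimal sketch of a Control Barrier Function safety filter of the kind the list describes – a textbook, single-constraint example for a simple point robot inside a circular geofence, not 3Laws’ actual code, with illustrative parameters throughout:

```python
# A minimal Control Barrier Function (CBF) safety filter - an illustrative,
# textbook sketch, not 3Laws' Supervisor. It models a planar robot with
# single-integrator dynamics (x_dot = u) that must stay inside a circular
# geofence of radius r centred at c: h(x) = r^2 - ||x - c||^2 >= 0.

import numpy as np

def cbf_filter(x, u_des, c, r, alpha=1.0):
    """Return the command closest to u_des that satisfies the CBF
    condition  grad_h(x) . u >= -alpha * h(x)."""
    h = r**2 - np.dot(x - c, x - c)       # barrier value: >= 0 means inside the fence
    grad_h = -2.0 * (x - c)               # gradient of h with respect to x
    margin = grad_h @ u_des + alpha * h   # slack in the CBF condition for u_des
    if margin >= 0.0:
        return u_des                      # desired command is already safe: pass it through
    # Otherwise apply the smallest correction that restores the condition
    return u_des - (margin / (grad_h @ grad_h)) * grad_h

# Example: the autonomy stack commands the robot to drive out of the fence.
x = np.array([4.5, 0.0])                  # robot near the edge of a 5 m fence
u_des = np.array([1.0, 0.0])              # desired velocity, pointing outward
print(cbf_filter(x, u_des, c=np.zeros(2), r=5.0))  # command bent back to stay safe
```

The design point is that the filter is minimally invasive: if the autonomy stack’s command is already safe, it passes through untouched; only commands that would breach the barrier are corrected, and only by the smallest amount needed.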
The complex maths behind these systems was invented by the company’s co-founder and Chief Scientist Professor Aaron Ames, a control theory and bipedal robotics specialist at the California Institute of Technology (Caltech). The original context for his work was the need to develop provably safe adaptive cruise control for automobiles. Singletary and fellow Caltech student Thomas Gurriet, who is now 3Laws’ CTO, wrote their PhD theses under Ames’ supervision, expanding his groundwork to a variety of robotic systems.
Together, Ames, Singletary, and Gurriet founded 3Laws to bring the technology to a mass market – “to any system that moves”, says Singletary, who explains:
The Control Barrier Function was a new type of control technique where you can prove something about the system that has one of these in its controller. My PhD involved working with industry partners to understand what problems they were having with the safety of their systems, then using the new algorithm framework to address those problems.
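For readers who want the formal statement, the core result from the published CBF literature (paraphrased here) is compact. For a control-affine system $\dot{x} = f(x) + g(x)u$ with safe set $\mathcal{C} = \{x : h(x) \geq 0\}$, $h$ is a Control Barrier Function if there exists a class-$\mathcal{K}$ function $\alpha$ such that

$$\sup_{u}\left[L_f h(x) + L_g h(x)\,u\right] \geq -\alpha(h(x))$$

Any controller whose commands satisfy this inequality renders $\mathcal{C}$ forward invariant: a robot that starts in the safe set provably never leaves it. That is the kind of proof Singletary is referring to.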
What types of robots did his PhD address? He says:
When I talk about robotics, I talk very broadly in terms of any cyber-physical or semi-autonomous system. At the time, we were working with automobiles. And we were working with leisure boats, where humans were operating them, but we wanted to intervene in case the human was going to run into another boat or onto a sandbank.
And we were working with more traditional robots too – ones moving goods around logistics facilities, or manipulators, with pick-and-place being the biggest application that we see today. So, my PhD was furthering that research and understanding how to make a real-world impact.
He adds:
I had job offers from pretty much every company I'd worked with! But I realised that all these companies essentially wanted the same thing.

So, he and his partners decided to productise it and sell it to them.
Business realities
Thus, 3Laws was born in 2022, with Caltech owning some of the IP from the research, but with the founders granted an exclusive perpetual licence to it – plus some funding from the university. But Singletary found that running a software company was less about the research, and more about other essential areas:
It was how do you build a product? How do you go to market? And how do you certify your product? Because certification is something that is super-crucial to safety in robotics.
At present, some parts of the robotics industry – those that are most visible to the public, perhaps – have switched tack to pursue the development of general-purpose robots, rather than machines that are designed for specific purposes. To what extent might this impact safety?
Singletary is philosophical, saying:
Before the last couple years, robotics companies were picking specific tasks to build robots around. And in a lot of ways that made more sense, because to do one specific task, it's probably best to build a hardware platform that is very focused on that.
But there has been a lot of research on making robust, reliable, general-purpose robots. That does pose a particular challenge for safety, because foundational models are the backbone of intelligence in general-purpose robots, but safety has always been very robot specific, and very application specific.
There is no foundational model for safety. In fact, foundational models inherently need guardrails around them more than other types of autonomy layers.
So, 3Laws offers guardrails for every type of robot – for “any system that moves”? He replies:
3Laws offers a generalisable way to approach safety. If you look across all the platforms we've deployed on, it's the exact same software, but with different configurations, which I think is unique in the safety space.
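As a purely illustrative sketch of what “same software, different configurations” might look like in practice – the parameter names below are invented for this example, not 3Laws’ actual schema:

```python
# Invented example of per-platform safety configuration. The safety engine
# consuming these dictionaries would be identical; only the values change.

QUADRUPED = {
    "dynamics": "legged",             # which plant model the safety filter assumes
    "stop_distance_m": 0.8,           # worst-case stopping envelope
    "tip_over_monitor": True,         # instability safeguarding enabled
    "geofence": "warehouse_floor_2",  # authorised operating region
}

FORKLIFT_AMR = {
    "dynamics": "differential_drive",
    "stop_distance_m": 2.5,           # heavier platform, longer stopping distance
    "tip_over_monitor": False,        # statically stable, no balance monitoring
    "geofence": "loading_dock",
}
```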
So, perhaps we can say that 3Laws offers a general-purpose safety layer to match the general-purpose intelligence layers of a foundation model. If so, where does the technology fit among such robots’ World Models, their Vision-Language-Action Models, their Large Behavior Models, and other building blocks of general-purpose machines?
He explains:
We don't share our understanding of the world with the autonomy layers because safety poses a unique challenge. It's less about understanding the entirety of the world, like you do for autonomy in a general-purpose robot. It's more about reliably sensing just the safety-critical things, which a general-purpose autonomy layer might not care about. Or you disregard the things that the autonomy layer cares a lot about, but which aren't critical for safety.
He adds:
Perception is at the heart of what we do, because you can't be safe without understanding the world around you. But rather than being a foundation model for safety, what we do is safety-critical perception – a secondary pass at perception separate from the foundation model's understanding of the world; one that emphasises reliability and the detectability of objects.
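A toy illustration of that secondary pass, assuming a 2D lidar and invented parameters (this is not 3Laws’ implementation): where the autonomy stack’s perception classifies and tracks everything it sees, the safety pass asks one conservative question of the raw data.

```python
# Illustrative "secondary pass" safety perception: ignore semantics and
# treat any raw return inside the stopping envelope as an obstacle.

import numpy as np

def safety_scan(ranges, angles, stop_dist, inflate=0.3):
    """Conservative check over a raw 2D lidar scan.

    ranges / angles : raw lidar returns (metres / radians, robot frame)
    stop_dist       : worst-case stopping distance (metres)
    inflate         : margin added around every return (metres)
    """
    effective = np.asarray(ranges) - inflate            # inflate obstacles
    ahead = np.abs(np.asarray(angles)) < np.pi / 2      # returns in front of the robot
    return bool(np.any(effective[ahead] < stop_dist))   # anything in the envelope?

# A supervisor might gate commands on this check, e.g.:
# if safety_scan(scan.ranges, scan.angles, stop_dist=1.2): issue_safe_stop()
```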
Behold the standard robot!
So, in such a hyper-competitive world, how can robot manufacturers be persuaded to adopt the same standards, when they might be more interested in pushing their own proprietary systems? Singletary replies:
We are not attempting to write our own standard. 3Laws is about creating a product that allows your robots to conform to the standards that already exist today.
But there is a problem: the massive gaps in those standards, he explains:
For example, there is no standard [at the time of writing] for humanoid robots. And there is no standard for mobile manipulators either. But as part of 3Laws, I am an active contributor towards the creation of such standards.
For example, there are ISO working groups on the creation of an international standard for humanoids. It's called a Standard for Dynamically Stable Industrial Mobile Robots, which is just a way of saying humanoids, though it does apply to other dynamically stable robots that could be used in industrial contexts.
So, we're doing what we can to facilitate the emergence of standards that define what safety means, though in our case it is about a definition of safety that you can provably enforce on the robot itself.
Bear in mind that if accidents do occur involving robots in real-world applications, having a system of guardrails in place that proves the machine was, for example, geofenced or programmed for collision avoidance might be critical in establishing what happened.
Why robots will talk to each other
As noted above, for different types of robots to work safely and effectively together, they need a common way to communicate – to tell each other where they are, and what they are trying to do.
Boston, MA-based MassRobotics is the world’s largest independent robotics hub dedicated to R&D in the sector. Co-founder and President Daniel Theobald is the creator of the hub’s Interoperability Standard for Autonomous Mobile Robots (AMRs), which are self-navigating platforms for moving goods, materials, and pallets around warehouses and factories, especially those built for humans rather than machines – robots that lack dedicated, fixed infrastructures and so must navigate a complex physical world.
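To give a flavour of what AMR interoperability involves, here is an illustrative status broadcast in the spirit of the MassRobotics standard – the field names are approximations for this example, so consult the published specification for the exact schema:

```python
# Illustrative robot status report, loosely modelled on the MassRobotics
# AMR Interoperability Standard. Field names are approximations.

import json, time, uuid

status_report = {
    "uuid": str(uuid.uuid4()),           # stable identifier for this robot
    "timestamp": time.time(),            # when this snapshot was taken
    "operationalState": "navigating",    # e.g. idle / navigating / charging
    "location": {"x": 12.4, "y": 3.1, "angle": 1.57},  # pose in a shared facility frame
    "velocity": {"linear": 0.8, "angular": 0.0},       # m/s and rad/s
    "path": [                            # near-term intent: where it plans to be next
        {"x": 13.0, "y": 3.1},
        {"x": 14.0, "y": 3.1},
    ],
}

# Each vendor's robot publishes reports like this to a shared endpoint, so
# other fleets can reason about its position and intent without sharing code.
print(json.dumps(status_report, indent=2))
```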
As with his original research, which built on Professor Ames’ work at Caltech, Singletary is now working with Theobald to advance MassRobotics’ AMR interoperability groundwork into a standard for all types of inter-robot communication. He explains:
We're putting a group together to understand the common framework on which different types of robots can integrate. It’s an evolution of the work for mobile robots.
Interoperability is a massive problem in a world of competing technology platforms, as anyone with different phones, computers, and cloud accounts knows. Attempting to enforce interoperability among proprietary systems is often complicated and expensive: like trying to build the barn in a field of bolting horses.
However, in a future when robots of different types may be more commonplace in factories, warehouses, hospitals, and other workplaces, it will be necessary for them to communicate with each other on a functional level. Without that ability, robots might impede each other, or otherwise prevent the completion of each other’s work.
Singletary sets out the challenge:
The reality of a lot of logistics centres and other places where robots are used is that people want to bring many types of robots together. But there's no standard that exists for how those robots would communicate with each other.
Part of the problem is that while some smart or lights-out factories might be built around the needs of machines, most work environments are designed for humans to navigate. Singletary explains:
Imagine a medical facility that’s using mobile manipulators to run experiments – that's an application we're seeing more and more of. And let's say there's a robot that's blocking the path. There's typically only room for one robot in an aisle, because these were facilities designed around people.
So, how does one robot signal to the other, ‘I need to get to the other side’? And how does the other signal ‘Give me a second, I'm about to finish’? Or ‘This is going to take a while. Find an alternative route’. How do you communicate intent so robots can decide, and agree, what their next tasks should be?
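To make that concrete, here is a hypothetical sketch of such an exchange. No standard for this exists yet – that is precisely what the new working group is for – so every message name and field below is invented for illustration:

```python
# Hypothetical intent negotiation for the blocked-aisle scenario above.

from dataclasses import dataclass

@dataclass
class IntentMessage:
    sender: str    # robot identifier
    verb: str      # e.g. "REQUEST_PASSAGE", "WAIT", "REROUTE"
    detail: str    # context for logs or human operators
    eta_s: float   # seconds until the sender's current task completes

def answer_passage_request(my_eta_s: float, their_reroute_cost_s: float) -> IntentMessage:
    """The blocking robot decides: if it will finish before a detour would
    pay off for the requester, it asks it to wait; otherwise it tells it
    to find an alternative route. Thresholds are illustrative."""
    if my_eta_s <= their_reroute_cost_s:
        return IntentMessage("robot_B", "WAIT", "Give me a second, I'm about to finish", my_eta_s)
    return IntentMessage("robot_B", "REROUTE", "This is going to take a while", my_eta_s)

# robot_A requests passage; robot_B needs 45 s, but a detour costs robot_A only 30 s:
print(answer_passage_request(my_eta_s=45.0, their_reroute_cost_s=30.0).verb)  # -> REROUTE
```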
My take
That is the new work on which he, MassRobotics, and a global community of developers are now collaborating. Yet more evidence that robotics – or physical AI, as it is increasingly called – is at its best when development is principled, iterative, collaborative, standards-based, and committed to solving real problems. A refreshing change from broligarch hype.