
Lessons in professional AI - how to adopt it safely in manufacturing, legal services, and architecture

Chris Middleton, September 15, 2025
Summary:
Listen to some government ministers and you would think that simply buying AI leads to instant growth and productivity. We hear from three very different sectors about the challenges and opportunities they face.


Like the US and China, the UK – world number three in backing the technology – wants growth and a productivity boost from adopting Artificial Intelligence (AI). But is it that simple?

As my previous reports this month show, there are real challenges for business leaders when it comes to vendor trust and ethics; trust in AI's outputs and accuracy; employee demands for greater understanding of the technology (rather than lectures about how it will make them more efficient); and the poor results of many enterprise AI projects.

So, how to square all that technology risk and complexity with the apparent simplicity of government policy: namely, add AI and trigger instant growth and productivity?

One big-picture challenge is that UK Prime Minister Keir Starmer's government has a PR problem when it comes to AI. His administration is seen as being too close to US vendors, such as OpenAI – with whom it drew up a strategic agreement last month.

That perception is not helped when, in the same month, the Prime Minister appointed Jade Leung, Chief Technology Officer of the AI Security Institute, as his new AI advisor. Until 2023, Leung was Governance Lead at OpenAI, a company whose ethics and transparency have been called into question several times this decade.

Plus, No.10 angered Britain's media and creative sectors by expressing a strong preference for tearing up copyright rules for AI training, even before launching its public consultation on the matter.

And unfortunately for the Prime Minister, his problems don't end there. The public is eight to one in favour of AI giants paying for data, rather than being allowed to scrape it for free. Meanwhile, Ipsos UK research this month finds that British employees are more sceptical about AI than their overseas counterparts.

Put all this together, and the UK Government's near-evangelical belief that AI will turn around the economy with ease rings a little hollow.

So, what to do about it? How can AI be adopted safely into both the public and private sectors without losing the support of the workforce? These were among the questions addressed at a recent Westminster Employment Forum conference on AI policy.

Manufacturing leads the way with worker engagement

Nina Gryf is Senior Policy Manager at trade association Make UK, which represents 20,000 manufacturers, from the biggest automotive companies to smaller players in sectors such as beauty, food, and drink.

As Gryf describes it, some of her members are at the cutting edge of hard AI, rather than using it for trivial purposes, as other organizations do when they play with shadow-IT deployments of generative tools and chatbots. She explains:

Manufacturing has been using AI for a long time, but adoption has accelerated in the last decade. We are evolving from basic automation to predictive maintenance, quality control, and process optimization.

But she adds:

Here in the UK, we have great research institutions, great innovation, and cutting-edge technologies. However, we lag behind our international competitors in tech adoption generally, and in AI adoption as well.

Make UK's 2024 report on AI in smart factories finds that only 36% of its members are using AI at the heart of the business, mainly in maintenance, quality control, process optimization, and back-office tasks. So, why are others trailing behind their peers, with the UK being one of the lowest adopters of Industry 4.0 systems in the developed world?

One of the key drivers of success is culture, says Gryf:

Apart from the wider implications of industrial policy, like access to finance or skills, we see that culture and leadership play a major role in how these technologies are invested in [by the business].

For example, we have manufacturers that are small businesses, but very agile and engaged with their workers. One of these purchased some robots recently, but invited the workers – the welders – to be part of that purchasing and decision-making, to allay their fear of losing their jobs. Now those workers are being reskilled and upskilled and are managing the robots, and everyone is happy with the results.

Great news. So, what are the lessons? She explains:

Ethical adoption requires clear governance, human oversight, and transparency, especially when the investment is starting. It's about building trust with employees.

That's excellent advice: see AI and robotics adoption as a bottom-up, discursive, community-building process, rather than a top-down one imposed on workers, many of whom may be sceptical or fearful of the future.

Legal and creative sectors face unique AI challenges

Another sector that might be transformed by AI – and partially automated, perhaps – is legal services. However, there are significant challenges. For example, since the dawn of ChatGPT, numerous stories have emerged – over 150 to date – of US lawyers presenting AI hallucinations in court in place of genuine precedent. And, as I reported earlier this month, 2025 research from Stanford University finds that even dedicated legal AI tools – those trained on trusted, industry-specific, fact-checked data sets – hallucinate to an unacceptable degree.

Kent Reynolds is Senior Dispute Resolution Solicitor at legal practice Jonathan Lea Network Solicitors. He acknowledges the challenges, adding that recognizing them is essential for success, especially when clients may be sceptical of professional adoption of AI on their behalf, not to mention the privacy and confidentiality issues that may arise from it.

He says:

Of course, AI can carry out legal research. But there is a big caveat about it, because while it can be very useful in finding legislation and reporting on caselaw, it does need to be double-checked, because it is very much a people-pleaser. Therefore, it will return results that may not be accurate, but which will look as though they are plausible.

Reynolds adds:

There is a saying that 'the only thing worse than not knowing what the law is, is knowing the wrong law'. So, because of the way ChatGPT is trained to provide an answer come what may – and may provide a plausible answer – it always needs to be cross-checked.

But that is a massive challenge when, as he puts it, people want to "use AI as a knowledge bank for all information". At present, it is clearly not safe to see it as that. And, as the Stanford University findings suggest, the problem may not be a lack of accurate training data – after all, the legal sector has centuries of detailed precedent – but Large Language Models' (LLMs') tendency to hallucinate even when trained on trusted, authoritative sources.

Put another way, AIs' simulation of human-like intelligence can be seductive, but it is dangerous to trust it without the kind of fact-checking and grunt work that overworked lawyers would prefer to avoid. In this sense, AI poses risks to any time-poor professionals who are looking for a short cut.

A sector that is little heard from in AI discussions is architecture and industrial design. Adrian Malleson is Head of Economic Research and Analysis for the Royal Institute of British Architects (RIBA), which has 50,000 members in the UK.

He says:

Our profession straddles several different industries. We are part of the construction industry. We are a professional service. We are part of the creative industry. And we are a service industry, working with people.

But increasingly, we are also part of the manufacturing sector, if you think of the complexity of building products that are now required to fulfil architects' design intent. And we see that element growing, getting closer as AI develops.

In this sense, Malleson sees AI as both an opportunity and a threat:

The architecture profession has a strong record of adopting innovative technology. We see that with computer-aided design, and then with Building Information Modeling. In 2024, around 41% of our members had already used AI for at least the occasional project. But in 2025 that has grown to 59%.

Impressive. But he adds:

Some of that use is relatively superficial, but some is deep and highly innovative. That includes rapid iterations of design, testing a design for its effects on both the building and the wider context, which is societal and city-wide.

An advantage, then. So, what are the threats?

So far, only 3% of practices have seen staff reductions, but 18% expect reductions in the next two years. But the biggest risk that our members perceive is that of imitation.

There's AI scraping the internet for existing buildings and designs, which are then offered up in rehashed versions through AI. Plus, there is the risk of providing data to AI models and that data being reused. And consequently, there are also risks of displacement.

And I think the latter is a risk that may be shared among all professions, in that the role of privileged knowledge holder may be under threat through information being available and processed more widely.

This is a valuable perspective. While the arguments for and against scraping copyrighted data in, say, fine art, illustration, music composition, filmmaking, and book authorship have been well rehearsed, the concept of scraping an entire building – and the vast, big-ticket creative work and engineering calculations that go with it – needs greater consideration. And that is especially true if a generated design proves to be unsafe because it contains dangerous structural flaws.

But the flipside of this is that trained, ethical, experienced professionals might use AI to work more efficiently, Malleson says:

There are clear benefits on offer, including the 50% of our members who see AI as an opportunity for the profession to meet the demand for more and better buildings. A lot of buildings are required, particularly overseas, and we all know that our building stock needs significant work. Indeed, we are in the midst of a housing crisis.

But there is a strong belief that AI won't displace certain attributes, and these are around professional judgment – though I think that is generalizable across a range of professions. There does seem to be a perception – and we'll see how it plays out – that AI, at least as it currently stands, won't replicate or replace professional judgment.

And similarly, in the creative industries generally, there is a persistent belief that AI models can't be – and will never be – creative.

So, does Malleson share that belief? He fails to address the question directly, but raises some interesting points:

A long-term threat is the perceived constant of the need for human responsibility. Someone must be responsible for a building design – Grenfell [a poorly specified UK tower block which caught fire, causing the deaths of 72 people] makes that very clear. Lack of responsibility has significant and life-threatening consequences. And that is unlikely to go away.

Similarly, there's a strong feeling that human relationships and client management won't be replaced, and that the human relationship element can be very broad.

Think of the competing needs and demands that people have for buildings: the demands of the clients, of people in the area, of long-term sustainability, and the demands of government priorities. These are all competing, and it is often a human architect who mediates between them.

My take

Three fascinating perspectives, which expose the myth that simply buying a technology fixes complex problems – for professional sectors, and also for ambitious countries.
