
Executive Intelligence podcast - IFS’s Kriti Sharma on harnessing AI for good and for industry

By Phil Wainewright, November 25, 2025
Summary:
What's the real-world impact of AI? In this podcast, Kriti Sharma talks about making a difference with AI, and her hi-vis field work as the new CEO of Nexus Black, which helps IFS customers bring AI to industrial settings.

 

Kriti Sharma has led AI teams in banking, legal and software firms. But for the past few months, she's been donning a hi-vis gilet, hard hat, boots and safety glasses to bring AI to vastly different industrial settings. She's brimming with enthusiasm for her new role as CEO of IFS Nexus Black as she joins me for this Executive Intelligence podcast, which you can listen to in full by clicking on the link above. She says:

I'm so inspired right now, Phil, by building AI on the edge, in the plants, at the sites, out in the fields. Somebody's hanging out doing their work on a transmission pole. Somebody's on an offshore farm. Just trying to get the work done and keep the assets alive out in the field.

Those problems are fundamentally different to creating AI solutions for people who work at their desks, like you and me. These problems are, in many ways, a lot harder and a lot different. I do believe AI in this industrial world, and the opportunities it brings and the value it can create, is under-hyped, which might be a controversial thing to say, given how much people are worried about the hype around AI and the bubble.

I think we're not truly understanding the extent today of this invisible revolution, where AI is keeping the compliance work we're doing, for example, in commercial aviation that keeps passengers safe, or the work we're doing in optimizing the servicing of plants and sites and assets, and broken things out in the field to get them right the first time. This is really bringing in the power of AI in the hands of the technicians, the engineers, the people who make the world go a bit more smoother.

The setting may be different from her earlier roles, but the determination to make a difference is just as strong. Sharma is something of a high flier, the recipient of a Google Anita Borg Scholarship while studying for an MA in Advanced Computer Science at St Andrews University in Scotland, and subsequently named a UN Young Leader and an advisor to the UK government. She's led the creation of AI and data products at Barclays Bank, Sage Group and Thomson Reuters, and also found time to set up the non-profit organization AI for Good in the UK in 2018. Earlier this year, she became CEO of Nexus Black, a new organization set up within enterprise applications company IFS to co-create industrial AI solutions with customers.

This isn't a traditional consulting function, she hastens to point out, more of a Forward Deployed Engineer (FDE) service that pioneers products for IFS, built around customer needs, and able to evolve over time. She tells me:

If you were wanting to hire a consultant, I would be the worst consultant on the planet. You've seen my background. I don't know how to do that. I'm a builder of products that scale and solutions that work. So we go on the ground, we learn, understand the needs. We deploy that new product. They become part of the wider IFS offering over time and we continue to invest in those, versus bespoke or consulting type solutions...

We have to be able to find solutions that adapt and iterate, and it's not a one-and-done, you come here and you build me a solution, and you go away. It just doesn't work, because then we're left with an old technology that might be, I use the word legacy, a month later, and that's what we are fighting against.

It's also important to make sure these are reliable, trustworthy solutions, adhering to principles that Sharma once outlined in a TED talk on Responsible AI. She's highly aware of the need to manage how autonomous systems are introduced:

We don't go full autonomy, full autonomous systems, fully agentic systems, on day one. I do think people who do that, companies who do that, without thoughtfully looking into the implications, are being reckless. We don't do that.

We identify a process that needs to be automated. We define a very clear evaluation set of what would make it reliable, compliant, safe, scalable, right? And when we define this evaluation set — if it hits the evaluation criteria, it goes out in the world. [If it] doesn't hit the evaluation criteria, we continue to iterate.

There needs to be respect for the knowledge that workers on the ground have built up, she goes on, and for AI to work with them rather than imagining it can take over from them. She says:

The problem here, especially in the industrial world, is not to remove the people. It's to bring the best of the human knowledge to be able to do that work. And I don't want anyone to take away that the only way AI works is when they're fully autonomous agents. 'We've achieved AGI!' I mean, there's so much obsession with that term. I just don't think it applies in the industrial world in the same way...

The most important thing that will come out of this AI revolution is it will remind us what it means to be human and how we preserve that.

There's much more on these themes in the full podcast - listen now. 

Note: you can also subscribe to diginomica podcasts and hear the rest of our Executive Intelligence podcast series here.
