AI needs to be inclusive by design – here’s how the NHS, Microsoft and GoFibre think it can be done

Madeline Bennett, March 30, 2026
Summary:
Audits, augmentation and appeals processes on the cards.

AI is having a dramatic impact on the workforce, reshaping roles, improving productivity, and in some cases causing job losses. As companies transform for the AI era, new challenges are emerging, such as eliminating bias, alongside new opportunities.

At Tech Show London, an all-female panel of AI developers discussed how to ensure diversity and inclusion are an integral part of AI technologies, and how current problems can be overcome.

Shifting DEI (Diversity, Equity and Inclusion) away from being a tick-box exercise is an important step in ensuring AI is inclusive by design, according to Taonga Banda, Cloud Solution Architect, Data & AI at Microsoft. She says:

When you talk about diversity, when you talk about inclusion, a lot of the times organizations will consider it at the end, or after something's already been built, and now something's gone wrong, and now we need to change things.

Mind the gap

Instead, companies need to take a step back and first think about the data being used to develop AI. This should cover how data is acquired, whether it's being gathered from a diverse pool, and acknowledge any gaps in the data. Where there are gaps, it's about deciding how to fill or manage those for the Large Language Models (LLMs). Banda adds:

What is your data supposed to be representing? Is it supposed to be representing bias? Is it not supposed to be representing bias? Because we're all from different industries, and in different industries there might be certain biases that you actually do want in the data.

Understanding any biases that might be in your data is important, but this goes broader than just the data. Banda adds:

When you talk about inclusion, we're not just talking about data. What about the application of the actual LLM. So when people are interacting with your LLM, whether it's an agent or an application, have you made it in such a way that anyone with any ability can actually interact with this tool?

Following the model of secure-by-design principles would be a good measure for inclusive AI development, according to Banda. She says:

When we talk about solutions or we talk about architecture, it's usually security by default or designed by security principles. What I want to see, especially when you talk about inclusivity and AI, is having diversity by design. So when you're designing something, implementing it from the very beginning.

An area of AI development that concerns Chetna Arora, Head of Software Engineering and Data Science at GoFibre, is the lack of formal auditing for AI-driven workplaces. Arora has been working in the software industry for 24 years, starting out as a Java developer and working for big companies like BT Plusnet and Tata Consultancy Services, before moving to a smaller Scottish company. She notes:

We are now doing things very differently from when I started. We really haven't got any form of audit.

In the telecoms market, for example, the UK regulator Ofcom would be scrutinizing a company’s work and carrying out audits to check for any wrongdoing. Arora says:

We haven't got any bodies that have been defined from an artificial intelligence perspective, where I would get a change I've implemented in my software that would get audited, and I would fear that the audit is going to fail and my product will not launch in life. I am not fearing that. And that is where the organizations are falling short. That's what we actually need. And for those audits, whether quarterly, monthly, yearly, every now and then, there are KPIs that get measured alongside as well.

While such audits don’t exist yet, Arora predicts they will be in place by the end of the decade:

In 2029, we probably all will find ourselves getting marked against it in a way where we think about whether my application would actually pass an audit before I actually make it through, setting the timelines and timescales. It will probably all be written-down policies by then already.

In the loop

To ensure AI is fair for all, there needs to be ongoing human oversight of any tools, and transparency over their use, according to Banda. She explains:

People always talk about human in the loop, and I'm all for human in the loop. However, I'm all for human in command. When we're dealing with high-risk cases, it could be hiring, it could be even in hospitals as well, we need someone there that's going to hone in what the LLM is actually doing, the decision-making.

If organizations are making decisions based on an LLM, there should be an appeals process and it needs to be clear how people can appeal. Banda adds:

It’s also making sure that people know what the LLM is doing, what data is being sent. There's some stories about different organizations and different systems where people don't know that data is actually being sent to an LLM and a decision is being made by an LLM. In that case, we need to be transparent, we need to be accountable. Also having a page with how the data model is being trained, where is it hosted. [It’s] security-by-design, diversity-by-design.

AI brings some advantages around equity and inclusion, including the democratization of knowledge. In places where traditionally people wouldn't have the opportunity to do certain things or gain access to insight and education, now they have access to knowledge. Banda says:

Because I'm originally from Zambia, I know now children have access to different textbooks, they have access to different research. Instead of them having to go into a physical library or to buy something, instead they can use ChatGPT or a co-pilot, and they can get access to that information. The range of education, the range of digital literacy is increasing.

Role reversal

Another benefit that AI brings for DEI is that work is moving away from being role-based and will be more skill-based. This will open up opportunities to a wider range of people in the workplace. Anupama Hatti, Head of Programme Delivery for Digital, Data & Technology Services at the UK NHS Blood and Transplant, says:

That helps flatten hierarchies. There's no more traditional conventional hierarchies, and teams are very multidisciplinary. Everyone brings a new skill on the table, and it becomes about that multidisciplinary team.

However, the flip side to this is the impact AI is having on entry-level roles and admin roles, which means people need an extra hand to get into the workplace and progress. Hatti notes:

You can't expect people who have just graduated or who have just finished a course to come in and become a leader. They need that experience. They need to develop. It's a responsibility of all of us currently working on AI to create those opportunities for people to learn and grow and develop and get into those leadership positions where they've gained experience and they've grown through it, rather than just learn something from AI and just try to implement it. That lived experience is very important.

There would also be a benefit from people focusing more on staying in control of AI rather than worrying about its potential to replace us. Hatti says:

In 2029, AI will no longer be a thing that's going to happen, a thing that will be part of our lives. It will be a reality at that point. We as people need to think about not doing things just the AI way, but then think about AI as something that augments what we do. How can it make our life easier rather than thinking that it's going to replace us. We prepare ourselves for that rather than being worried about AI.

At GoFibre, diversity is being introduced into AI products via customer feedback. Twenty years ago, it’s unlikely that software engineers and developers would have gone out and talked to customers to get their views. Arora explains:

We're thinking about our products differently, we're now thinking of when this product lands with the customers, how are we going to get the feedback. I've developed MVP, and it's just a version one that's gone live. When I'm thinking of future versions, the feedback that now we're getting is very different than what we got 10 years ago, because we've embedded all of these diverse feedback from our customers into the product that we are building, we've opened those channels. The leadership team has given us tools to go out there, speak to our customers.

My take

Inclusive-by-design AI should be something all organizations strive for, as use of the technology grows and cases of bias continue to surface.
