
International Women’s Day 2026 – why is there still a big AI gender skills gap, and what can employers do about it?

Cath Everett, March 6, 2026
Summary:
A recent report by recruitment consultancy Randstad revealed that women are much less likely to be offered AI upskilling opportunities than their male counterparts. So, what is going on here and how can organisations fix the problem?


There is a clear gender divide between those with AI skills and those without, according to a recent report by recruitment consultancy Randstad.

The research, based on input from 12,000 employees around the world combined with analysis of more than three million job profiles, indicated that men accounted for 71% of those with such expertise. Women made up only 29%, amounting to a huge 42 percentage point gender gap.

While the situation appears to be improving somewhat for the younger generation (66% of men versus 34% of women), it is certainly not the case among older people. In fact, a mere 21% of women (versus 79% of men) who have worked for more than 30 years claimed to be AI-proficient.

A widespread lack of training opportunities appears to be a major contributor to the situation. The study revealed that women were significantly less likely than their male counterparts to have been offered upskilling opportunities (34% of US men compared with 27% of women, and 43% of UK men versus 31% of women). They also felt less confident that any training they had received had set them up to use AI effectively in their jobs.

To make matters worse, only 35% of women said they were provided with access to the technology at work compared with 41% of men.

At the technical AI skills level, meanwhile, women were even more underrepresented. Only 18% of all software developers were female, 19% were involved in AI data processing, and 20% worked with AI-based cloud applications.

Why the AI gender disparity?

So, just what is going on here and what can employers do to fix the problem? According to Jessica Ridella, Global Technology Managing Director at IBM:

The disparity is less about capability and more about how AI systems are deployed and who they’re designed for. Adoption starts on the edge with tech teams and informal networks, and training mirrors those existing power structures.

Akber Datoo, Professor of AI and Financial Regulation at the University of Surrey and Co-Chair of the Technology & Law Committee at the Law Society of England and Wales, agrees. In many instances, he believes that training budgets are being focused on tech teams that are traditionally male, with insufficient funding being allocated elsewhere in the business:

I think there’s an element of safer, short-term bets, or return on investment (ROI), being placed by organizations. When budgets are tight, employers are going to optimize for a fast ROI and low delivery risk. They’re prioritizing teams that are already doing data-heavy work, cyber work, those already closest to automation and, therefore, perceived to have the technical confidence. And because the teams are more male-skewed, it unfortunately becomes self-fulfilling as a loop in terms of access…So, what you’re finding is that small allocation biases are causing large outcome gaps.

Another challenge, particularly in organizations that do not have clear AI use policies, is that women tend to be more concerned about the ethics of working with such tools, fearing they will be penalized professionally for doing so. Men, on the other hand, are often happier to “rush straight in” and “give it a go”, says Datoo, resulting in them building up experience with the technology more swiftly.

As to why older women in particular are falling behind in AI upskilling terms though, Ridella believes they face additional “barriers in terms of confidence and psychological safety”, especially if tools are framed as being very technical.

The implications of exclusion

But there are implications to this kind of exclusion, even if it is not meant to be actively discriminatory. Datoo explains:

Short-term, it’s about unequal productivity gains and unequal visibility as to who looks effective in what they’re doing. I think there are also big risks around shadow AI, so people using AI without disclosing its use. This creates risk and unequal learning by doing, and issues when people get it horribly wrong.

Into the medium-term though, he says:

You’re likely to end up promoting those that seem to have AI fluency. This means that not giving people early access on training becomes career-compounding. Adoption stays uneven. Tools and workflows are built around the users that show up. At the entry level, I do think AI will have a massive impact on the jobs available, so you’re not going to have people coming through the pipeline as well.

Into the longer term, Datoo believes there is a risk that chunks of the workforce will not have the necessary skills to operate effectively in an AI-augmented workplace. This, in turn, could result in employers:

Sleepwalking into all sorts of employment regulatory risks. If we can’t sort out access training patterns and how they map onto protected characteristics, such as gender and age, are we creating both indirect and direct discrimination risks?

Therefore, he advocates that organizations:

Stop treating AI fluency as an informal hobby. It’s hygiene. You need baseline training, safe tooling, and measured competence with inclusion very much designed in. We’ve got to start where the trust starts, so cyber and data basics are really, really important. Then giving permission to use AI and safe enterprise tools, thereby removing fear and shadow AI, which exacerbate the gender gap. Role-based AI literacy pathways would also be very useful. Making sure we’re designing AI literacy by use case is going to be really important. This means you train based on how people will use the technology in their day-to-day activities as opposed to letting them figure things out for themselves.

Democratizing AI access

Ridella agrees. She believes the quickest way to democratize access is to embed AI into workflows rather than make the technology available as separate tools, so that employees use it by default:

You have to ensure that AI experience and adoption aren’t limited to fringe groups as that just leads to more inequity. So, allowing AI experiences to happen organically throughout the company rather than just focusing access on tech people or small innovation groups makes a big difference. Also, putting AI in everything will reduce the gap and enable people to get hands-on experience of the technology.

David Churchill, Chief People Officer at digital transformation consultancy Version 1, takes a similar stance:

If I just take our Microsoft Copilot licences as an example, we’ve opened them up to everyone in the organization. We’re not choosing certain disciplines to get access and others don’t. There’s no gatekeeping, no selection process etc. Anybody can access it. But what we’re finding is that more women are taking it up proportionate to men. That doesn’t mean absolutely more women. It means as a percentage of all the women in the organization. This tells a really important story: if you remove the access barriers, women won’t lag behind. They lead. So, the willingness to get involved and use AI across all disciplines is there.

A key consideration here though, Churchill says, is that AI upskilling should be considered a change program rather than simply another training initiative:

It’s about leadership. The world of work is changing, and people need to be led through it. Skills have to be put through an AI lens and see how it changes what we do, and that involves redesigning roles and work. It’s not just about implementing new technology. It means ringfencing budget and allocating the time and space to allow people to learn in multiple formats. It’s also important to create psychological safety, and leadership is fundamental to that in demonstrating how it’s done.

Another important point here, he believes, is using accessible language so as not to scare non-techies off:

It’s about framing AI as not being something that only highly technical people can work with. So, when offering upskilling opportunities, it’s about moving away from using technical language like ‘data science’ or ‘machine learning fundamentals’, and using everyday language, such as ‘AI literacy’, instead.

My take

The key takeaway here appears to be that low levels of AI adoption among women have less to do with a lack of confidence or an unwillingness to work with the technology, and much more to do with access to safe opportunities to do so. And a key way to provide such opportunities, even if training budgets are tight, is to build AI into workflows so that it simply becomes part of the job.
