Agentic AI and Customer Support - learnings from 'Customer Zero' as Salesforce talks hearts and brains, customer deflection mindsets, and moving on from a chat bot UX
- Summary:
- The agentic wars are still less than a year old, but some early adopters have battle scars to show already, as well as positive learnings for others to emulate.
Since the agentic AI arms race kicked off - was it really less than a year ago?!? It seems so much longer - the hunt has been on for real-world use cases to validate the theoretical promise of humans and agents working in tandem.
A lot of attention has focused on Customer Support and Service as an obvious area of application for the technology. As Katy Ring notes here, Salesforce, for example, has made itself into its own use case, deploying its Agentforce offering via its help.salesforce.com support page. That passed a significant milestone earlier this week with the announcement that a million interactions had now been handled in this way.
It has, as noted, been a relatively short time since this experiment was put into play, but such is the speed of agentic acceleration that a lot of learnings have already been picked up along the way. Primary among these is the ongoing importance of Human Intelligence remaining in the loop and on offer to those who don’t want to engage with a digital agent, or who find that their query is not one that agents are best suited to handle.
This really is a critical message to get across for all organizations. Agents in customer support in many respects start on the back foot following the essentially failed generations of chat bots that came before. Let’s face it - no-one likes dealing with chat bots. Limited by the technology used to develop them, such bots have all-too-often been built as a sort of ‘first line of defence’ to shield an organization from having to talk to customers who have issues and problems that need venting.
That mentality doesn’t fit the agentic era, suggests Bernard Slowey, SVP Digital Customer Success at Salesforce, who says:
There's a word in this industry that companies use that breaks my heart, and it's 'deflection'. Whenever I hear a customer say, 'Yeah, we're going to do this because it's going to deflect volume’, I think they're coming from a mindset of just wanting to drive out costs…I think if you go at it from a deflect volume stance, then people do stupid things. They make it hard to get to the human.
For Salesforce, the use of agents was about launching a new channel on the corporate Help portal to try and get people to an answer quicker, he insists. It’s also about being seen to ‘eat your own dog food’, he admits:
My team always have pride in being ‘Customer Zero’. We should be the best showcase of Salesforce products, so that we can talk to our customers about it, but also give feedback to engineering and product about what's not working. How do we make this thing better? That's been a big piece for us, just gathering these lessons learned along the way and sharing them back with our engineers.
Learnings
That being so, and with a nod to that landmark million conversations engaged with along the way, what sort of learnings might other organizations tap into?
First up, unsurprisingly, is the need to make sure that the Human Intelligence factor is in play. Sometimes you just want to hear, ‘I’m sorry - let’s get you straight over to a Support Engineer’, Slowey says:
When we implemented the agents, we very much focused on what we call ‘the brain’. Does it have the right content? How is the metadata? Is the answer factually correct? A couple of months in, what I realized we missed is that in any service industry you train your [human] Support agents on how to deliver service, how to show empathy. So we went back and re-wrote our prompts, took some of the training that our Support agents go on, that soft skills training, weaving that into the process.
That was a big learning - it's not just about the brain, it's also about the heart. Are you bringing that? Because otherwise it just feels robotic. It feels like an old school chat bot.
Salesforce still has work to do here, admits Slowey:
One example we saw where customers were still thinking it's an old school chat bot is if you land on a piece of content and you see the 'Ask Agentforce’ icon, clicking that button still feels a little bit like an old school chat bot UX. We actually see a metric that we measure, something called ‘abandons’, which is when people have clicked on the icon, Agentforce greets them, but they just drop out.
When we first launched, our abandonment rate was at about 26%, customers who were just shutting it down and saying, ‘Get this thing out of the way’. To my mind, that's because they're like, 'I hate chat bots and this is one of those'. So what we did was we re-imagined the Help home screen, which is now more of an AI experience. It feels more like a ChatGPT type experience. Now our abandon rate is down about eight or nine percent.
So I think a lot of it is the UX and UI, and we still have more to do with those to help it feel more modern. But I do think people are super frustrated with crappy chat bots, and it's like, 'Oh no, I don't want to ask this thing a question. I'm just going to drop out of it'.
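For readers tempted to track something similar, here is a minimal sketch of how an ‘abandons’ rate of that kind might be computed from an interaction log. The session fields and event model are assumptions for illustration only, not Salesforce's actual instrumentation.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One agent conversation; field names are illustrative, not Agentforce's."""
    session_id: str
    greeted: bool          # the agent displayed its greeting
    user_messages: int     # messages the visitor actually sent afterwards

def abandon_rate(sessions: list[Session]) -> float:
    """Share of greeted sessions where the visitor never engaged."""
    greeted = [s for s in sessions if s.greeted]
    if not greeted:
        return 0.0
    abandoned = sum(1 for s in greeted if s.user_messages == 0)
    return abandoned / len(greeted)

# Example: three greeted sessions, one abandoned
sample = [
    Session("a", True, 2),
    Session("b", True, 0),
    Session("c", True, 1),
]
print(f"{abandon_rate(sample):.0%}")  # 33%
```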
Content
Having the right content is also crucial, he notes. In common with other thought-leaders in AI, Salesforce has made the point repeatedly that one of the biggest barriers to successful adoption is not having a clean and solid data foundation on which to build. Agents can’t work properly if they can’t tap into the right data to tackle problems and provide answers. So putting your data house in order is the first step to agentic success.
Slowey says Salesforce has run into situations where agents didn’t have the right content to hand, and equally times when there was too much content to choose from:
There’s some new terminology that my team shared with me - content collision. This is where we had similar articles [in the knowledge base] so the agent struggled to know which source to pull from. We had to clean up a lot of content. It showed us, and I think it’s showing customers, that maybe your content is not in a clean hygienic state, and you need to go and fix that in order to have a great agentic experience.
Then we also had content gaps. People would ask a question but the agent wasn’t able to give a good answer. We realised, in one example, that our developer content was sitting on a different portal, so we quickly migrated that over into the Data Cloud so Agentforce could feed off it and start to answer developer questions better.
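As a rough sketch of what ‘content collision’ detection could look like in practice, the following flags pairs of knowledge articles whose wording overlaps heavily, using a simple token-overlap score. The sample articles and threshold are assumptions for illustration; this is not Salesforce's tooling, and a production cleanup would lean on proper retrieval-quality evaluation rather than this crude measure.

```python
from itertools import combinations

def tokens(text: str) -> set[str]:
    """Lower-cased word set; deliberately simple for illustration."""
    return set(text.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two token sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def find_collisions(articles: dict[str, str], threshold: float = 0.6):
    """Return article pairs similar enough to confuse a retrieval step."""
    toks = {title: tokens(body) for title, body in articles.items()}
    return [
        (t1, t2, round(jaccard(toks[t1], toks[t2]), 2))
        for t1, t2 in combinations(toks, 2)
        if jaccard(toks[t1], toks[t2]) >= threshold
    ]

# Hypothetical knowledge base with two near-duplicate reset articles
kb = {
    "Reset your password": "Go to settings and choose reset password to get an email link",
    "Password reset steps": "Go to settings choose reset password and follow the email link",
    "Set up MFA": "Enable multi factor authentication from the security tab",
}
print(find_collisions(kb))  # flags the two 'reset' articles as a collision
```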
Get out of the way!
It’s also important to know when to get out of the way and bring humans into play in an engagement. The ‘good’ thing about traditional automated Service and Support systems is being able to just demand, ‘Give me a human being, give me a human being’ down the phone over and over again until you finally get to talk to someone with a pulse. But by that time, you’re in a towering rage and not in a mood for a constructive engagement with whatever organization you’re talking to.
That’s a chat bot legacy that agentic tech needs to shake off, says Slowey:
We have a measure called ‘hand off to human’, which is how many people go from Agentforce to a human. At the start, that was about one percent and we were like, ‘Wow, this is amazing’. What we realized was we made it too hard to get to the Support Engineer. So now we're kind of around five percent some days or seven or eight. I think I should be OK with that, because what we've done now is you can literally start the conversation with ‘talk with human’, whatever language you want to put it in, and we will get out of the way.
We will create a case for you in Agentforce, or if you want, we will hand you over by a live chat with a Support Engineer. For us, it's always been about the agent and the human together. We have incredible Support Engineers, and sometimes they're the better option to solve [the issue] or maybe the customer just wants that. We want to give people choice.
And if you’re the sort of company that reckons that you can deploy AI and get rid of humans - hi Klarna! - expect to reverse your thinking pretty quickly - hi again Klarna! It’s about humans and agents working in a complementary fashion, Slowey says:
We believe we'll be in a world that will have both. [Since introducing Agentforce], we’ve seen an impact to our case volume, which has meant that we've been able to move Support Engineers into other parts of our business. We were very clear at the start, we were never going to reduce our workforce based on this. We were going to re-deploy roles. And so we did that because we're seeing some capacity open up because of the agent.
My take
The most important learning I’ve taken from the agent wars to date - and speaking from the entrenched position of someone who is rabidly prejudiced against any form of chat bot or automated customer deflection system! - is that agentic AI has amazing potential, but it’s not always going to be the right solution, particularly when you’re dealing with human beings. I find myself nodding firmly in agreement when Slowey says:
AI does some things amazingly well; it doesn't create relationships.
But I also have to concur with the perspective from this ‘Customer Zero’ that the time to experiment with the so-called Digital Labor workforce - sorry, Phil Wainewright! - is now, even if there is some understandable nervousness about the pace of change being advocated. Slowey fairly counters:
Humans make mistakes all the time, but we hold this technology to this higher bar. It's kind of like the self-driving car, right? It needs to be perfect before we kind of want it on the motorways, right? But humans have accidents all the time. But I think there is nervousness right now. It is, like, ‘What if I put it out there and it says something wrong?’.
That’s why you need to get your content clean and ready and put the necessary guardrails in place, advises Slowey. Make sure your agents know as much as they possibly can, but also know what they don’t know so they don’t start making up policy on the spot, as Air Canada found out when its chat bot erroneously told bereaved relatives to book flights and claim a refund afterwards. Slowey says you need to take responsibility for not exposing yourself to such risk by having the right principles and rules in place:
If you ask certain questions, the agent is going to say, ‘I can't answer that’ and potentially bring in a human. I think you need to be very deliberate on how you design these things, thinking about that experience at the very center of everything you do...Keep it to your content, put guardrails in place, then I think you start to get more confident, and you'll start to see more and more of agents go customer-facing.
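To make that design point concrete, here is a minimal sketch of the kind of pre-answer gate being described: keep the agent to known content, decline what falls outside it, and let an explicit request for a person short-circuit straight to a human. The topic list, trigger phrases and routing labels are assumptions for illustration, not Agentforce's actual behaviour.

```python
SUPPORTED_TOPICS = {"billing", "login", "data cloud", "reports"}   # assumed content scope
HUMAN_PHRASES = ("talk with human", "speak to a person", "live agent")

def route(user_message: str, detected_topic: str | None) -> str:
    """Decide whether the agent answers, declines, or hands off to a human."""
    text = user_message.lower()

    # 1. An explicit request for a person always wins: get out of the way.
    if any(phrase in text for phrase in HUMAN_PHRASES):
        return "handoff_to_support_engineer"

    # 2. Outside the curated content? Say so rather than improvising policy.
    if detected_topic not in SUPPORTED_TOPICS:
        return "decline_and_offer_human"

    # 3. Otherwise let the agent answer from grounded content.
    return "answer_from_knowledge_base"

print(route("I'd like to talk with human please", "billing"))   # handoff_to_support_engineer
print(route("Can you waive this fare rule for me?", None))      # decline_and_offer_human
print(route("How do I reset my login?", "login"))               # answer_from_knowledge_base
```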
Onwards!