Tencent Summit – why general Large Language Models and chatbots "no longer meet business needs" for enterprise Artificial Intelligence

Chris Middleton – September 18, 2025
Summary:
Someone finally says it – and at a massive Chinese user conference. So, what else did we learn?

The pizazz feels welcoming and familiar: the expectant crowd filling a hangar-sized convention hall; a stage the width of a football field; the pounding music and widescreen visuals; the discreet plumes of stage smoke, allowing pencil-beams of light to strafe the air; the headset mics and welcoming MC; and – of course – the C-level 'fireside chat' patois, in which every sentence begins with "So…" and ends with, "right?" ("So, we're building out our capability, right?")

We have all been to these huge tech-industry events. But this isn't some rock star US vendor welcoming customers to the Moscone in San Francisco or the Javits in New York. It is Chinese multinational Tencent kicking off its annual summit in Shenzhen, the high-tech megacity just north of Hong Kong – the exact point on Earth where the old world ends, and a new one begins.

Tencent has been an entertainment and internet company throughout this fast-maturing century, but you would think that it is now an AI specialist, given that even the cloud-focused sessions at its 2025 Global Digital Ecosystem Summit (September 16-17) are dominated by talk of AI, AI, AI (with a side order of 'superapp').

The event explores how coding assistants and AI agents can automate tasks across industrial use cases, while optimized cloud-based network storage can improve efficiency for AI model training and inference. Throughout, the focus is on international markets – or rather, internationalized ones (aka, go local to be global). But the message is clear: Tencent is competing on the world stage. And, beyond it, China is too.

Yet while China is sometimes seen as being six months behind the US in enterprise applications of the technology, there is a clear suggestion that, on one topic at least, China is neck and neck with America: recognizing that the AI hype cycle is ending, and a new era of realism and pragmatism is kicking in.

If you listen carefully, the event's showbiz pizazz and upbeat messaging about AI in the cloud – on the conference stage, in the panel-session huddles, and in the side-room press briefings – are punctuated with acknowledgements that hype is counter-productive when you seek evidenced business results.

Common sense and reality are creeping back in. And not before time.

China calls time on general LLMs

The first evidence of this comes from Eric Li, Director of AI Commercialization for Tencent Cloud, in a pre-conference media briefing. Lurking in his presentation slides is a bold and simple statement: 

Traditional chatbots, general LLMs, and even optimized systems no longer meet business needs.

There it is, in blue and white, for the world's media to see.

Of course, this has been the subtext of research studies in the West this autumn, most famously MIT's finding that 95 percent of projects return no measurable business benefits. Yet those same big-name chatbots and LLMs have driven popular uptake and investment dollars, which is why nerves are jangling in the US and Europe about an AI winter.

I ask Li about this, in the context of his own presentation, which focuses on helping businesses achieve what they need to do in the best, most localized way possible.

Li tells me:

I think every company in this era has different perspectives on the development of large-scale models. But at Tencent, we always prioritize commercial applications and deployments. Our product team focuses on enterprise-level platform applications.

So, why do we say that traditional chatbots are unsuitable for enterprise applications? It is based on the customer's needs. Some companies just can't afford to make mistakes!

When a model interprets something, there's the potential for bias. So, we need to ensure it doesn't have this bias. Accordingly, we are working on an intent model, which is designed to always understand the user's intent.

The internal intent model has hundreds of detailed nodes. For each node, the agent analyzes the intent. Based on each node, the agent generates a different workflow. Therefore, the tolerance for error in intent understanding is extremely low. Our intent model prioritizes the most correct answers.

DeepSeek's reality check

And when it comes to the big picture, China can take some credit for the outbreak of bubble-pricking realism in enterprise AI. DeepSeek's arrival at the beginning of this year makes a nonsense of some US vendors' demand for trillions of dollars in data center capex to run a subscription business that is an order of magnitude smaller. And that is before we even get onto the energy, water, and carbon costs.

DeepSeek reminds the world that China can build things faster, cheaper, and more efficiently than the West – after all, that is why its economy has exploded in recent decades. If you doubt it, remember this: Shenzhen, once a modest fishing town, is now the sixth largest city in the country – a megacity among megacities, with a population nearly 600 times larger than it was in the final years of the 20th Century.

And guess what? Everything in that city is clean, works, and is designed around its people. Even the Summit's estimated 10,000 delegates stream into the event without queuing, despite passing through airport-style security checks and scanners – an achievement in itself.

While super-powerful US AI vendors' message to the world has been 'You work for us now, we have scraped all your data, we own your IP by default', Chinese vendors typically say, 'We work for you and we are open-sourcing all this so you can build your own solutions.' It's a good response.

An oversimplification, of course. Yet evidence of a sea-change in the AI market is all over Chinese breakfast TV news this week. Another vendor, Alibaba, launches the latest version of its Qwen Large Language Model, plus the Tongyi Deep Research Agent. Both offer impressive results and speed – "ten times faster, 90 percent cheaper!" say news anchors – yet the markets barely move until that company also announces a chip deal with China Unicom.

America, take note: bigger, faster, cheaper AI models are yesterday's news in this country. China is now coming for your AI processors. So, would you bet against it?

Beyond model improvements

But back to Tencent and the Summit stage in Shenzhen. There are further outbreaks of realism about AI, even among the supportive, upbeat guests, partners, and customers, who speak – with apparent sincerity – of Tencent's "humility" in one-to-one business meetings, despite the chutzpah of the event itself.

Catherine Sutjahyo, Group Director of Indonesian digital ecosystem provider GoTo, notes that "Indonesia hasn't found a groundbreaking application" of AI yet, but has a "willingness to try". Sutjahyo adds that "everybody should look at this AI and say, 'How can we use it?'" – surely evidence of the lasting influence of AI hype rather than businesses starting with an urgent business need and proceeding from there.

Lolaire McKinnon, Head of Cloud for racing organization the Hong Kong Jockey Club, notes that generative AI is "cool, but not life-changing". McKinnon also acknowledges the rise of shadow AI among users, and the challenge of employees sending out material that is obviously AI-generated. All familiar challenges in the West, but good to hear such issues called out on a conference stage.

Mikael Suvi, Chief Technology Officer (CTO) of digital games provider Miniclip, observes that AI "is not superhuman, it's just maths", and advises users to think about it that way. Quite right too: talk of superhuman machine intelligence, and genius, PhD-level AIs has – to date, at least – been obvious bunkum. Just marketing noise from the likes of Anthropic and OpenAI.

Even Poshu Yeung, Senior Vice President of Tencent Cloud and Head of Tencent Cloud International, notes that AI is a buzzword, but what everyone is really talking about, Yeung says, is the idea of the superapp. In mainland China, that is Tencent's own WeChat and the Weixin ecosystem, used by most of the country's 1.4 billion population. Tencent's strategy is to export the enabling technologies for these innovations, rather than the super brand itself.

Meanwhile, in the panel on the financial sector's response to cloud-based AI, Vince Iswara, Chief Executive Officer (CEO) and co-founder of Indonesian digital wallet DANA, notes that AI is all about "diamonds in, diamonds out". In other words, you get out whatever quality you have put in, and not what some vendors have simply scraped off the Web.

Yet as one of my earlier reports this month noted, even LLMs that have been trained on trusted, industry-specific data sets are prone to hallucinate. So, even if you trust your training data, you should always double-check your AI's output against it.

In short, resist the urge to be lazy. In that regard at least, China is showing the way.

My take

More from Shenzhen in my next report.

Image credit - © TippaPatt - Shutterstock
