Thomson Reuters CTO Joel Hron responds to AI opportunities and stock shocks
Summary: A partnership with Imperial College London points to a calm and intelligent adoption of AI for the information giant
Stock market falls make great headlines, but they rarely help the average reader understand the full nuance of the sector that is climbing or falling. This was certainly the case in the early days of February 2026. AI firm Anthropic announced a legal plug-in for its Claude generative AI copilot technology, and the stocks of stalwart information and technology services companies in the legal sector tumbled.
Among those impacted was Nasdaq-listed Thomson Reuters. Its CTO, Joel Hron, and I had an interview in the diary for the days following the stock market tumble. The occasion for our interview was a partnership that Thomson Reuters inked in December 2025 to conduct major AI research with Imperial College London. That partnership is central to the issue behind the stock market performance and, more importantly, to how digital leaders and their organizations consider, adopt, and use AI in the enterprise.
A day in February
Anthropic announced a legal plug-in for its Claude Cowork generative AI tool, giving enterprises an automated legal assistant for contract reviews, the drafting of non-disclosure agreements (NDAs), legal response templates, and more.
The legal plug-in uses the Model Context Protocol (MCP, an open standard for exchanging contextual information between AI systems and external tools) to provide a two-way connection between external systems and the AI toolset. Those external systems include the Box and Egnyte content management systems, Jira for project management, communication tools from Slack and Microsoft 365, and contract information from Pramata, a contract management technology.
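For readers curious what that two-way connection looks like on the wire, MCP is built on JSON-RPC 2.0 messages, with tool invocations sent as `tools/call` requests. A minimal sketch follows; the `review_contract` tool name and its arguments are hypothetical illustrations, not part of any real Anthropic or Pramata integration:

```python
import json


def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request of the shape MCP uses for tool calls.

    The tool name and arguments passed in are supplied by the caller;
    the MCP server decides which tools actually exist.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# Hypothetical example: asking a contract-management tool to review an NDA.
request = make_tool_call(1, "review_contract", {"document_id": "nda-001"})
print(json.dumps(request, indent=2))
```

The same envelope carries responses back from the external system, which is what makes the connection two-way: the AI tool can both query and act on systems like Box, Jira, or Slack through one uniform message format.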
This announcement led to a drop in the stock values of professional publishing and tech services firms Pearson, RELX, Wolters Kluwer, and Thomson Reuters, with the latter reportedly falling by 18%. Asked about these events, Hron says:
The stock market reaction was because of a lot of anxiety. It misses the point of the fundamentals of what drives AI forward. You can have general-purpose AI that can get you to 90% of an answer really quickly, but to get to 99.9% requires expert information.
One of our core differentiators is decades of expertly curated content in expert domains of the law, tax, as well as decades of software tools that experts rely on. We really believe these are critical ingredients to the use of AI in a trusting and explainable way. So the reactions of that week missed what a good AI system actually looks like.
Trust is a word that Hron uses throughout our interview, and it is central to the partnership with Imperial College. I asked if the approach of Thomson Reuters was about protecting that long heritage in content creation. Hron says:
It is not necessarily about protecting the content, it is to ensure that AI continues to prioritize the elements that are important to the industries we serve, these are: trust, truthfulness, auditability, and verifiability. So it is important to steer the development of AI in a direction that enhances these things rather than dilutes them.
Hron admits that 90% of an answer is good enough for many, but when it comes to the law, taxation, and accounting, the answer has to be completely accurate. That final percentage of accuracy is where Thomson Reuters and its research partners are focused, the CTO says, adding:
We want to bring transparency and to build trust with the users. I think there is potential to offer a more tailored service to the industries that we serve.
Some of that trust, he suggests, will come from the user knowing and trusting the system behind the AI. Generative AI applications rely on a system of different large language models (LLMs), and organizations need to know that the models are accurate, what the costs of using a model are, and whether there is a latency issue. He adds:
Our own models will plug into all three of these dimensions and help with accuracy, costs, and latency.
Thomson Reuters has 150 years of history behind it, and in the legal, tax, accounting, banking, and media industries, it is considered one of the foremost resources. As Thomson Reuters moves into the AI age, it describes itself as an AI company, and that heritage will become ever more important, the CTO says:
Provenance runs deep through the DNA of our company and what we consider to be important.
He adds that this is not a claim the new generation of generative AI firms can make:
The most cited resource for AI is Reddit. There is no lawyer in the world that is going to send a brief to a courtroom based on content from a Reddit forum.
If AI cannot be trusted, the risks to organizations are high, and enterprises need to know how this new way of working operates, he adds:
You need to understand how the model is reasoning. The trajectory of coming to an answer, and that really gets to the point of what a lawyer wants to know. It is not just what the answer is to a legal question, but what is the sequence of facts and logic patterns that the model traversed to get to the answer, because these are not ephemeral things that people make up. It is what is taught at law school as to how to build fact patterns, how to trace elements, and there is a good way to do that and a bad way to do that.
If I can’t tell a lawyer where the AI system reached a conclusion, if there is no traceability or auditability or accountability, then there is still a lot of work for the lawyer to do to find the breadcrumb trail and to reach a conclusion.
Hron’s point is well made: organizations tell diginomica that they are seeing small productivity benefits from AI, benefits that will easily be frittered away if staff have to double-check every piece of work. If you cannot see how the AI has worked, how can you check it? This would be unacceptable from a junior member of staff. Hron adds:
We hear all the time from our customers, if we cannot understand why an AI system says something, then it doesn’t save time. They then have to go and find all the information so they can stand up in court or speak to a client about an M&A transaction or deal with a tax situation. You have to be able to defend what you are saying and be able to support it.
AI research and change
In December 2025, a five-year partnership with Imperial College London was set up to create a joint Frontier AI Research Lab at the London university. The research will focus on the safety and reliability of AI and on training a foundational model with Thomson Reuters. Hron says the partnership will:
Bring an industry recognized and defensible research arm to transparency, auditability, and verifiability.
We also see it as a really positive way to build a community around these topics. The other bit is talent. We have our own R&D teams, and we see this as an excellent way to leverage the talent within Imperial and support their programmes.
Backing research projects is indicative of a company that sees the opportunity AI offers its business. Hron says:
A lot of people have viewed the legal, tax, and compliance industries as somewhat static and resistant to change. My experience in the last three years is that they are really hungry for change, and for different reasons. Legal has been really eager to change. They have a huge amount of information synthesis to do, and these models are very good at that, so there is a clear quality of work and acceleration of value.
On the tax and accounting side, they are really struggling with an aging workforce and fewer people are entering as CPAs, yet there is more challenging work to be done. The tax codes are only getting more complicated and not less so. They are really facing a crunch in terms of resources. AI is an enabler to fill that void.
He says Thomson Reuters continues to invest in being a change management partner to its clients, as organizations need a lot of support in the adoption of AI. He adds:
AI may be hyped, but it takes the fullness of time to realise the change that people are talking about.
My take
Stock prices are really only a measure of sentiment, and Thomson Reuters is right to focus on what matters most: the truth. In an age of polarization, misinformation, and “alternative truth,” the accuracy and provenance of information are more important than ever. AI has the potential to exacerbate misinformation or to be the tool that makes good information easily available. Starting, as Thomson Reuters is, within enterprise sectors is an important step.
It is also good to hear that Thomson Reuters is not just putting AI tools out there; with this research and its focus on change management, it is attempting to help organizations with the transformation AI requires. Asked about the now infamous MIT State of AI in Business 2025 report, Hron says it has been largely helpful:
It highlighted the point that change management is important. If there was anything bad about that MIT report, it was that it gives a lot of people an excuse to say, ‘a lot of people are not being successful with AI, so I don’t need to worry,’ and that is the wrong takeaway.
For those of us in the information sector and those who depend on good information, Thomson Reuters has always been a standard for quality and innovation. The challenge and opportunity is to remain so in an AI-led economy.