SUSE – a global leader in enterprise Linux, Kubernetes, and open source infrastructure – announced three updates to its cloud native platform last week. The one that draws the most attention is also the most straightforward to explain: Liz, SUSE's AI agent embedded in Rancher Prime, just got considerably smarter — and considerably more extensible.
Liz – the name is short for lizard, a nod to the gecko that has become something of a SUSE mascot – started life as an assistant primarily focused on answering questions about the Rancher environment itself. At KubeCon, SUSE announced that Liz has been substantially rearchitected. Rather than a single engine, Liz now operates as an orchestration layer coordinating a distributed set of specialized agents – covering fleet management, security, observability, and Application Collection, with more in progress.
In conversation, Peter Smails, General Manager of Cloud Native at SUSE, describes the practical effect in terms of cognitive load reduction rather than AI capability:
What's my CVE posture? Are my applications healthy? Liz went off and looked at – through SUSE security – you've got a couple CVEs on these two applications. Would you like me to see if secure versions are available in application collection? Great. I went off and found these. Here are the Helm chart comparisons. Would you like me to basically update? Done.
Common Vulnerabilities and Exposures (CVEs) are publicly disclosed security flaws in software – the kind that security teams track and prioritize for patching. Helm charts are the standard packaging format for Kubernetes applications.
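The remediation flow Smails describes – find CVE-affected applications, check a hardened catalog for newer versions, propose an upgrade – can be sketched in a few lines. This is an illustrative stand-in, not SUSE's actual API: the data, function names, and version scheme are all hypothetical.

```python
# Hypothetical sketch of the check Liz performs: for each deployed application
# with known CVEs, see whether a hardened catalog (standing in for SUSE's
# Application Collection) carries a newer chart version to propose.

def parse_version(v: str) -> tuple:
    """Split a semver-style string into comparable integer parts."""
    return tuple(int(p) for p in v.split("."))

def propose_upgrades(deployed: dict, catalog: dict, cves: dict) -> list:
    """Return (app, current, candidate) tuples for CVE-affected apps
    where the catalog offers a newer chart version."""
    proposals = []
    for app, current in deployed.items():
        if not cves.get(app):          # clean CVE posture: nothing to do
            continue
        candidate = catalog.get(app)
        if candidate and parse_version(candidate) > parse_version(current):
            proposals.append((app, current, candidate))
    return proposals

deployed = {"frontend": "1.2.0", "billing": "3.0.1", "cache": "6.2.0"}
catalog  = {"frontend": "1.2.4", "billing": "3.0.1", "cache": "7.0.1"}
cves     = {"frontend": ["CVE-2025-0001"], "cache": ["CVE-2025-0002"]}

for app, cur, new in propose_upgrades(deployed, catalog, cves):
    print(f"{app}: {cur} -> {new} (secure version available)")
```

The crucial step in the product, of course, is that the proposal is surfaced to the operator before anything is changed.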
The human-in-control model is deliberate throughout – Liz proposes, the operator decides, Liz executes. Smails is explicit that SUSE is not "flying out there on the cutting edge trying to convince customers we're AI." The goal is a more usable platform, with AI as the mechanism rather than the message.
What is new – and what SUSE considers its most distinctive move – is the extension of that ecosystem to third parties via Model Context Protocol (MCP). Organizations can now expose their own agents to Liz, connecting internal systems such as ticketing platforms or custom tooling without writing integration code. An operator can ask questions through Rancher about their specific environment, drawing on data from external tools as naturally as from SUSE's own agents. The bring-your-own-model option, announced previously, adds another dimension to this openness. In the first release, Liz ships with three MCP servers – Rancher, Fleet (SUSE's GitOps tooling, which manages infrastructure configuration through code repositories), and a cluster provisioner – with more announced in the weeks following KubeCon, including Linux server management and SAP integration.
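The MCP pattern described here – a server advertises named tools, and an orchestrating agent discovers and calls them by name – can be mimicked in plain Python. This is a minimal sketch of the shape of the exchange (MCP's `tools/list` and `tools/call`), not the real protocol, which runs over JSON-RPC; the class, the ticketing tool, and all names are illustrative.

```python
# Minimal stand-in for the MCP tool-exposure pattern: register functions as
# named tools, let an agent discover them, then invoke them by name.

class ToolServer:
    def __init__(self, name: str):
        self.name = name
        self._tools = {}

    def tool(self, fn):
        """Register a function as a callable tool (used as a decorator)."""
        self._tools[fn.__name__] = fn
        return fn

    def list_tools(self) -> list:
        """What an orchestrating agent sees on discovery."""
        return sorted(self._tools)

    def call(self, tool_name: str, **kwargs):
        return self._tools[tool_name](**kwargs)

# A hypothetical internal ticketing system exposed without bespoke glue code:
ticketing = ToolServer("internal-ticketing")

@ticketing.tool
def open_ticket(summary: str, severity: str) -> dict:
    return {"id": "TICK-1", "summary": summary, "severity": severity}

print(ticketing.list_tools())
ticket = ticketing.call("open_ticket", summary="CVE remediation", severity="high")
print(ticket["id"])
```

The point of the standard is that the ticketing team only writes the tool; the discovery and invocation plumbing is the protocol's job, which is why SUSE can claim "no integration code."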
The extensibility is what we find most unique. This is not an add-on product. This is all just about enhancing the user experience and making it that much simpler.
Seeing Liz in action at the SUSE stand makes the cognitive load argument concrete. A broken application instance (pod) in a namespace surfaces a problem; clicking "Ask Liz" sends the context automatically, Liz diagnoses the issue and proposes a fix, and the operator confirms or declines. The system prompt instructs Liz not to hallucinate, to use real Kubernetes API data, and to always validate before acting – human confirmation is on by default for any create or modify action, though it can be turned off. At SUSECon next month, SUSE plans to demonstrate Liz scanning for CVE-affected deployments, cross-referencing Application Collection for hardened equivalents, and swapping them out automatically. The observability integration goes further still: SUSE's observability tooling dynamically maps network traffic pod to pod, logs every Kubernetes change, and surfaces topology views that Liz can then interrogate directly to trace root cause – without the operator needing to hunt through logs manually.
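The confirmation gate in that demo follows a recognizable pattern: read-only actions run freely, while anything that creates or modifies resources waits for the operator. A minimal sketch, with entirely hypothetical action names and an approver callback standing in for the human:

```python
# Sketch of a human-in-the-loop gate: mutating verbs need operator approval
# (on by default, but able to be switched off), reads do not.

MUTATING_VERBS = {"create", "update", "patch", "delete"}

def execute(action: dict, approve, require_confirmation: bool = True):
    """Run an action; mutating verbs need approval unless confirmation is off."""
    if require_confirmation and action["verb"] in MUTATING_VERBS:
        if not approve(action):
            return "declined"
    return f"executed {action['verb']} {action['target']}"

diagnose = {"verb": "get", "target": "pod/broken-app"}
fix      = {"verb": "patch", "target": "deployment/broken-app"}

always_yes = lambda a: True
always_no  = lambda a: False

print(execute(diagnose, always_no))   # a read runs without asking
print(execute(fix, always_no))        # the proposed fix, declined
print(execute(fix, always_yes))       # the proposed fix, confirmed
```

The design choice worth noting is that the default is conservative: the operator must opt out of confirmation, not opt in.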
Virtualization – the modernization opportunity
SUSE Virtualization – built on Harvester, SUSE's open source hyperconverged infrastructure project – is described as a modern alternative for enterprises navigating the disruption that has followed recent consolidation in the virtualization market. Smails explains what genuine unification means, and what it does not:
You're still living in two different worlds. That's a packaging exercise, not unification. That's legacy Virtual Machines (VMs) in their New World Order.
SUSE's argument is that running virtual machines and containers on a shared infrastructure layer, managed through the same tooling, is different from bundling disparate products together under new pricing. SUSE Virtualization is not attempting feature-for-feature parity with legacy virtualization platforms. It is aiming at the customers who want to modernize rather than replicate, and for whom the total cost of the SUSE stack is materially lower than what they are currently paying.
Every quarter that goes by, we are a stronger and stronger modern replacement. We're not trying to do like for like.
Two significant capability additions shipped. NVIDIA Multi-Instance GPU (MIG) support is now native to SUSE Virtualization, enabling enterprise-grade GPU partitioning for AI workloads without additional configuration. VM Auto Balance adds automated workload distribution across hosts – a core operational capability that enterprise customers have long expected. Live Storage Migration, allowing data movement without downtime, also arrives in this release, along with granular upgrade controls that give operators the ability to pause, step, and manage the upgrade process at their own pace rather than committing to a single operation across a complex infrastructure layer.
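Automated workload distribution of the kind VM Auto Balance provides is commonly built on a placement heuristic. One plausible version – greedy placement of the heaviest VMs onto the currently least-loaded host – is sketched below as an illustration of the idea only; it is not Harvester's actual scheduling algorithm, and the VM and host names are invented.

```python
# Greedy load balancing: place VMs, heaviest first, on the least-loaded host.
import heapq

def balance(vm_loads: list, hosts: list) -> dict:
    """Assign each (vm, load) pair to the host with the lowest running total."""
    heap = [(0.0, h) for h in hosts]            # (total load, host name)
    heapq.heapify(heap)
    placement = {h: [] for h in hosts}
    for vm, load in sorted(vm_loads, key=lambda x: -x[1]):
        total, host = heapq.heappop(heap)       # least-loaded host so far
        placement[host].append(vm)
        heapq.heappush(heap, (total + load, host))
    return placement

vms = [("db", 8.0), ("web", 2.0), ("batch", 6.0), ("cache", 4.0)]
print(balance(vms, ["host-a", "host-b"]))
```

Sorting heaviest-first is the standard trick: placing large workloads early prevents a late, heavy VM from unbalancing an otherwise even spread.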
Smails describes SUSE Virtualization as "growing like a weed" – and is candid that the company is running to keep pace with customer demand: "victims of our own success a little bit." The signal he looks for over the next year is not revenue from virtualization directly, but the volume of marquee enterprise names publicly willing to be cited as modernizing on the SUSE stack.
Developer access and the GPU density problem
The third thread is about pulling developers into a cloud native ecosystem that has historically been designed for operators. Rancher Developer Access, introduced last year, bridged Rancher Desktop – a product-led growth tool with hundreds of thousands of community users – with Application Collection, SUSE's curated catalog of hardened, enterprise-ready container images.
During the event, SUSE opened up a meaningful portion of Application Collection to free-tier access. Base container images, alongside popular developer applications including Postgres and Redis, are now available without a commercial subscription (registration required). The logic is straightforward: lower the barrier to adoption, grow the registered user base, and convert a percentage into paid Application Collection customers over time. Smails describes it as a marriage of product-led growth and commercial software, with secure and governed adoption as the underlying goal.
For organizations running multiple teams or projects on shared hardware, the combination of Virtual Clusters and MIG support addresses a real resource contention problem. Virtual Clusters allow operators to carve isolated, self-service Kubernetes control planes from a single physical cluster – each fully firewalled, so teams cannot interfere with each other's workloads. With NVIDIA Multi-Instance GPU (MIG) support now added, a single physical GPU can be partitioned and allocated to individual clusters rather than shared across all of them. The practical effect is better hardware utilization and genuine isolation – teams get their own sandboxed environments for AI experimentation without competing for the same GPU resources.
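The allocation problem this solves can be made concrete with a toy partition allocator. As background, an 80GB NVIDIA A100 supports up to seven 1g MIG instances; the allocator below hands those slices out to virtual clusters until capacity is exhausted. The cluster names and the queue-or-deny policy are hypothetical, purely to illustrate the isolation model.

```python
# Sketch of handing MIG slices of one physical GPU to virtual clusters.
# Capacity of 7 matches the maximum 1g instances on an 80GB A100.

def allocate_mig(requests: list, total_slices: int = 7) -> dict:
    """Grant each virtual cluster its requested GPU slices until exhausted."""
    grants, used = {}, 0
    for cluster, slices in requests:
        if used + slices <= total_slices:
            grants[cluster] = slices
            used += slices
        else:
            grants[cluster] = 0    # no capacity left; a real system would queue or deny
    return grants

requests = [("team-ml", 3), ("team-data", 2), ("team-research", 3)]
print(allocate_mig(requests))
```

Because each grant maps to a hardware-partitioned slice, a noisy workload in one cluster cannot degrade another's GPU performance, which is the isolation guarantee the article describes.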
My take
SUSE occupies an interesting position in the enterprise open source market. Its reputation in the practitioner community – the engineers and platform teams who actually live inside the infrastructure – is genuinely strong, built over years of consistent participation in open source governance and a track record of not abandoning the community when commercial pressures arrived. That trust is hard to manufacture, and recent market consolidation has made it considerably more valuable.
It's a refreshing change to see such a measured AI stance. A lot of vendors at KubeCon Europe were using AI language freely (understandably so). SUSE's insistence that Liz is about usability, not about being an AI company, is a more defensible position – and one that people who have been burned by AI hype may find easier to act on. If enterprises can connect their own tooling to Liz through MCP without custom integration work, the value of the Rancher Prime platform can grow with every connection made.
Smails is clear about what he will not be measuring when it comes to success:
The metric we'll be tracking won't be, oh my goodness, we introduced AI and Prime revenue doubled.
Retention rates, suite adoption, and customer stories are what he is monitoring instead. Infrastructure tooling that actually reduces operational burden rarely makes headlines for doing so – it just makes the team's week slightly less fraught, one CVE remediation at a time.