
SUSECON 26 – beware of Gremlins as Europe’s open source agitator turns its attention to agentic AI


By Chris Middleton April 22, 2026

Excerpt:
Agentic AIs are on everyone’s lips. But are they your tools, or your partners? And what if they turn into a Gremlin after midnight? Either way, management and observability are the keys.


Inevitably, AI has had a significant presence at SUSECON 26 in Prague this week, if often tied to constant discussions about digital and national sovereignty, resilience, and choice.

SUSE made several announcements at the event. For example, day one saw the launch of SUSE AI Factory with NVIDIA. Described as “a turnkey digital factory producing enterprise-grade AI capabilities”, it gives organizations the ability to use NVIDIA’s latest AI hardware while keeping sensitive logic and proprietary data protected within their own private infrastructures – so, there is that ‘choice, sovereignty, and resilience’ theme again.

On day two of the main conference, SUSE announced a new milestone in its partnership with data center giant Switch, to accelerate the latter’s digital twin initiative and operationalize next-generation AI factories.

Via SUSE AI, built on SUSE Rancher Prime, SUSE Linux Enterprise Server, and NVIDIA’s Omniverse libraries, Switch can now deliver highly accurate digital twins of its own data centers.

For Switch, this means the ability to simulate power usage, thermal dynamics and infrastructure performance at scale. These have all been key subtexts of the event, as concerns rise over AI’s energy, carbon, and water impacts, particularly among local communities.

Rhys Oxenham is General Manager of AI for SUSE. He explains:

In the race to scale AI, organizations shouldn't have to choose between cutting-edge innovation and operational stability. What we’re enabling with Switch is the shift from experimentation to execution, where AI, simulation, and real-time rendering run side-by-side on the same infrastructure. By providing a resilient, open-source foundation, SUSE gives leaders the flexibility to integrate best-in-class technologies, like NVIDIA AI Enterprise and accelerated computing, but on their own terms. We are providing the digital ‘floor’ that ensures these massive AI workloads remain secure, manageable, and always available.

So, those core themes are in evidence yet again.

SUSE has also announced the availability of SUSE Industrial Edge, a purpose-built Industrial Internet of Things (IIoT) platform, unifying operational data and enterprise intelligence across industrial environments. This is the fruit of SUSE’s recent acquisition of Losant, the industrial enterprise IoT platform.

While SUSE is a dominant force in the Near Edge (telco) and Far Edge (everything from hospital monitors to engines), it acknowledges that it was missing a compelling offering at the so-called Tiny Edge in industry – the critical layer of sensors and constrained devices where foundational data is born.

So, this plays into the AI message too, of course, with AI at the Edge, and from the Edge, being important elements in the mix, along with the management technologies that support them.

Roundtable debate

But the main second-day announcement was that SUSE is partnering with AWS, Fsas Technologies, n8n, Revenium, Stacklok, and other infrastructure players to use Model Context Protocol (MCP) to bridge AI agents within the enterprise, delivering IT management capabilities that are both secure and, via those agents, autonomous.

In other words, SUSE is providing a secure way for AI agents to monitor, troubleshoot and optimize infrastructure across any Linux or Kubernetes distribution. While enterprises are rapidly adopting agentic AI, such agents can often lack a secure, standardized way to interact with low-level infrastructure, such as servers and clusters, says SUSE.
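The core idea of an MCP server is simple: instead of giving agents raw shell or API access to servers, infrastructure actions are exposed as named, described "tools" that any protocol-speaking agent can discover and invoke through a single gateway. The sketch below is illustrative only – it is not SUSE's implementation or the official MCP SDK, and all names in it are hypothetical – but it shows the discover-then-call pattern that makes agent actions auditable:

```python
# Illustrative MCP-style tool server (hypothetical names, not SUSE's code):
# infrastructure actions are registered as described "tools", agents first
# discover them, then invoke them through one audited choke point.

from dataclasses import dataclass, field

@dataclass
class ToolRegistry:
    tools: dict = field(default_factory=dict)

    def tool(self, name, description):
        # Decorator that registers a handler under a tool name.
        def register(fn):
            self.tools[name] = {"description": description, "handler": fn}
            return fn
        return register

    def list_tools(self):
        # Agents call this first to discover what the server offers.
        return [{"name": n, "description": t["description"]}
                for n, t in self.tools.items()]

    def call(self, name, **kwargs):
        # Single choke point: every agent action passes through here,
        # which is where logging and policy checks would live.
        return self.tools[name]["handler"](**kwargs)

server = ToolRegistry()

@server.tool("check_service", "Report whether a systemd unit is active")
def check_service(unit: str) -> str:
    # Hypothetical stub; a real server would query systemd instead.
    return f"{unit}: active"

print(server.list_tools())
print(server.call("check_service", unit="nginx"))
```

Because every call funnels through one registry, the operator gets a uniform place to log, rate-limit, or deny agent actions – which is precisely the gap SUSE says raw agent-to-server access leaves open.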

The open source company sees MCP as the solution to this problem. Rick Spencer is General Manager of Engineering at SUSE. He explains:

Customers are under tremendous pressure to drive efficiency through AI. Agentic AI is the path forward, but until now, the industry has lacked a way to manage these agents at the infrastructure layer.

Both Revenium AI and Stacklok were later present at a roundtable on agentic AI, in the persons of Daithi Walsh, Head of Product Management, AI and Innovation, at Revenium, and Craig McLuckie, founder and CEO of enterprise MCP platform, Stacklok. Alongside them were SUSE’s Spencer, plus Abhinav Puri, General Manager of Portfolio and Community, and Christine Puccio, VP of Strategic Partner Business Development.

Puri kicked off proceedings with some strategic context for the past three years of AI innovation, which – by chance – have coincided with Dirk-Peter ‘DP’ van Leeuwen taking the helm as CEO of SUSE, thus presenting him with a new world of challenges. Puri said:

For the past three years, the enterprise AI world has been one giant science experiment. Each one of us has played around with some chatbot or other. We are all using AI features in different software, and now we are all playing with AI agents too, one way or the other. We see this happening, but when it comes to software infrastructure, the space that enterprises work in today is passive, because it was built to host workers, and it was built to be managed, but it wasn't really built to think and to act. So, from SUSE’s lens, we are now on a journey of making infrastructure agentic.

What that means is we are working on ensuring that the foundational technologies that most global enterprises use become a digital coworker to their platform teams and IT teams, effectively moving to a state where we have self-healing systems that can perceive any failure, reason a solution, and act.

Then he added:

But none of these AI agents that are going to automate complex, autonomous operations and infrastructure management can work in silos. They need an ecosystem. […] So, SUSE’s play here is to build our intelligences – the ones we have – with SUSE technologies, and wrap that up in MCP servers, which we can integrate with our partners.

A good summary. SUSE’s engineering leader Spencer picked up the narrative:

SUSE was kind of the first to embrace MCP servers at the OS level, when we announced SLES 16 [SUSE Linux Enterprise Server]. At the time, that was pretty groundbreaking. We were super excited about MCP because of the capabilities it could provide around Natural Language Processing. We were excited because we were already thinking about agentic capabilities. But then it turned out that the whole agentic industry embraced MCP as this universal standard, so it provided wide compatibility and capability throughout the industry.

So, if we accept all that, what does SUSE bring to the table now? He said:

The management tools and the MCP servers for those management tools. But the engineering team that built those MCP servers has imbued them with their engineering intelligence about how to manage servers, right? So having the MCP server is a critical aspect of being able to agentically manage your infrastructure. But you need more, right? You also need a model. You need a place to write, manage, and orchestrate your agents, and you need governance around those agents. And so that's really what our ecosystem announcement is about. And we are working with some pretty big players that have full-blown agent platforms, like Fujitsu, Google, AWS, Microsoft, and Oracle.

Partnering 

Indeed. For all SUSECON 26’s talk about choice and resilience, and about digital (and national) sovereignty, once the politics subsides, SUSE remains a global company. That means it needs major partners, especially the very US giants who have, implicitly, been the focus of enterprise worries about the future in this unstable world.

On that point, SUSE’s Puccio said:

When we started looking at creating workflows with our MCP server, with MLM [Multi-Linux Manager] and with [SUSE’s agentic AI] Liz, we solicited [sic] this out to quite a few partners. And we had an overwhelming response from people who wanted to work with us, especially because our Open Source space provided choice. One of the demos that we have is with Oracle, illustrating how they tap into MCP and how they can manage different versions of Oracle Linux using our MLM server. But what’s super interesting here is that we also have Stacklok and Revenium. They are in very different parts of the spectrum.

Revenium’s Walsh explained:

We're probably the AI economic control agent for that. AI agents, they're like the intern with the credit card. They're spending wildly, but there's a difficult kind of pressure. If you look at token attribution, it doesn't tell you much about the outcomes that are being driven by that agent. So, that's the problem that we're looking to solve. In the context of SUSE’s Liz, for example, if Liz is making an orchestration decision around deployments and those containers are effectively AI workloads, then can you inject financial intelligence into that agent through MCP?
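Walsh's "intern with a credit card" problem is that raw token counts don't map to business outcomes. One common remedy – sketched below with entirely hypothetical names and prices, not Revenium's product – is to tag every model call with the agent and task it served, so spend rolls up per outcome rather than per API key:

```python
# Illustrative cost-attribution ledger (hypothetical rate and names,
# not Revenium's API): tag each model call with (agent, task) so token
# spend can be reported per outcome instead of as one opaque bill.

from collections import defaultdict

PRICE_PER_1K_TOKENS = 0.01  # hypothetical flat rate for the sketch

ledger = defaultdict(lambda: {"tokens": 0, "cost": 0.0})

def record_call(agent: str, task: str, tokens: int) -> None:
    # Accumulate usage under the (agent, task) pair it served.
    entry = ledger[(agent, task)]
    entry["tokens"] += tokens
    entry["cost"] += tokens / 1000 * PRICE_PER_1K_TOKENS

record_call("liz", "reschedule-containers", 12_000)
record_call("liz", "reschedule-containers", 8_000)
record_call("liz", "summarize-alerts", 2_000)

for (agent, task), entry in sorted(ledger.items()):
    print(f"{agent}/{task}: {entry['tokens']} tokens, ${entry['cost']:.2f}")
```

Injecting that kind of record-keeping at the MCP layer, as Walsh suggests, means the attribution happens wherever the agent acts, without modifying the agent itself.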

Stacklok’s McLuckie chipped in:

One of the things that's lovely about what's being done here is that, as models grow more capable, being able to consume a variety of models in a good context, in an entirely decoupled way, is very powerful.  I think that SUSE, as a currently [sic] open source company, is really bringing that vendor-agnostic, cloud-agnostic, model-agnostic infrastructure – and that investment, which is great in terms of my part of the journey.

I think MCP is a similar technology. When I first saw it, it really spoke to me at two levels. One is it's giving us a view on what the world of AI-enabled applications looks like. […] But it's also critical from a governance perspective. What MCP does is offer you a boundary between these AI stochastic worlds and the world of traditional systems. It gives you an important control aperture where you can start to reason about how information is flowing across that boundary.

So, what does Stacklok bring to the table? He explained:

We are applying familiar tools and patterns to help enterprises run the Model Context Protocol and better observe and govern their AI agents. Collaborating with SUSE brings obvious value, and our registry of vetted MCP servers includes SUSE Multi-Linux Manager that enterprises are using to access relevant tools as part of their own agentic workflows.

But he warned:

As enterprises are building agents, those agents are phenomenally capable, but they're also dangerous, and they're only getting more capable and more dangerous over time.

As to what he means by that, his response:

An agent is like a dog. You take the dog to the park and let it out among all the other dogs. Everyone's happy, but then your dog bites another dog. So, who's responsible? It's not the dog, YOU are. And this is the truth of these systems. Another analogy I like [is from] Gremlins [the movie]. There are a lot of different ways in which a benign agent can become terrifying, but not all of it is obvious to someone who's not operating in this space. At the same time, they're fantastically useful, powerful technologies. So, we need to figure out how to bring them into our environments and constrain their actions such that they can perform good work for us without necessarily destroying the house.

My take

That being so, how much longer can the AI industry itself abdicate all responsibility in this way? For example, AI vendors routinely say that if an AI agent breaks your business, it’s your fault for deploying it. If it surfaces copyrighted material in response to a prompt, that’s your fault for prompting it, not the vendor’s for scraping unlicensed content in the first place. And recently it was reported that Anthropic’s Claude Constitution anthropomorphized the model to such an extent that its maker appeared to suggest that everything it ‘thinks’, ‘feels’, does, or says from this point on is no longer Anthropic’s problem or responsibility.

The overall message? You’re on your own, and everything is your problem now.

Now step back and look at that picture: how sustainable is it in practice, especially when the likes of OpenAI are suggesting a move to outcome-based pricing – AKA a cut of whatever benefits or insights their AIs reveal? On the face of it, this seems like a completely one-way relationship in financial and liability terms: you keep 100% of the risk, while an AI vendor just walks away smiling, after monetising other people’s IP.

SUSE’s Puri described the post-GPT AI Spring as a giant science experiment, but isn’t it also a giant theft experiment – a test to see how much the industry can get away with before the legal system catches up? (AKA, do you feel McLuckie, punk? Apologies...) McLuckie responded:

That’s a very interesting question. At the end of the day, two entities need to exist. There's the entity which takes input and turns it into inference and data, and I think that the AI companies want to be in that business, right? That’s the business of, 'You give us your data, and we give you inference results'. In that business, we [the industry] will be responsible for making sure that the content that's produced is not toxic, is not dangerous, that it meets specifications, and is not leading people to do peculiar things. And I think there will certainly be some liability with respect to the AI companies who are responsible for an agent’s output. If that agent’s output contains dangerous material, deceptive material, subversive material, and all these other problematic things, they ultimately will be held responsible. I don't think there's any way around that.

A useful perspective if you are using AI as a doctor or therapist, for example, rather than as an autonomous business agent. He continued:

But there's another set of accountabilities here, which is you are integrating, you're making decisions as an enterprise to bring this capability into your home, as it were. And if someone accidentally injects a malicious prompt into the content, you are going to be responsible for that. And I don't think it's practical to expect AI providers to have to carry that liability. All of which brings me back to MCP, which you can think of as a selectively permeable membrane that you wrap around your existing systems. It separates them from AI systems, and makes sure that value flows in both directions, and bad things don’t pass through.
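McLuckie's "selectively permeable membrane" can be pictured as a policy gate between agents and tools: calls to allow-listed tools with clean arguments pass through, everything else is refused. The sketch below uses my own hypothetical names and rules – it is not Stacklok's implementation – but it shows the control aperture he describes:

```python
# Illustrative policy gate (hypothetical rules, not Stacklok's code):
# a boundary check that every agent tool call must pass before it
# reaches real infrastructure.

ALLOWED_TOOLS = {"check_service", "read_logs"}
BLOCKED_PATTERNS = ("rm -rf", "DROP TABLE", "curl http")

def gate(tool_name: str, arguments: dict) -> dict:
    """Decide whether an agent's tool call may cross the boundary."""
    if tool_name not in ALLOWED_TOOLS:
        return {"allowed": False,
                "reason": f"tool '{tool_name}' not on allow-list"}
    for value in arguments.values():
        # Crude payload screen for prompt-injected destructive commands.
        for pattern in BLOCKED_PATTERNS:
            if pattern in str(value):
                return {"allowed": False,
                        "reason": f"argument matched blocked pattern '{pattern}'"}
    return {"allowed": True, "reason": "ok"}

print(gate("check_service", {"unit": "nginx"}))   # benign call passes
print(gate("wipe_disk", {}))                      # unknown tool refused
print(gate("read_logs", {"query": "rm -rf /"}))   # injected payload refused
```

The point of the membrane is exactly this asymmetry: value flows both ways, but a prompt-injected instruction hits the gate before it hits a server.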

Fascinating points. But what if you didn’t give the AI vendor your data in the first place, and they simply took it? That’s a question for another day, perhaps. But McLuckie did say:

The other thing that's worth recognising is that these are sharp tools that are getting incrementally sharper every day. Some of the models you can play with today are terrifying, right? I don't know whether the Mythos announcement from Anthropic is one of the best pieces of marketing we've ever seen, or if there's real veracity to that. Having worked with the Anthropic people a little bit, I suspect there's some veracity to it, and that tends to suggest where the world is going.

And I think that developers and engineers need to recognize that you will be held responsible. YOU will be held responsible for the agents’ behaviour while they're working. And if they suddenly turn into a Gremlin after midnight, then you better hope that you’ve built a Gremlin-proof cage.
