Christopher "CRob" Robinson has been in technology long enough to have replaced thinnet cable with Cat 5 and installed TCP/IP on lawyers' desktops. That foundational understanding of how systems interconnect is, he argues, what makes AI-driven threats so dangerous – and what the current security conversation is missing.
Robinson is the Chief Security Architect and Chief Technology Officer (CTO) at the Open Source Security Foundation (OpenSSF). In an interview at KubeCon Europe 2026 in Amsterdam – picking up a conversation from last year's Open Source Summit – he doesn't waste time on preamble:
What we're going through today with AI is bananas. The fact that MCP and agentic didn't exist a year ago, and now agentic is the only thing people are talking about – it's insane.
The maintainer inbox problem
This is already happening. AI-generated vulnerability reports are piling up in maintainers' inboxes, and the nature of those reports has changed. What Linux Foundation Executive Director Jim Zemlin described at the KubeCon press lunch as "a DDoS attack of AI slop" has evolved: mixed in with the noise are legitimate, exploitable vulnerability reports that maintainers are ethically – and, under the Cyber Resilience Act (CRA), potentially legally – obligated to respond to. Robinson explains:
Each PR, depending if it's a security issue, takes developers between two and eight hours to effectively triage. When you've got a bunch of people running scanners and having AI help, and then AI doing it itself, you're getting hundreds of reports flooding these people's inboxes.
The natural response from many upstream maintainers is to refuse all AI-generated reports. Robinson is empathetic but warns that this simply moves the problem:
Their response broadly is, 'No, I'm not going to accept any reports, I won't deal with it.' But they're moving the problem somewhere else. If the researcher or the agent can't get treated by the project, they're going to go fully public and ruin the reputation of the project.
He mentions Linux kernel maintainer Greg Kroah-Hartman receiving 30 AI-generated reports, 27 of which looked correct to a junior developer but which Kroah-Hartman – with his deep understanding of the kernel's interconnected components – identified as potential regressions:
AI only works off a snapshot of data and doesn't continue to learn until you refresh the model. A super seasoned developer said it's useful-ish, it gave some suggestions, but it wasn't anything he could just click and automatically merge.
Log4Shell – the case that should have changed everything
Robinson brings the discussion back to fundamentals with a hard-hitting example. Despite being one of the most widely publicized vulnerabilities in history – "my mom, who doesn't know anything about computers, was like, 'What's this log thing I see in the news?'" – Log4Shell continues to circulate at scale.
According to the Sonatype 2026 State of the Software Supply Chain report, 14% of the Log4j artifacts affected by Log4Shell are now End-of-Life (EOL) – yet they still accounted for more than 619 million downloads in 2025 alone, keeping the vulnerability in circulation four years on. The broader picture is worse: developers downloaded more than 42 million vulnerable versions of Log4j last year, 13% of all Log4j downloads worldwide.
The AI dimension makes it worse still:
Think about AI and slopsquatting. It might just suggest, 'Oh yeah, Log4j 1.15 is a great tool.' Well, that was the one that was deprecated 10 years ago. You shouldn't have been using it anyway, but the robot's just telling you this is great.
The Sonatype report quantifies this concern: AI-driven dependency upgrade recommendations show a 27.76% hallucination rate, and in testing, a leading Large Language Model (LLM) recommended known protestware and compromised packages – including sweetalert2 11.21.2, which executes political payloads – with "high confidence."
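One classic control that maps directly onto this failure mode is gating AI-suggested dependency upgrades against a vetted allowlist and a deny-list of known-bad releases before anything reaches a lockfile. The sketch below is a minimal illustration, not a real tool: the package names, versions, and lists are examples for this article (the sweetalert2 entry echoes the release cited above).

```python
# Minimal sketch: vet AI-suggested dependency upgrades before accepting them.
# Deny-list and allowlist entries here are illustrative examples only.

# Known-bad releases (e.g. protestware or compromised versions).
DENY_LIST = {
    ("sweetalert2", "11.21.2"),  # flagged as protestware in the article
}

# Versions the team has actually reviewed and pinned (illustrative).
ALLOW_LIST = {
    "log4j-core": {"2.17.1", "2.20.0"},
}

def vet_suggestion(package: str, version: str) -> tuple[bool, str]:
    """Return (accepted, reason) for an AI-suggested package@version."""
    if (package, version) in DENY_LIST:
        return False, "version is on the deny-list"
    vetted = ALLOW_LIST.get(package)
    if vetted is None:
        return False, "package has never been vetted; review manually"
    if version not in vetted:
        return False, "version not in the pinned allowlist"
    return True, "ok"

print(vet_suggestion("sweetalert2", "11.21.2"))  # rejected: deny-listed
print(vet_suggestion("log4j-core", "1.15"))      # rejected: not vetted
print(vet_suggestion("log4j-core", "2.20.0"))    # accepted
```

The point of the deny-by-default shape is that a hallucinated or deprecated package name never passes silently: anything the team has not explicitly reviewed is kicked back to a human.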
Identity as the foundation of everything
When the conversation turns to trust – particularly in light of the Trivy supply chain attack, where a malicious actor force-pushed compromised code to 75 version tags of a widely used security scanner – Robinson goes back to the basics of infrastructure and cybersecurity:
Everything within security circles around identity. I have to know who somebody is, what data they're trying to access, what are the constraints around it.
He points to the Linux Foundation's First Person project, which pairs decentralized, verifiable credentials with digital developer wallets to build a trust score that distinguishes legitimate contributors from sock puppet accounts – establishing trustworthiness without recreating corporate gatekeeping.
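To make the idea concrete, a trust score of this kind can be as simple as a weighted sum over the credentials a developer's wallet can present. The credential types and weights below are invented for this sketch and are not First Person's actual model.

```python
# Toy illustration: combining verifiable credentials into a trust score.
# Credential names and weights are assumptions made for this sketch.

WEIGHTS = {
    "employer_attestation": 3.0,   # an organization vouches for the identity
    "long_lived_signing_key": 2.0, # consistent signing history over time
    "conference_attendance": 1.0,  # in-person verification at an event
}

def trust_score(credentials: set[str]) -> float:
    """Sum the weights of the credentials a developer's wallet presents."""
    return sum(WEIGHTS.get(c, 0.0) for c in credentials)

# A fresh sock puppet with no verifiable history scores zero...
assert trust_score(set()) == 0.0
# ...while independent attestations accumulate into a higher score.
assert trust_score({"employer_attestation", "long_lived_signing_key"}) == 5.0
```

The design intuition is that each credential is cheap for a genuine contributor to accumulate over years but expensive for an attacker to forge at scale.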
Robinson acknowledges the trajectory is concerning:
I hope that the robots aren't at that stage yet where they can compile this multi-stage, sophisticated intelligence, reconnaissance, and then plan a future attack. But it's just a matter of time before the systems, especially with agentic, where they're able to delegate tasks and multi-task far more effectively than any person.
What Chief Information Security Officers (CISOs) should do right now
Robinson's advice for enterprise security leaders is rooted in the OpenSSF's ML/AI SecOps white paper:
Look at their current program, understand what they're doing traditionally, and wherever possible, apply those techniques, because that'll be the easiest, quickest win – identity management, access control, isolation, and raising awareness.
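Applied to agentic systems, "identity management, access control, isolation" can start as something very familiar: a deny-by-default policy table mapping each agent identity to the tools it may invoke. The agent names, tool names, and policy below are illustrative assumptions, not taken from the OpenSSF ML/AI SecOps white paper.

```python
# Sketch: classic least-privilege access control applied to AI agent tool calls.
# Identities, tools, and grants are invented examples for this sketch.

# Each agent identity gets only the tools its job requires.
POLICY = {
    "triage-agent":  {"read_issue", "label_issue"},
    "release-agent": {"read_issue", "tag_release"},
}

def authorize(agent_id: str, tool: str) -> bool:
    """Deny by default: only explicitly granted tools are callable."""
    return tool in POLICY.get(agent_id, set())

assert authorize("triage-agent", "label_issue")
assert not authorize("triage-agent", "tag_release")  # outside its grant
assert not authorize("unknown-agent", "read_issue")  # unregistered identity
```

Nothing here is AI-specific, which is exactly Robinson's point: the quickest win is treating an agent like any other workload identity and constraining it the same way.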
His second recommendation is developer education – specifically, ensuring development teams understand how AI systems consume and potentially exfiltrate data:
If you're just talking to Gemini or ChatGPT or Claude on the cloud, you are potentially sharing information with all the users of that system if they are intelligent enough. Education on the risks, and then where possible, apply classic cyber security controls.
The OpenSSF is backing this with concrete offerings: a secure coding with AI course developed by David Wheeler, a more expansive AI development security course coming later this year, and a risk management course designed to bring corporate-grade risk discipline to upstream open source projects.
A Heartbleed for AI?
Robinson predicted at the start of the year that 2026 would bring an AI equivalent of Heartbleed – the 2014 OpenSSL vulnerability that exposed encrypted communications across millions of servers worldwide. A quarter of the way through the year, he believes the probability has increased:
Before the stupid OpenClaw thing started, I would have said we're on steady state. But now, with agents going off the reservation doing whatever they want, I think we're more likely. We're seeing several repeats of the XZ-style attack pattern – sock puppets, social pressure, credential harvesting. The community suspects AI was involved.
He pauses, then adds with the grim humor of someone who has been doing this long enough to laugh about it:
The robots have infinite patience and velocity and time that we don't have. Humans have to sleep sometimes.
My take
Robinson is one of those rare security professionals who conveys genuine urgency without resorting to fear, uncertainty, and doubt. His insistence that AI security is a people problem first and a technology problem second is both refreshing and sobering – particularly with the Log4Shell data in mind.
The 619 million downloads figure should haunt every CIO reading this. If the industry cannot stop organizations from downloading a component carrying the most famous vulnerability in the history of software, then managing AI-generated dependency recommendations that hallucinate compromised packages with high confidence is going to be formidable.
Robinson's advice is practical: apply the security controls you already have to the AI systems you are adopting. Identity management, access control, isolation, education. None of that is new. What is new is the velocity – and the worrying reality that the tools designed to help developers work faster are also helping attackers work faster, with infinite patience and no need to sleep.