Building bridges, not burning maintainers - how the CRA is reshaping open source relations
Summary: Europe's Cyber Resilience Act is causing disruption in enterprise-open source relationships. A conversation with Chris Robinson from the Linux Foundation shines a light on the challenges and opportunities.
When Europe's new Cyber Resilience Act (CRA) comes into force, manufacturers will face a challenging but necessary deadline – 24 hours to issue an initial security statement, 72 hours to produce a full advisory with patches. For commercial manufacturers with dedicated security teams, this timeline pushes them toward best practices long overdue in the industry. For the 16 million single-maintainer open source projects that underpin modern software, it highlights a fundamental support gap that the ecosystem needs to address.
That fundamental challenge between regulatory timelines and open source capacity was a key discussion point at the Linux Foundation's Open Source Summit in Amsterdam this week, where I met with Chris Robinson, Chief Security Architect for the Open Source Security Foundation (OpenSSF), to talk about bridging the implementation gap as the regulation evolves.
"Broadly upstream, [open source maintainers] will not be able to handle those requirements," says Robinson, known in the community as "CRob."
He continues:
Most commercial manufacturers will have a product security team, a security incident response team, and the mechanisms. But upstream maintainers aren't really prepared for a lot of these monitoring and reporting requirements that are coming down the CRA.
The divide is stark and quantifiable. Analysis by Robinson's colleagues at Anchore reveals the true scale of open source security preparation – dozens of projects have formal security teams with up to 10 people and private testing infrastructure. Hundreds more have two to four maintainers with basic vulnerability reporting mechanisms. But the overwhelming majority – 16 million projects – are maintained by a single person.
The problem is compounded by choice overload. Of the nearly 500,000 projects in Maven Central alone, 85% are inactive with fewer than 500 monthly downloads. Yet developers must navigate this chaos while making split-second decisions about which dependencies to trust. Robinson notes:
The scales are very much weighted. There are more unaffiliated single maintainer projects than there are those that actually have security programs.
When deadlines meet dependencies
The regulatory timeline creates a particular pressure point. When manufacturers face potential penalties of billions of euros for non-compliance, the temptation to push that pressure upstream becomes almost inevitable. In open source terminology, "upstream" refers to the original project maintainers who create the code, while "downstream" means the companies that package and sell products using that code.
Yet recent analysis of Maven Central – the largest repository where Java developers download code libraries – reveals a troubling reality: 96% of known-vulnerable downloads were avoidable. From 37.8 billion monthly downloads, 3.97 billion involved vulnerable components, often when secure alternatives were readily available.
This isn't primarily a technical problem. It's a decision-making problem that regulatory pressure threatens to make worse rather than better.
"What I fear is that downstream is going to start harassing my upstream friends," Robinson says, referencing Daniel Stenberg's keynote about curl – a library embedded in countless connected devices despite having only one full-time maintainer. For example:
People sending them letters demanding, where is this patch? And a lot of times the maintainer might not even have the mechanism to understand that there was a problem yet.
For enterprises, this reality demands a fundamental shift in thinking. Commercial software contains between 80% and 97% open source components – essentially, free code libraries that developers use as building blocks – depending on which study you read. The scale of dependency management is staggering – the average Java application alone carries 148 of these code dependencies, generating approximately 1,500 dependency changes per year that teams have to track and evaluate. Yet most organizations have no systematic approach to understanding, let alone supporting, the volunteer programmers whose work sustains their operations.
Robinson emphasizes an important point:
What I need the C-suite to know is they need to understand what components they're using, and they need to do this risk-based analysis.
If you're a bank, the online banking portal is probably one of your key assets. You need to understand everything that lives there.
The practical advice is straightforward but requires sustained commitment: identify critical assets, map their open source dependencies, and where libraries appear repeatedly – OpenSSL is the classic example – consider funding, contributing code, or providing infrastructure support to sustain them.
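The triage Robinson describes – find the libraries that appear again and again across critical assets – can be sketched as a simple frequency count over an organization's dependency inventory. This is an illustrative sketch only; the application-to-dependency mapping below is invented, and real inventories would come from build tooling.

```python
from collections import Counter

# Hypothetical inventory: each critical application mapped to its
# open source dependencies (invented names for illustration).
app_dependencies = {
    "online-banking-portal": ["openssl", "curl", "log4j", "jackson"],
    "payments-api": ["openssl", "curl", "grpc"],
    "mobile-backend": ["openssl", "jackson", "okhttp"],
}

# Count how many applications rely on each library.
usage = Counter(dep for deps in app_dependencies.values() for dep in deps)

# Libraries that appear repeatedly are the strongest candidates for
# funding, code contributions, or infrastructure support.
for library, count in usage.most_common():
    if count > 1:
        print(f"{library}: used by {count} applications")
```

Even this toy inventory surfaces the pattern Robinson points to: a handful of libraries (here, the stand-in for OpenSSL) carry a disproportionate share of the risk.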
The SBOM as a starting point
When executives ask where to focus their security investments, Robinson points to software bills of materials (SBOMs) as the most effective starting point. Think of an SBOM as an ingredient list for software — it tells you exactly which open source components are baked into your applications. He observes:
Where people can get the most bang for their buck is focusing on getting accurate and clear SBOMs to give you that whole dependency tree.
The business case is compelling. Analysis shows that teams using better security data combined with optimal upgrade decisions save 1.5 months per application per year — time currently wasted on reactive firefighting and redundant dependency management.
The log4j vulnerability perfectly illustrates why. The library "was not on anyone's radar because it's embedded so far down within commercial offerings or even other open source frameworks." Without comprehensive dependency mapping, the next supply chain attack could remain hidden until it detonates across downstream systems.
Yet even here, current regulations lag behind technical reality. The CRA requires disclosure of only first-level dependencies, ignoring the layered nature of modern software development. Robinson explains:
You'll bring in an SDK [Software Development Kit] or a framework, and that'll bring in more dependencies. Dependencies are very layered and nuanced.
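The gap between what the CRA requires and what actually ships can be illustrated with a small sketch. The package names and graph below are invented; the point is the difference between a first-level listing and the full transitive tree:

```python
# Hypothetical dependency graph: each package maps to its direct dependencies.
graph = {
    "my-app": ["web-sdk", "http-client"],
    "web-sdk": ["json-parser", "http-client"],
    "http-client": ["tls-lib"],
    "json-parser": [],
    "tls-lib": [],
}

def transitive(package, graph):
    """Walk the graph depth-first and collect every reachable dependency."""
    seen = set()
    stack = list(graph.get(package, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

first_level = set(graph["my-app"])       # first-level disclosure: 2 packages
full_tree = transitive("my-app", graph)  # what actually ships: 4 packages

print(sorted(first_level))  # ['http-client', 'web-sdk']
print(sorted(full_tree))    # ['http-client', 'json-parser', 'tls-lib', 'web-sdk']
```

Even in this toy graph, half the shipped dependencies are invisible to a first-level-only disclosure; in real applications, with 148 dependencies on average, the hidden portion is far larger.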
New personas, new risks
If open source security is about scale and accountability, the parallel challenge of securing Artificial Intelligence (AI) systems is about unfamiliar participants and novel attack vectors. The rise of AI development has introduced what Robinson calls "a whole new set of constituents that are participating in application development, that aren't trained developers."
"They absolutely don't have cybersecurity," he notes. These business users, analysts, and end customers are directly interacting with AI models through simple chat interfaces, often without understanding the security implications of their inputs or the training data that shapes AI responses.
The security challenges blend traditional cybersecurity requirements — access controls, logging, monitoring — with AI-specific risks around training data integrity and bias prevention. Think of it like adding new rooms to a house. You need the same fundamental security (locks, alarms) but also new types of protection for the new spaces. But AI vendors aren't making integration easier. He continues:
They so aggressively avoid security. They're trying to make their own version of a vulnerability identifier, their own version of x and y. We have frameworks that have existed for decades... AI is just slightly different.
Robinson made an interesting analogy about poorly implemented AI security:
I used to teach CISSP boot camps, and I'd talk about security controls. I had a picture of a compound, where they had a wall, a gate and barrier arm, but no fence. You could see tire tracks where people would drive around it. That's exactly what people are doing — working around the system and avoiding the guardrails.
Looking toward 2027, when the CRA takes full effect, Robinson predicts initial alignment within the European Union:
I think we'll get to a spot, probably end of 2026-27, that will have at least the European laws generally working in the same direction.
But the global picture is more concerning. Other countries are already drafting CRA-inspired legislation with subtle but significant differences. Robinson warns:
One country might, instead of calling it a vulnerability, call it a weakness — and that causes fragmentation.
When the same concept has different names in different countries, companies selling internationally face a compliance nightmare.
The precedent of GDPR offers both hope and caution. Despite initial resistance, Europe's privacy regulation shaped global practice and influenced national laws worldwide. Robinson hopes the CRA will follow a similar trajectory — but only if governments resist the temptation to create incompatible variations for political reasons.
Beyond compliance
There's a dual challenge for enterprises. Operationally, open source security demands active engagement rather than passive consumption. Strategically, the integration of AI into core workflows requires security teams to work with new personas and understand unfamiliar risk patterns.
Both challenges share a common thread — accountability cannot be outsourced. Enterprises cannot treat open source maintainers as unpaid contractors, nor can they rely on AI vendors that deliberately circumvent established security frameworks.
The most resilient strategies combine comprehensive visibility — through detailed SBOMs and dependency mapping — with community investment and cross-team collaboration. Organizations that treat open source projects and AI pipelines as integral parts of their supply chain, rather than external conveniences, will be better positioned for both regulatory scrutiny and real-world attacks.
The CRA represents a regulatory wake-up call to supply chain realities that security professionals have understood for years. As Robinson observes:
The elements that are in the CRA are requirements that I had back at the bank [I worked at] 25 years ago. Aside from SBOMs, these ideas aren't new.
What is new is the legal codification of these practices and the timeline pressure it creates. The question facing enterprises isn't whether to engage with open source security — it's whether to do so proactively, through community investment and systematic risk management, or reactively, when the 72-hour clock starts ticking and the patches aren't ready.
My take
The real solution isn't faster patches; it's better relationships. Enterprises have spent decades treating open source as free infrastructure rather than community collaboration. The regulatory wake-up call is long overdue, but the response needs to be strategic investment, not panic-driven demands.
Robinson's point about the "tire tracks around the fence" perfectly captures the current state of AI security. We're seeing the same pattern that plagued early cloud adoption — vendors prioritizing speed over security, customers buying the marketing, and everyone hoping someone else will clean up the mess later.
The most successful enterprises over the next three years will be those that recognize open source maintainers and AI model creators as strategic partners, not service providers. That means real funding, real engineering support, and real participation in governance — not just angry letters when things break.
The alternative is a fragmented ecosystem where critical infrastructure degrades under regulatory pressure while AI security becomes a compliance theater. Neither outcome serves anyone's interests, including the citizens these regulations are meant to protect.