Dynatrace Perform 2026 - why do observability POCs succeed but enterprise rollouts stall?
Summary: Storio Group and DXC Technology demonstrate that platform maturity isn't the bottleneck – organizational readiness is. Culture, business alignment, and forcing functions determine whether observability investments scale beyond initial proof points.
Two breakout sessions at Dynatrace Perform 2026 in Las Vegas demonstrated that while observability platforms have matured significantly, the organizational challenges of moving from proof-of-concept to production-scale adoption remain consistent across industries. Discussions with consumer photo-products vendor Storio Group and enterprise IT company DXC Technology revealed that culture, business alignment, and strategic tool consolidation determine whether observability investments deliver meaningful returns, with technical implementation guidance proving secondary.
Culture as a starting point
Alex Hibbitt, Engineering Director, Customer Platform at Storio, opened with a blunt assessment of the company's post-merger observability posture. The European photo gifting company had just completed a complex merger, consolidating five different e-commerce platforms while dealing with 11 petabytes of customer images stored in S3. Its legacy observability approach was a significant productivity drain: heavily log-centric, with 1.1 million daily log messages flowing into OpenSearch, it required half the engineering team to hunt through logs during any incident. Engineers spent hours correlating events across services, manually setting alert thresholds, and reconstructing transaction flows from fragmented log entries. The merged Anglo-Dutch engineering culture strongly resisted top-down directives, so transformation demanded a different approach. Engineers believed their log-centric methodology represented standard industry practice – they had no exposure to alternative observability paradigms.
Storio formed a small working group of influential engineers, tasking them with industry research and internal assessment. The result was a maturity model that made knowledge gaps visible through concrete comparison points. Engineers could see their organization at level one while competitors operated at levels three or four. Hibbitt described the shift:
Suddenly they were thinking, 'we're actually sat here in level one. We want to be all the way in level three, maybe sometimes in level four.' That was absolutely transformative in changing our culture around observability.
Todd Lee Loy, who heads DXC Technology’s AdvisoryX practice in the Americas, emphasized similar principles: POCs must prove value across multiple organizational layers simultaneously. Technical capability exists to trace transactions end-to-end within minutes rather than hours, but convincing teams to abandon familiar manual processes requires proof that directly addresses their daily pain points.
Data unification through intelligent signal selection
Once Storio migrated logs into Dynatrace, engineers immediately recognized opportunities to use better signal types for specific observability needs. The team had defaulted to log-based monitoring through path dependency – logs represented the only observability language they knew. This created significant inefficiency: generating metrics by counting log entries consumed unnecessary compute resources, while understanding request flow across 200-plus microservices required manual correlation of timestamps and transaction IDs.
The shift reduced daily log volume from 1.1 million to approximately 600,000 messages through intelligent signal selection rather than data loss. Storio moved performance counters and health checks from log entries to proper metrics, reducing both storage costs and query complexity. Request tracing replaced log-based transaction reconstruction, providing automatic correlation across service boundaries. The remaining logs focused on high-granularity debugging information that genuinely required textual detail. Erik Mittelhammer, Lead Solutions Engineer from Dynatrace, noted that this refinement process represents a common maturity progression – organizations initially over-rely on logs because they're familiar, then optimize toward the right signal for each use case as observability sophistication increases.
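As a rough illustration of that shift, the sketch below contrasts the two signal types: a log line that has to be counted at query time versus a pre-aggregated metric. It uses OpenTelemetry's Python API purely as an example – the article does not specify Storio's instrumentation layer, and the service and metric names are invented.

```python
# A minimal sketch of moving a counting use case from logs to metrics.
# Assumes OpenTelemetry Python; names are illustrative, not Storio's code.
import logging
from opentelemetry import metrics

log = logging.getLogger("checkout")
meter = metrics.get_meter("checkout-service")

def record_order_as_log(country: str) -> None:
    # Before: one log line per event; deriving a rate means scanning
    # and counting text at query time.
    log.info("order placed country=%s", country)

orders_placed = meter.create_counter(
    "orders_placed", unit="1", description="Orders placed, by country"
)

def record_order_as_metric(country: str) -> None:
    # After: a pre-aggregated counter that is cheap to store and
    # trivial to alert on.
    orders_placed.add(1, {"country": country})
```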
DXC's observability practice lead Tushar Patwardhan described similar patterns with a global banking client running 500-plus business processes across on-premises data centers and cloud environments. The bank's POC specifically tested whether Dynatrace could span both legacy technologies (old .NET and Java systems) and modern Kubernetes environments while providing business process visibility. Success required demonstrating transaction traceability across the entire stack – from customer-facing actions like opening bank accounts through every backend system touched, including mainframe interactions in some cases. The technical challenge wasn't instrumentation coverage but rather proving that unified observability could eliminate the correlation gap that forced teams into extended war room sessions during incidents.
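The mechanism that makes that kind of end-to-end stitching possible is trace-context propagation: every hop forwards a trace ID so a single transaction can be reassembled across legacy and cloud services alike. Below is a minimal sketch using OpenTelemetry's W3C propagation; the span name and downstream endpoint are hypothetical, not the bank's actual systems.

```python
# Sketch of trace-context propagation across an HTTP hop.
# Assumes OpenTelemetry Python; the endpoint is a hypothetical placeholder.
import requests
from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer("account-opening")

def open_account(payload: dict) -> None:
    with tracer.start_as_current_span("open_account"):
        headers: dict = {}
        inject(headers)  # adds the W3C traceparent header for the current span
        requests.post(
            "https://core-banking.internal/accounts",  # hypothetical downstream service
            json=payload,
            headers=headers,
            timeout=10,
        )
```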
Luc Burnip, Client Technical Leader at DXC, highlighted practical outcomes in insurance operations processing $110 billion in claims annually. The organization records approximately 2,800 user sessions weekly, monitoring digital experience per business product after each release. When problems emerge, a Microsoft Teams bot messages the release team within two seconds, linking directly to the affected user sessions. This integration eliminated the translation layer in which support teams relayed problems to developers – engineers see actual user sessions showing 24-second delays on submit buttons and can identify root causes immediately. The technical implementation required custom webhooks and session replay API integration, but the organizational value came from collapsing feedback loops that previously took hours into seconds.
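A minimal sketch of that alert-to-chat flow appears below: a problem notification is posted to a Teams channel with a deep link to the affected sessions. The webhook URL and payload shape are illustrative assumptions, not DXC's actual implementation.

```python
# Sketch of posting a release alert to a Teams incoming webhook.
# URL and message format are hypothetical, for illustration only.
import requests

TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/..."  # hypothetical

def notify_release_team(problem_title: str, session_link: str) -> None:
    # Teams incoming webhooks accept a simple JSON payload with a text body.
    payload = {
        "text": f"Release alert: {problem_title}\n"
                f"Affected user sessions: {session_link}"
    }
    requests.post(TEAMS_WEBHOOK_URL, json=payload, timeout=5).raise_for_status()
```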
AI-driven thresholds at scale
Storio operates over 200 microservices – small by many standards, yet impossible to manage through manual thresholding. Each service generates dozens of potential metrics worth monitoring. Creating meaningful alert thresholds for each metric requires understanding normal behavior patterns, accounting for daily and weekly cyclicality, and distinguishing signal from noise. Manually configuring this across 200 services would require continuous engineering effort as application behavior evolves. The team recognized that scaling observability required automating threshold management entirely.
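To see why hand-set thresholds don't scale, consider a toy detector that learns normal behavior per service and hour of the week, then flags deviations statistically. This is a drastic simplification of what Davis does – it only illustrates the idea of thresholds derived from observed baselines rather than manual configuration.

```python
# Toy baseline detector: learns per-(service, hour-of-week) behavior to
# capture daily and weekly cycles, then flags 3-sigma deviations.
# A simplification for illustration, not Dynatrace's Davis.
from collections import defaultdict
from statistics import mean, stdev

history: dict = defaultdict(list)  # (service, hour_of_week) -> past samples

def observe(service: str, hour_of_week: int, value: float) -> None:
    history[(service, hour_of_week)].append(value)

def is_anomalous(service: str, hour_of_week: int, value: float) -> bool:
    samples = history[(service, hour_of_week)]
    if len(samples) < 10:  # not enough history to form a baseline yet
        return False
    mu, sigma = mean(samples), stdev(samples)
    return abs(value - mu) > 3 * max(sigma, 1e-9)  # flag large deviations
```

Even this toy version requires no per-metric configuration; multiply that by dozens of metrics across 200 services and the case for automated baselining becomes clear.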
The shift to AI-driven thresholding proved critical during Black Friday, Storio's busiest trading day, when individual hours regularly generate between €1 million and €1.5 million in revenue. Dynatrace's hypermodal AI engine, Davis, identified backend services struggling with network connection limits at the container level – not the CPU constraints that would have triggered traditional auto-scaling policies. Manual monitoring would have focused on CPU and memory metrics while missing the actual bottleneck. Davis identified the anomaly through baseline comparison across similar services, providing causal analysis pointing to network saturation. Engineers manually scaled up those services immediately, then codified the network request patterns into auto-scaling policies within the same hour. Hibbitt quantified the impact:
We protected against three different outages. I think that would have been around about 4.5 million in revenue that we would have lost.
The Black Friday validation shifted team confidence. Engineers recognized that their mental models of normal behavior couldn't account for complex interactions across hundreds of services under load conditions that only occur during peak periods.
Business process visibility over infrastructure metrics
DXC's implementation philosophy centers on business outcomes rather than technical metrics. Healthcare implementations track patient admission through emergency department triage, correlating system performance with operational capacity – allowing teams to model hypothetical scenarios such as the impact of additional staffing on throughput. The technical implementation requires tagging transactions with business context through request attributes, then aggregating across technical and operational dimensions simultaneously.
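A rough sketch of that tagging pattern follows, using OpenTelemetry span attributes as a stand-in for Dynatrace request attributes; the attribute names and admission flow are illustrative, not DXC's actual schema.

```python
# Sketch of attaching business context to a transaction so performance
# can be aggregated by business process, not just by endpoint.
# Assumes OpenTelemetry Python; names are hypothetical.
from opentelemetry import trace

tracer = trace.get_tracer("admissions-service")

def admit_patient(department: str, triage_level: str) -> None:
    with tracer.start_as_current_span("patient_admission") as span:
        # Business context rides along with the technical trace, so latency
        # can later be sliced by process, department, or triage level.
        span.set_attribute("business.process", "ed_admission")
        span.set_attribute("business.department", department)
        span.set_attribute("business.triage_level", triage_level)
        ...  # admission logic
```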
This approach provides IT with leverage when requesting observability investments – framing proposals around business process visibility rather than infrastructure monitoring.
Storio's experience validates this approach. The company processes 15 million orders annually across a dozen countries during a compressed peak season. Storio's observability transformation included instrumenting business process flows – order placement, payment processing, production scheduling – as first-class monitored entities alongside technical infrastructure.
Scaling through deliberate organizational design
POC success requires deliberate scaling strategies to achieve enterprise adoption. DXC's Patwardhan described implementing a pod model for different business units with varying velocity requirements. Each business unit creates its own pod based on application portfolio size and release cadence. A unit with 50 applications requiring weekly releases receives different pod capacity than a unit with five applications releasing quarterly. This design maintains standardization and governance while accommodating different operational tempos. The technical implementation involves tenant isolation within the shared platform, capacity allocation based on usage metrics, and chargeback models that create accountability for observability costs. Business units that instrument everything pay more than units that selectively instrument critical paths – creating natural economic incentives for thoughtful observability design.
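The chargeback mechanics can be as simple as metering ingest per tenant. The toy sketch below shows the shape of such a model; the rate, units, and business unit names are invented for illustration.

```python
# Toy chargeback model: each business unit pays for the observability
# data it ingests. Rate and figures are hypothetical.
INGEST_RATE_PER_GIB = 0.20  # illustrative flat rate per GiB ingested

def monthly_charge(ingested_gib_by_unit: dict[str, float]) -> dict[str, float]:
    # Selective instrumentation of critical paths costs less than
    # instrumenting everything, creating the economic incentive described.
    return {unit: round(gib * INGEST_RATE_PER_GIB, 2)
            for unit, gib in ingested_gib_by_unit.items()}

print(monthly_charge({"payments": 1200.0, "hr-apps": 40.0}))
# {'payments': 240.0, 'hr-apps': 8.0}
```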
Storio faced similar scaling challenges after initial POC success. Engineers remained skeptical until they experienced value during high-stakes scenarios. The Black Friday period provided the forcing function that changed team mindset. Hibbitt reflected:
It wasn't until we went through the period where they saw the value through this particular Black Friday window that they went, 'Actually, you might have a point here.' We needed a deadline, a major time-bound event that was going to force us to take action.
Organizations require both proof points and forcing functions to drive broad adoption. Technical capabilities demonstrated in controlled POC environments don't automatically translate to changed behaviors under operational pressure.
Storio's journey produced quantifiable results across multiple dimensions. Mean time to resolution dropped from hours to minutes through automated correlation and AI-driven root cause analysis. During peak trading periods, the platform identified and prevented three separate outages that would have generated €4.5 million in revenue loss based on historical incident impact analysis.
DXC's insurance client now releases approximately 1,500 deployments weekly across development and testing environments, consolidating into around 85 production releases across 12 to 15 business products. Burnip's team cut the data scanned by typical queries from 500 gigabytes to 25 megabytes through OpenPipeline preprocessing and strategic bucket allocation – a reduction of more than 99% that enabled investment in additional platform capabilities, including security analytics and business process monitoring, without expanding the overall observability budget.
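The pattern behind that reduction is generic: filter and route records before they reach expensive storage, so queries scan only what matters. The sketch below mimics the idea in plain Python; the real work used Dynatrace's OpenPipeline rules and bucket retention settings, and the routing rules shown here are illustrative assumptions.

```python
# Sketch of pre-storage filtering and bucket routing. Bucket names,
# retention periods, and noise markers are hypothetical examples.
NOISY_PREFIXES = ("healthcheck", "heartbeat")

def route(record: dict) -> str | None:
    msg = record.get("message", "")
    if msg.startswith(NOISY_PREFIXES):
        return None                # drop: never stored, never scanned by queries
    if record.get("severity") == "ERROR":
        return "errors-hot-35d"    # small, frequently queried bucket
    return "general-cold-7d"       # short retention for the long tail
```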
These breakout sessions complement broader themes that emerged during Dynatrace Perform 2026 – where executive discussions outlined observability's strategic evolution while customer implementations revealed the organizational friction points that determine whether those strategies succeed at scale.