GTC 2026 - Everpure tackles data readiness and flexible consumption for enterprise AI
Summary:
Everpure uses its first appearance since rebranding to target two problems its own customers and the diginomica network consistently flag as the real blockers to enterprise AI progress - data readiness and storage consumption flexibility - with the DataStream beta and the extension of Evergreen//One to FlashBlade//EXA.
When Pure Storage rebranded as Everpure last month, I said the move was in line with the company's opportunity, arguing that keeping 'storage' front and center of the vendor's go-to-market strategy would have felt dated compared to its current capabilities, and that it wouldn't have met buyers entirely where they are with regard to the challenges they face. The market has redefined itself around data management and AI readiness - and Everpure has arrived at GTC this week with a set of announcements that, taken together, begin to make a coherent product argument for the new identity.
There are three things being announced this week, and it's worth taking them in turn rather than bundling them together.
The first is a set of performance benchmarks for FlashBlade//S500 and FlashBlade//EXA - Everpure's enterprise and large-scale AI storage platforms. On the SPEC Storage AI_Image test, which measures how many simultaneous AI training jobs a storage system can sustain at full speed, FlashBlade//EXA sustained 6,300 concurrent jobs - the highest score ever recorded for that benchmark. On MLPerf 2.0, which tests how quickly storage can handle the checkpointing process during model training - essentially how fast a system can save progress without slowing the GPUs down - EXA came in at number one, more than 2x its nearest competitor on several sub-tests. FlashBlade//S500 posted 7.2 million IOPS on the IO500 benchmark, a standard measure of storage throughput for high-performance computing workloads. These are publicly submitted, independently verifiable results. But benchmark leadership at a show like GTC is expected - every infrastructure vendor arrives with numbers. It matters, but it's probably not the story buyers will walk away with.
The more important announcements are the other two - the beta availability of DataStream, Everpure's AI data platform, as well as the extension of its Evergreen//One consumption model to the EXA portfolio. Both address problems that are genuinely stalling enterprise AI projects right now, and both connect directly to what Everpure is trying to become post-rebrand.
The data readiness problem
Kaycee Lai, VP of AI at Everpure, explained to diginomica in a briefing ahead of the event that organizations spend around 80 percent of their data teams' time getting data into a state where AI can actually use it. You can't point a model at raw operational data and expect results - a view that's consistent with some upcoming independent research carried out by diginomica (more on that to come).
The data needs to be anonymized, indexed, vectorized, chunked, and embedded - and increasingly, each of those steps requires running AI models as part of the pipeline itself. The process is manual, complex, and slow, and it is one of the primary reasons AI projects stall between proof of concept and production.
DataStream, announced in October last year and moving into beta later in 2026, is Everpure's answer to that bottleneck. Everpure argues that you can connect DataStream to your data sources - S3 object storage, NFS file storage, databases, data warehouses, streaming data, enterprise applications - and it handles ingestion, curation, transformation, vectorization, and indexing. The aim is to deliver AI-ready datasets directly to your models and applications.
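To make the pipeline stages concrete, here is a minimal sketch of the prep steps described above - anonymize, chunk, embed, index - in plain Python. Everything here is an assumption for illustration: the function names, the email-only redaction, and the toy character-frequency "embedding" (standing in for a real embedding model) bear no relation to DataStream's actual API.

```python
import re

# Hypothetical, simplified data-prep pipeline: anonymize -> chunk -> embed -> index.
# A real pipeline would also curate, strip broader PII classes, and call an
# actual embedding model; this sketch only shows the shape of the steps.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def anonymize(text: str) -> str:
    """Strip obvious sensitive data (here: just email addresses) before anything else."""
    return EMAIL.sub("[REDACTED]", text)

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size word chunks suitable for retrieval."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(chunk_text: str) -> list[float]:
    """Toy stand-in for an embedding model: a normalized letter-frequency vector."""
    vec = [0.0] * 26
    for ch in chunk_text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def build_index(docs: list[str]) -> list[tuple[str, list[float]]]:
    """Run every document through the pipeline and collect (chunk, vector) pairs."""
    index = []
    for doc in docs:
        for c in chunk(anonymize(doc)):
            index.append((c, embed(c)))
    return index

index = build_index(["Contact jane@example.com about the Q3 inventory report."])
print(index[0][0])  # the chunk text, with the email redacted
```

The point of the sketch is the ordering: anonymization has to happen before chunking and embedding, because once sensitive data is baked into a vector index it is effectively unrecoverable - which is part of why doing this manually, at scale, consumes so much data-team time.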
Importantly, the source data doesn't need to originate from Everpure infrastructure. The platform exposes APIs covering pipeline orchestration, GPU integration, NVIDIA NIMs, semantic search, and LLM integration, which Everpure is positioning as the foundation for a partner ecosystem of AI-native application developers.
On the gap DataStream is trying to close, Lai said:
We wanted to do something above and beyond just basic storage-GPU integration. Our AI data platform - DataStream - is going to have the ability to solve one of the biggest challenges organizations face in AI.
Organizations spend about 80% of their data teams' time getting data to be AI-ready. You can't just point an LLM at your data and call it a day. You have to make sure it's properly anonymized, you have to strip out sensitive data, you have to make sure it's properly indexed and vectorized - for performance, accuracy, and governance. This is classic data prep for BI and analytics, but considerably more complex, because for AI it now involves multiple AI models that you actually have to use at each step of the pipeline.
This is where a lot of enterprise AI projects are actually getting stuck, and it connects to Everpure’s 1touch.io acquisition announced alongside the rebrand in February. Where DataStream handles the pipeline automation, 1touch.io brings the data discovery, AI-driven classification, and semantic knowledge graph layer on top - the intelligence that tells the platform what data exists, where it lives, and what business context surrounds it. The two pieces are designed to work together, and together they represent a more complete answer to the data readiness problem than either could offer alone.
The consumption model problem
Evergreen//One for FlashBlade//EXA addresses a different but equally real challenge. AI storage is notoriously difficult to size. The performance and capacity requirements shift significantly depending on where an organization sits in its AI journey - pre-training, post-training, and inference have different profiles, and most enterprises are running combinations of all three simultaneously, with workloads that evolve as models mature and use cases expand.
NVIDIA provides throughput guidance per GPU, which helps with performance sizing. But capacity requirements remain largely unknown until you're already in production. The result is that most organizations either over-provision on day one and absorb unnecessary cost, or under-provision and find their GPUs being throttled by storage that can't keep pace. Neither outcome is acceptable when GPU infrastructure is incredibly expensive from the off.
Evergreen//One for EXA aims to solve this with the same SLA-backed consumption model Everpure has applied to its core storage products for several years. Everpure sizes the deployment based on NVIDIA's standard and advanced tier guidance, guarantees the SLA, and the customer pays for consumption as actual requirements become clear - no large upfront capital commitment, and no forklift upgrade when the workload shifts.
Matt Taylor, VP/GM of AI and HPC at Everpure, said:
As we've talked to customers - most of them, at least those early in their AI journey - they don't know how to size the performance and capacity elements of their storage…one of the reasons we've done this is that customers really needed a way to de-risk how they deploy their infrastructure.
My take
These announcements don't exist in a vacuum, and it's worth connecting them to what diginomica's own CIO network research has been telling us. Our November 2025 report, covering 35 CIOs and CTOs, found that 93% of organizations are using AI - but only 21.4% report success rates above 80%. The primary blockers were not compute access or model quality. They were data quality and disconnected data systems. Our January 2026 micro-pulse of 124 respondents showed persistent frustration with the gap between proof of concept and production deployment. That is precisely the gap DataStream is designed to compress, and precisely the dynamic Evergreen//One for EXA is designed to de-risk.
It's also worth noting that 64.3% of CIOs in that same November research said they would prioritize hiring data scientists over buying an enterprise AI platform to close their AI readiness gap. On one hand, that suggests that enterprise buyers are not yet convinced that platform purchasing alone moves the needle - a healthy scepticism. On the other hand, DataStream's proposition is specifically that the platform can absorb the data engineering complexity that would otherwise require headcount. Whether buyers accept that argument is the real question.
I asked Lai about this, pushing on whether having the right infrastructure is sufficient to drive AI adoption. His response:
I don't think it's an 'if you build it, they will come' scenario. You can have the best platforms in the world and that doesn't guarantee AI adoption or AI success.
Going back to basics: don't think of AI as necessarily a new thing. What we see is that a lot of customers who succeed are solving traditional problems - they're just doing it a lot better with AI. A retail or manufacturing company solving real-time inventory visibility. A supply chain company solving predictive maintenance. These are problems people have been trying to solve for decades, but with AI it becomes a lot better, a lot easier, a lot faster.
So with anything, you have to understand what business impact you want to have with AI. If you understand that clearly, the next question is: what would the solution need to look like? And from there: what does the platform need to deliver? I actually think the platform comes towards the end. The first part is understanding what business problem you want to solve and what outcome you want - without that, you're not going to get there.
This sort of answer reflects a maturity in how Everpure is now thinking about its role. The rebrand was about moving up the stack; DataStream and Evergreen//One for EXA are where that movement becomes tangible product reality rather than brand narrative. Where Everpure needs to keep focusing is on the strategic change management conversations that enterprise buyers grappling with data for AI will want to be having. Selling hardware is a very different proposition to selling strategic software, in my experience - if Everpure can continue to tackle those conversations head on, these announcements will be received positively by the buyer community.