Making Data AI-ready: 13 Storage Vendors Bring Latest Tech To Nvidia GTC

By CRN
March 18, 2026


Access to data that has been prepared for use in AI training, inference and other tasks is key to ensuring that storage systems don’t act as bottlenecks to full performance of GPUs and AI applications. CRN looks at 13 storage vendors that brought their best AI offerings to Nvidia’s GTC conference.

GPUs are the devices on which AI runs, and the importance of increased GPU performance was a key theme of this year’s Nvidia GTC 2026 event.

But no matter how much performance those GPUs offer, they come to a standstill while waiting for data. That has elevated the availability and performance of data to the same level of importance as GPUs.

To ensure data does not serve as the bottleneck for GPUs and the AI applications running on them, storage vendors are approaching the AI industry on multiple fronts. These include increased performance of storage systems to feed data to the GPUs and improved management of data to ensure it is ready and in a form the GPUs can use.

[Related: Analysis: Nvidia’s AI Dominance Expands To Networking As It Makes Bigger CPU Push]

Several of the IT industry’s top data storage and data management companies were at Nvidia GTC 2026 showing their latest entries for AI. This included new storage systems from companies like Dell Technologies, NetApp, Everpure, Hammerspace, DDN, Vast Data and Hitachi Vantara, along with new management schemes from several other vendors.

CRN is highlighting new AI-focused storage technologies from 13 top storage vendors. Read on for all the details.


Lightbits Labs LightInferra

ScaleFlux, FarmGPU and Lightbits Labs took to GTC to unveil a collaborative architecture they said was designed to overcome the memory and I/O limitations of long-context AI inference. By integrating ScaleFlux’s high-performance NVMe storage, FarmGPU’s managed environment and Lightbits’ LightInferra software, this new platform optimizes KV-cache persistence and streaming. The companies said the architecture eliminates GPU stalls by prefetching essential data over high-speed RDMA, significantly reducing Time-to-First-Token and increasing GPU utilization by up to 3X. Featuring AI-native security and tenant isolation, the platform enables enterprises to serve larger models and longer conversations more efficiently, delivering a responsive, scalable and cost-effective infrastructure for next-generation AI workloads.
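The idea behind KV-cache persistence can be illustrated with a small sketch. This is a toy model, not Lightbits' implementation: the class and function names are invented, and real systems persist per-layer attention key/value tensors to NVMe and stream them back over RDMA rather than keeping Python dictionaries in memory.

```python
# Toy illustration of KV-cache persistence for long-context inference.
# A follow-up request that shares a prefix with an earlier turn can skip
# recomputing that prefix, which is what reduces time-to-first-token.

class KVCacheStore:
    """Persists computed KV entries keyed by token prefix."""

    def __init__(self):
        self._store = {}

    def persist(self, prefix_tokens, kv_state):
        self._store[tuple(prefix_tokens)] = kv_state

    def prefetch(self, prefix_tokens):
        # Return the longest persisted prefix of the request, if any.
        toks = tuple(prefix_tokens)
        for end in range(len(toks), 0, -1):
            if toks[:end] in self._store:
                return toks[:end], self._store[toks[:end]]
        return (), None


def tokens_to_recompute(store, prompt_tokens):
    """Tokens that must be re-processed before the first output token
    can be produced -- a rough proxy for time-to-first-token."""
    hit, _ = store.prefetch(prompt_tokens)
    return len(prompt_tokens) - len(hit)


store = KVCacheStore()
turn1 = ["sys", "hello", "how", "are", "you"]
store.persist(turn1, kv_state="<kv tensors for turn 1>")

# The follow-up turn reuses the persisted conversation prefix.
turn2 = turn1 + ["fine", "thanks"]
print(tokens_to_recompute(store, turn2))  # only the 2 new tokens
```

Without the persisted prefix, all seven tokens of the second turn would have to be processed before the first output token; with it, only the two new ones do.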


Hitachi Vantara Hitachi iQ

Hitachi iQ from Hitachi Vantara is an enterprise AI infrastructure platform that integrates compute, networking and storage into a validated infrastructure stack. Built on Hitachi Vantara’s VSP One data platform, Hitachi iQ now supports new Nvidia Blackwell and Blackwell Ultra GPUs across air-cooled and liquid-cooled configurations, targeting workloads ranging from model development and fine-tuning to inference and agentic applications. Hitachi iQ Studio, the AI software component, provides tools for deploying and managing AI agents within secure enterprise environments, including new blueprints for multi-agent orchestration with supervisor and worker roles. Its expanded capabilities with Hammerspace enable distributed data access via MCP, reducing the need to move data between environments.


Vdura Data Platform

At GTC 2026, Vdura unveiled three updates to its GPU-native AI storage platform: immediate availability of RDMA; a preview of Context-Aware Tiering; and optimized infrastructure built on AMD EPYC Turin processors and Nvidia ConnectX-7 networking. RDMA enables direct GPU-to-storage data transfers that bypass the CPU, reducing latency for AI training and inference workloads. Context-Aware Tiering, planned for later in 2026, will intelligently move data across storage tiers, including local NVMe SSD, DRAM and durable storage, based on workload patterns. RDMA is currently available on V5000 and V7000-class systems.
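A tiering policy of this kind can be sketched in a few lines. The tier names and thresholds below are invented for illustration; Vdura has not published the details of its Context-Aware Tiering policy.

```python
# Toy sketch of context-aware tiering: place each object on a tier
# based on recent access frequency (hot -> DRAM, warm -> local NVMe,
# cold -> durable bulk storage). Thresholds are hypothetical.

class TieringPolicy:
    def __init__(self, hot_threshold=10, warm_threshold=3):
        self.hot = hot_threshold
        self.warm = warm_threshold

    def place(self, accesses_per_hour):
        if accesses_per_hour >= self.hot:
            return "dram"          # hottest working set
        if accesses_per_hour >= self.warm:
            return "local_nvme"    # warm data close to the GPUs
        return "durable"           # cold, durable bulk storage


policy = TieringPolicy()
workload = {"checkpoint.pt": 25, "shard-0001.bin": 5, "archive.tar": 0}
placement = {name: policy.place(rate) for name, rate in workload.items()}
print(placement)
```

A production policy would also weigh object size, workload phase (training vs. inference) and prefetch hints, but the core decision, mapping an access pattern to a tier, has this shape.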


Everpure Evergreen//One for FlashBlade//EXA

Evergreen//One (EG1) for FlashBlade//EXA from Everpure, formerly known as Pure Storage, is a flexible, consumption-based storage service designed for large-scale AI training and inference. It replaces traditional hardware purchasing with a subscription model, aligning costs with usage and enabling seamless growth as AI workloads and data complexity evolve. Everpure said that because enterprises historically struggle with a “performance collapse” when scaling AI from a few nodes to hundreds, this service provides the high throughput and linear scalability required for massive AI workloads from a single dashboard. By delivering a unified subscription for global deployment, Everpure aims to move the market from rigid hardware silos toward flexible, global AI factories that scale on demand.


DDN Horizon And AI Factory Experience

DDN Horizon from DDN is a full-stack AI orchestration control plane designed to operationalize GPU and data infrastructure as AI-as-a-Service platforms. The software orchestrates compute, storage and AI pipelines across on-premises, hybrid and cloud environments to help organizations provision AI workspaces, training environments and inference services through a self-service model. The company also showed its AI Factory Experience, under which DDN, Supermicro and Nvidia collaborated on a fully operational, turnkey AI infrastructure stack. The environment, hosted inside a custom-built platform, runs live enterprise AI pipelines including retrieval-augmented generation, financial risk analytics, genomics workflows and video analytics on DDN Enterprise AI HyperPOD systems powered by Nvidia GPUs.


Nutanix Agentic AI

Nutanix took to GTC to unveil its Agentic AI offering, a full software stack purpose-built to help customers accelerate adoption of Agentic AI for business transformation. The offering integrates with Nvidia AI Enterprise at the Agent Builder layer and orchestrates the Nvidia-certified ecosystem of AI factories. It aims to deliver performance, compliance and security for Agentic AI applications and help minimize aggregate token costs. It also empowers enterprise infrastructure and platform teams to simply build, scale and operate AI factories and empowers developer teams with a rich set of AI PaaS services integrated with Nvidia AI Enterprise to accelerate deployment of Agentic AI workloads, the company said.


Vast Data Vast Foundation Stacks

The new Vast Foundation Stacks from Vast Data are an open-source library that extends Nvidia Blueprints into production-ready AI pipeline implementations designed to run natively on the Vast AI Operating System. By packaging proven architectural patterns into deployable templates, Foundation Stacks helps organizations move from experimentation to production faster. Each stack integrates data access, database services, compute orchestration, eventing and pipeline execution within a unified environment to help eliminate the need to assemble complex infrastructure. This also helps developers focus on the business logic connecting AI to enterprise data and workflows while consistently deploying scalable pipelines across cloud and on-premises environments where the Vast AI OS runs.


Cloudian HyperScale AIDP Lenovo Validated Design

Cloudian’s HyperScale AI Data Platform received Lenovo Validated Design certification, giving enterprises a pretested, on-premises AI infrastructure stack built around Cloudian’s S3-native storage and Nvidia GPUs and Enterprise AI software. The technology offers data sovereignty so that regulated industries can run AI workloads, including customer service chatbots drawing on decades of institutional documents, or video monitoring for security and safety compliance, without sending sensitive data to public cloud providers. The validated configuration is built on Lenovo’s Hybrid AI 285 platform featuring the Lenovo ThinkSystem SR675 V3 server with up to eight Nvidia RTX Pro 6000 GPUs. It is available through Lenovo’s channel.


NetApp EF-Series Storage Systems: EF50, EF80

The next generation of NetApp EF-Series, the EF50 and EF80 models, was designed for organizations managing massive data volumes for performance-intensive workloads, according to NetApp. It supports businesses in meeting the growing demands of AI, high-performance computing (HPC), enterprise databases and neocloud environments. The EF80 targets AI and HPC workloads at scale, while the EF50 offers similar benefits for midsize organizations optimizing AI and analytics, both with a cost-efficient footprint. These systems deliver over 110 GBps of read throughput and up to 57 GBps of write throughput, a 250 percent improvement over previous generations, to provide fast, reliable storage performance at an affordable price.


MinIO AIStor For Nvidia STX

MinIO AIStor from MinIO now supports object data stores for Nvidia’s new STX reference architecture, bringing object-native storage to the center of enterprise AI factories. Designed to run natively on Nvidia BlueField-4 processors, AIStor supports the full AI life cycle, including large-scale training, enterprise RAG pipelines and real-time agentic inference, while delivering wire-speed performance and independent scaling for storage and compute. The integration aligns MinIO’s disaggregated storage architecture with Nvidia’s modular STX design for modern AI factory infrastructure deployments.


Hammerspace AI Data Platform

The Hammerspace AI Data Platform from Hammerspace is a global data environment that unifies an entire data estate into a single, high-performance namespace. Aligned with the Nvidia AI Data Platform reference design, it bridges the gap between where an organization’s data lives (legacy NAS, object storage, cloud) and where it is needed (AI applications run on GPUs on-premises or in the cloud). A unified namespace and workflow automation allows organizations to greatly reduce the number of tools and simplify their data pipelines. With rapid deployment and a scalable architecture, organizations can start small, move fast and grow with business demands.
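The unified-namespace idea can be sketched minimally: one logical path tree routed to several physical stores. This is a conceptual toy, not Hammerspace's design; the backend labels and the longest-prefix routing rule are assumptions made for illustration.

```python
# Minimal sketch of a unified namespace: logical paths are resolved to
# whichever backing store (NAS, object, cloud) holds the data, so
# applications see one tree regardless of where data physically lives.

class Namespace:
    def __init__(self):
        self._mounts = {}  # logical prefix -> backend label

    def mount(self, prefix, backend):
        self._mounts[prefix] = backend

    def resolve(self, logical_path):
        # Longest-prefix match decides which backend serves the path.
        best = max(
            (p for p in self._mounts if logical_path.startswith(p)),
            key=len,
            default=None,
        )
        return self._mounts.get(best)


ns = Namespace()
ns.mount("/datasets/", "legacy-nas")
ns.mount("/datasets/images/", "s3-bucket")
ns.mount("/models/", "cloud-blob")

print(ns.resolve("/datasets/images/cat.jpg"))   # s3-bucket
print(ns.resolve("/datasets/text/corpus.txt"))  # legacy-nas
```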


Dell AI Platform With Nvidia

Dell brought a couple of new things to GTC for its data and storage business. Its new Data Orchestration Engine, built on Dell’s Dataloop acquisition, automates data discovery, labeling and transformation to help make data AI-ready. Nvidia RTX Pro Blackwell Server Edition GPUs embedded in the data layer deliver up to 12X faster vector indexing and 19X quicker time-to-first-token, the company said. Dell Lightning File System, available April 2026, delivers up to 20X greater performance versus flash-only scale-out competitors and 2X greater throughput per rack unit, the company said. Dell Exascale Storage combines file, object and parallel file system storage on a single platform with up to 6-TBps read performance and Nvidia ConnectX SuperNIC support.
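For context on what "vector indexing" speeds up: the baseline operation is similarity search over embeddings, shown below as plain brute-force cosine similarity. The corpus and vectors are made up; accelerated indexes (and the GPU offload Dell describes) replace this linear scan with far faster approximate structures.

```python
# Brute-force nearest-neighbor search by cosine similarity -- the
# baseline that hardware-accelerated vector indexing improves on.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(corpus, query):
    """Return the id of the corpus vector most similar to the query."""
    return max(corpus, key=lambda k: cosine(corpus[k], query))

corpus = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.0, 1.0, 0.0],
    "doc-c": [0.7, 0.7, 0.0],
}
print(nearest(corpus, [0.9, 0.1, 0.0]))  # doc-a
```

This scan is O(n) per query; at the corpus sizes RAG pipelines use, indexed search is what keeps retrieval, and therefore time-to-first-token, fast.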


Weka NeuralMesh AI Data Platform

Weka unveiled general availability of its enterprise-ready NeuralMesh AI Data Platform (AIDP), which the company said delivers composable, high-performance infrastructure optimized for AI factory deployments. Based on Nvidia’s AI Data Platform reference design, the offering is an end-to-end system aiming to accelerate the delivery of AI-ready data to AI factories. This, the company said, shortens AI project timelines from months to minutes, empowering organizations to deliver production-scale agentic AI applications using best-in-class technologies across their ecosystem. Leveraging NeuralMesh’s adaptive architecture, it addresses the most persistent obstacle in enterprise AI: organizations can demonstrate that AI concepts work in proof-of-concept but consistently struggle to reach production scale.




Tags: AI, Artificial Intelligence, Edge Computing, Flash Array, Flash Storage

Copyright © 2025 | Powered By Porpholio
