NetApp announced the new EF50 and EF80 all-flash block storage arrays, the latest generation of its EF-Series, at the recent NVIDIA GTC 2026. The systems replace the previous EF-Series generation with a purpose-built design aimed at AI model training, high-performance computing simulations, and high-throughput transactional databases.
The EF-Series directly tackles the challenge of scaling storage for AI with high-throughput, low-latency block storage designed to run behind parallel file systems such as Lustre and BeeGFS (standard data-serving layers in large HPC and AI training clusters).
Technical Details
The EF50 and EF80 are purpose-built all-flash NVMe block storage arrays, differentiated by target workload profile. The EF80 is optimized for peak-throughput applications such as AI training and large-scale HPC simulations. The EF50 is geared toward mixed-workload environments, including databases and secondary AI use cases such as inference and preprocessing. Both systems serve as backend storage for parallel file system deployments rather than as direct-attached or file-serving platforms.
Key specifications include:
- Throughput: The EF80 delivers over 110 GB/s read throughput and 55 GB/s write throughput per system, a 250 percent improvement over the prior generation.
- IOPS: NetApp claims up to 5 million IOPS per system and that rack-scale configurations can achieve over 100 million IOPS and 2.35 TB/s of read throughput, all while consuming under 37 kilowatts per rack.
- Capacity and density: Each 2U system provides up to 1.5 petabytes of raw storage, with a power efficiency of 63.7 GB/s per kilowatt.
- Latency: NetApp reports sub-millisecond latency for the EF50 in mixed-workload configurations. The blog post references p95/p99 tail latency as a design consideration, though specific values under load are not disclosed in the available materials.
- Parallel file system integration: Both systems are validated for deployment with Lustre and BeeGFS, supporting standard OSS/MDS (object storage server/metadata server) node configurations with multipath I/O. This is the primary deployment pattern for AI and HPC environments.
- Observability: The systems expose performance telemetry covering IOPS, throughput, latency, and cache behavior. NetApp states this data can integrate with monitoring stacks, including Prometheus/Grafana via exporters and Splunk/ELK via syslog and API.
- Management and automation: EF50 and EF80 support REST API and Ansible-compatible provisioning workflows. NetApp provides non-disruptive lifecycle operations, including rolling firmware updates and online component replacement.
- Operational monitoring: Predictive health checks, call-home telemetry, and support bundle generation are built in to support proactive maintenance.
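NetApp says the telemetry can feed standard monitoring stacks such as Prometheus via exporters. As a minimal sketch of what that translation layer might look like, the snippet below renders one telemetry sample into Prometheus text exposition format. The payload shape, field names, and metric names are hypothetical illustrations, not the actual EF-Series REST schema.

```python
# Sketch: translating array telemetry into Prometheus exposition format.
# The payload shape and field names below are hypothetical; the real
# EF-Series REST schema may differ.

import json

# Hypothetical telemetry sample, as a REST API might return it.
SAMPLE_TELEMETRY = json.dumps({
    "array": "ef80-01",
    "read_iops": 2400000,
    "write_iops": 800000,
    "read_throughput_bytes": 96 * 10**9,
    "write_throughput_bytes": 48 * 10**9,
    "avg_latency_us": 310,
})

def to_prometheus(payload: str) -> str:
    """Render one telemetry sample as Prometheus text-format metrics."""
    data = json.loads(payload)
    label = f'{{array="{data["array"]}"}}'
    lines = []
    for key in ("read_iops", "write_iops",
                "read_throughput_bytes", "write_throughput_bytes",
                "avg_latency_us"):
        lines.append(f"ef_{key}{label} {data[key]}")
    return "\n".join(lines)

print(to_prometheus(SAMPLE_TELEMETRY))
```

In practice an exporter like this would poll the array's REST endpoint on a schedule and serve the rendered metrics over HTTP for Prometheus to scrape; the transformation step itself stays this simple.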
Analysis
Organizations running GPU-accelerated AI training or large-scale HPC simulations face a concrete operational problem: storage I/O limits GPU utilization, and idle GPUs are expensive.
The new EF50 and EF80 arrays address that issue by offering a high-throughput, low-latency backend for the parallel file system layer. For teams already using Lustre or BeeGFS, the integration process is well-established, with standard multipath setups and existing operational runbooks.
Several operational characteristics are important to highlight:
- GPU utilization improvement: The main benefit is sustained storage throughput that keeps GPU pipelines fed. NetApp frames this around removing storage-induced GPU stalls, though actual utilization gains depend on workload traits, cluster size, and tuning of the parallel file system.
- Deployment in existing environments: EF-Series integrates with current Lustre and BeeGFS setups as block storage for OSS nodes. Organizations already using these file systems can consider EF50/EF80 as a backend replacement or for capacity expansion without needing to redesign their storage architecture.
- Operational management: The built-in telemetry and REST API support reduce manual monitoring efforts and enable integration with standard infrastructure automation tools. Non-disruptive firmware updates are crucial for AI clusters that run continuously.
- Power envelope: NetApp cites under 37 kilowatts per rack at rack-scale performance, a key constraint given that power availability limits AI infrastructure expansion at many data center locations.
- Complexity and support: The EF-Series benefits from a well-established support ecosystem, and NetApp promotes these systems as simpler alternatives to purpose-built storage from HPC-specialist vendors. Organizations lacking extensive storage engineering teams may find this operational model practical.
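The power-efficiency claim can be sanity-checked against the rack-scale figures NetApp quotes. Taking the claimed 2.35 TB/s of read throughput and the 37 kW rack ceiling at face value:

```python
# Back-of-envelope check of the rack-scale efficiency claim:
# 2.35 TB/s of read throughput within a 37 kW rack budget.

rack_read_gbps = 2350   # GB/s, rack-scale read throughput claimed by NetApp
rack_power_kw = 37      # kW, stated upper bound on rack power draw

efficiency = rack_read_gbps / rack_power_kw  # GB/s per kilowatt
print(f"{efficiency:.1f} GB/s per kW")
```

This works out to roughly 63.5 GB/s per kilowatt, consistent with the 63.7 figure NetApp quotes; the small gap plausibly reflects rounding or actual draw slightly below the 37 kW bound.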
Market Position
The NetApp EF50 and EF80 products extend NetApp’s AI infrastructure narrative beyond ONTAP-based file and object storage. The EF-Series occupies a distinct position in the NetApp portfolio: block-only storage with no ONTAP layer, no multi-protocol overhead, and no flash tiering.
This keeps the design focused on throughput and latency rather than feature breadth.
The timing of the announcement at NVIDIA GTC 2026 aligns it with the broader AI infrastructure investment cycle and places NetApp alongside other storage vendors making similar claims at the event.
Several points worth noting:
- Portfolio coherence: NetApp covers the entire AI data pipeline across its product lines. AFF and ONTAP-based systems manage enterprise data and support multi-protocol access. StorageGRID provides object storage for unstructured data at large scale. EF-Series delivers the raw throughput for scratch and training data. This broad coverage is a real differentiator when customers look to unify storage vendors across the entire pipeline.
- Neocloud and sovereign AI targeting: NetApp’s launch collateral specifically identifies neocloud providers and sovereign AI cloud builders as target customers, alongside traditional enterprises. This indicates where new AI infrastructure investments are focused and where opportunities for competitive displacement exist.
- Heritage is an asset: More than one million EF-Series installations establish credibility that newer entrants to the AI storage market cannot match. In procurement processes where a reliable track record is valued, this makes a difference. In hyperscale GPU cluster environments where customers might accept higher risk for maximum performance, it matters less.
Competitive Landscape
The high-performance block storage market for AI and HPC workloads includes both well-established vendors and newer entrants, and competitive differentiation centers on throughput, density, power efficiency, and integration with the NVIDIA GPU software stack.
This section is only available to NAND Research Clients and IT Advisory Members.
Final Thoughts
The new NetApp EF50 and EF80 products deliver a significant hardware upgrade that meets the current needs of AI training and HPC storage. With a 250 percent increase in throughput over the previous generation, along with 1.5 petabytes of 2U density and claimed power efficiency metrics, these systems compete well with the top alternatives for organizations running Lustre or BeeGFS at scale. The built-in observability and non-disruptive operations features demonstrate practical knowledge of how AI clusters actually run in production.
For enterprise customers, neocloud builders, and mid-scale AI infrastructure operators who need proven, high-throughput block storage with reliable parallel file system integration and full enterprise support, the EF50 and EF80 are a well-timed and credible update.
NetApp brings decades of EF-Series reliability to a market where storage has become a key bottleneck for GPU utilization, and these new systems address that challenge directly without adding operational complexity. They are a solid addition to NetApp’s portfolio.



