AI Storage Breakthrough Sets New Industry Standards
DDN, a global leader in data storage and AI solutions, has unveiled a new storage system that pushes the boundaries of performance and efficiency. The AI400X3 is built to handle the most demanding AI workloads at large scale, offering a compact and energy-efficient design without sacrificing power. It’s designed to meet the needs of modern AI projects, providing faster insights and reducing operational costs.
Powerful Performance Backed by Industry Benchmarks
The AI400X3 runs on DDN’s advanced EXAScaler parallel file system, which ensures rapid, reliable data access. It recently achieved top results in the latest MLPerf Storage v2.0 benchmarks, demonstrating its ability to saturate hundreds of simulated GPUs across various AI tasks. This means large companies can accelerate their AI research and deployment while maintaining efficiency and sustainability.
In real-world testing, the system posted impressive results in both single-node and multi-node configurations. Using just a 2U, 2400W appliance, it served 52 simulated H100 GPUs in CosmoFlow training and 208 in ResNet-50 training, achieving read speeds of 30.6 GB/s and write speeds of 15.3 GB/s, with Llama3-8b checkpoint load times as quick as 3.4 seconds. These results show how compact hardware can deliver massive performance.
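As a rough sanity check on the checkpoint figure, one can estimate the bandwidth implied by loading a Llama3-8b checkpoint in 3.4 seconds. The assumptions here are not from the article: that the checkpoint is roughly the raw weights of an 8-billion-parameter model stored in bf16 (2 bytes per parameter), ignoring optimizer state and metadata.

```python
# Back-of-the-envelope estimate (assumptions noted above, not from DDN):
params = 8e9                   # assumed 8B parameters for Llama3-8b
bytes_per_param = 2            # assumed bf16 storage
checkpoint_gb = params * bytes_per_param / 1e9   # ~16 GB of weights
load_seconds = 3.4             # figure quoted in the article
implied_gbps = checkpoint_gb / load_seconds
print(f"checkpoint ~{checkpoint_gb:.0f} GB, implied read ~{implied_gbps:.1f} GB/s")
```

Under these assumptions the load works out to roughly 4.7 GB/s, comfortably within the appliance's reported 30.6 GB/s read ceiling, so the quoted time is plausible even if the real checkpoint carries extra state.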
Scaling AI Workloads with Ease
When tested across multiple nodes, the AI400X3 sustained over 120 GB/s of read throughput. This level of performance is crucial for AI teams working with large GPU clusters, as it ensures consistent, high-speed data access during intensive training sessions. Such throughput underpins faster model training and experimentation, helping organizations stay competitive.
The benchmark results set a new standard for AI storage systems, proving that high performance can come in small packages. The AI400X3’s ability to handle both small-scale and large-scale AI tasks makes it a versatile choice. It’s ideal for startups just beginning their AI journey as well as enterprises running complex, distributed training projects.
According to Sven Oehme, CTO at DDN, the company’s focus was on creating infrastructure that combines precision, speed, and reliability. The results from MLPerf Storage 2025 show that DDN’s system can keep pace with advanced GPUs, often exceeding expectations, all within a power-efficient footprint. It’s a game-changer for AI data storage and management.
This breakthrough isn’t just about raw speed. It’s about redefining what’s possible in AI infrastructure by enabling faster insights, reducing costs, and supporting larger, more complex models. Whether for small teams or global organizations, the AI400X3 makes high-performance AI more accessible and sustainable than ever before.