Here's a not-so-secret about software companies: the best ones never forget about hardware.
At VAST, we learned early that truly efficient systems aren't born from elegant code alone. They emerge from the friction-free marriage of software and silicon. Our Disaggregated Shared-Everything (DASE) architecture proved the point: it became possible only when NVMe-oF arrived, giving us the low-latency fabric we needed to disaggregate capacity from compute without compromise.
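To make that disaggregation concrete, here is what attaching remote capacity over NVMe-oF looks like from a compute node. This is a minimal sketch assuming a Linux host with nvme-cli installed; the transport, address, and NQN below are placeholders, not our actual fabric configuration.

```python
import subprocess

# Placeholder fabric parameters -- illustrative only, not VAST's topology.
TRANSPORT = "rdma"            # NVMe-oF also supports tcp and fc transports
TARGET_ADDR = "192.168.1.10"  # IP of the storage enclosure's fabric port
TARGET_PORT = "4420"          # conventional NVMe-oF service port

def discover_subsystems() -> str:
    """List the NVMe subsystems the enclosure exports over the fabric."""
    return subprocess.run(
        ["nvme", "discover", "-t", TRANSPORT, "-a", TARGET_ADDR, "-s", TARGET_PORT],
        capture_output=True, text=True, check=True,
    ).stdout

def connect_subsystem(nqn: str) -> None:
    """Attach a remote namespace; it then appears as a local /dev/nvmeXnY."""
    subprocess.run(
        ["nvme", "connect", "-t", TRANSPORT, "-n", nqn,
         "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )

if __name__ == "__main__":
    print(discover_subsystems())
    # connect_subsystem("nqn.2014-08.org.example:subsys1")  # example NQN only
```

Once connected, the remote namespace shows up as a local block device, so capacity can scale behind the fabric without touching the compute tier.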
That insight naturally led us to NVIDIA BlueField.
The Ceres Gambit
When we first started developing our second-generation storage enclosure (the Ceres DBox), we made a deliberate choice to let BlueField DPUs run the entire system. They do more than accelerate I/O or offload networking: they manage the box itself, removing the need for an x86 server motherboard and eliminating its cost and power draw. BlueField gives us a fully programmable control plane for enclosure telemetry and drive orchestration, and the VAST AI Operating System container runs directly on the BlueField processor, moving data between the enclosure's SSDs and the NVMe fabric.
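To give a feel for what a DPU-hosted control plane can do, here is a hedged sketch of an enclosure telemetry loop that polls each drive's SMART health log through nvme-cli and flags drives running hot or low on spare capacity. The drive list, thresholds, and polling interval are illustrative assumptions, not our production management code.

```python
import json
import subprocess
import time

DRIVES = [f"/dev/nvme{i}" for i in range(4)]  # assumed enclosure layout
TEMP_LIMIT_K = 343  # ~70 C; NVMe reports composite temperature in Kelvin

def smart_log(dev: str) -> dict:
    """Read a drive's SMART / health log as JSON via nvme-cli."""
    out = subprocess.run(
        ["nvme", "smart-log", dev, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def poll_once() -> None:
    for dev in DRIVES:
        log = smart_log(dev)
        temp = log.get("temperature", 0)     # Kelvin, per the NVMe spec
        spare = log.get("avail_spare", 100)  # percent of spare capacity left
        if temp > TEMP_LIMIT_K or spare < 10:
            print(f"{dev}: attention needed (temp={temp} K, spare={spare}%)")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)  # one-minute interval, chosen arbitrarily for the sketch
```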
This marked the beginning of a long-standing collaboration with NVIDIA, one that has let us harness BlueField technology to accelerate AI storage and set new benchmarks for performance and efficiency across our data platforms.
The payoff came quickly. When we refreshed Ceres with BlueField-3, performance jumped while the box consumed less power and required fewer network connections. It was the rare upgrade that improved throughput and efficiency simultaneously.
The Isolation Equation
Some of our cloud provider customers deploy BlueField DPUs both within their client environments and inside their VAST clusters to establish a provable network isolation layer between tenants. The BlueField DPU authenticates into each tenant's virtual private network and presents a virtual NIC to the host operating system, ensuring the host remains connected only to that tenant's network and no other, even when the tenant has full bare-metal access to a GPU server.
At the hardware level, BlueField enforces this boundary with built-in identity and control mechanisms that prevent any cross-traffic between virtual networks. Because administrative access to the DPU itself is reserved for the cloud provider's administrators, tenants cannot reconfigure it to reach one another's networks, closing off crosstalk and data leakage. The result is an isolation model that brings true zero-trust security to shared AI infrastructure.
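The effect is easy to observe from the tenant's side: the host can only enumerate the interfaces the DPU chooses to present. Below is a minimal check on a Linux host; the claim about which interfaces appear assumes a DPU-mediated deployment like the one described above.

```python
import os

def visible_interfaces() -> list[str]:
    """Enumerate the network interfaces the host OS can see."""
    return sorted(os.listdir("/sys/class/net"))

if __name__ == "__main__":
    # On a DPU-isolated bare-metal host, this list contains only the
    # loopback device and the virtual NIC(s) the DPU exposes; the
    # physical fabric ports belong to the DPU and never appear here.
    for ifc in visible_interfaces():
        print(ifc)
```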
Enter NVIDIA BlueField-4
Now comes NVIDIA BlueField-4, a major leap forward in data acceleration and infrastructure innovation.
The platform delivers greater computing power, a jump from 400 Gbps to 800 Gbps networking, and support for the next generation of larger, faster SSDs in even denser enclosures. BlueField-4 integrates the NVIDIA Grace CPU, delivering six times the compute horsepower of previous generations. That headroom opens the door to deeper data-plane optimization, richer in-band analytics, and real-time telemetry inside the DBox, enabling smarter enclosure management, predictive maintenance, and dynamic workload placement driven by live telemetry.
Each generation of NVIDIA BlueField has fit cleanly into our DASE architecture without requiring code-level rewrites. The abstraction layers within the VAST AI OS manage the differences in cores, bandwidth, and firmware automatically. Our customers inherit the performance and efficiency benefits the moment new hardware is introduced.
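As a sketch of what that abstraction means in practice, upper layers can code against a discovered device profile rather than a specific BlueField generation. The interface below is hypothetical, illustrating the pattern rather than the VAST AI OS's actual internals; the core counts and bandwidth figures match NVIDIA's published BlueField-2 and BlueField-3 specs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DpuProfile:
    """Generation-specific facts the platform layer discovers at boot.

    Hypothetical illustration -- not the VAST AI OS's real interface."""
    name: str
    arm_cores: int
    fabric_gbps: int

# Upper layers size thread pools and queue depths from the profile,
# so a new DPU generation changes numbers, not code.
BLUEFIELD_2 = DpuProfile("BlueField-2", arm_cores=8, fabric_gbps=200)
BLUEFIELD_3 = DpuProfile("BlueField-3", arm_cores=16, fabric_gbps=400)

def io_worker_count(profile: DpuProfile) -> int:
    """Derive a worker count from discovered hardware, not hard-coded values."""
    return max(4, profile.arm_cores - 2)  # reserve cores for the control plane

if __name__ == "__main__":
    for p in (BLUEFIELD_2, BLUEFIELD_3):
        print(p.name, "->", io_worker_count(p), "I/O workers")
```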
The Road Ahead
BlueField has given us a stable, purpose-built foundation without the overhead of a general-purpose server. It lets our software orchestrate storage, networking, and security as services of one integrated platform rather than separate stacks. Storage is now a subcomponent of a larger intelligent system that operates more efficiently and securely when powered by NVIDIA BlueField technology.
As BlueField continues to evolve, our software evolves with it. Every new generation expands what we can orchestrate in hardware and what we can simplify in software. That is the collaboration that keeps advancing the state of the art in AI infrastructure.