Over the past month, VAST Data has announced partnerships with both Google Cloud and Microsoft Azure to deliver its hyperconverged approach to AI data management (which it calls an “AI Operating System”) as a managed service in public cloud environments.
The Google Cloud integration, announced November 11, enables customers to deploy VAST’s platform through Google Cloud Marketplace with validated support for TPU virtual machines.
Its Microsoft Azure collaboration, revealed at Microsoft Ignite on November 18, will provide similar capabilities on Azure infrastructure, including the new Laos VM Series with Azure Boost networking.
Both partnerships address the same fundamental challenge: enabling enterprises to run AI workloads across hybrid environments without data migration delays or architectural complexity.
VAST is touting these offerings as infrastructure for “agentic AI” workloads, though the company provides limited specificity about what distinguishes agentic AI infrastructure requirements from traditional machine learning operations.
Google Cloud Integration
VAST’s deployment model on Google Cloud lets customers extend its DataSpace technology across hybrid environments, creating what the company describes as a unified global namespace spanning on-premises and cloud clusters.
The Google Cloud offering provides several technical capabilities:
Cross-Region Data Access
In its announcement, VAST demonstrated DataSpace connectivity spanning over 10,000 kilometers, linking clusters in the United States and Japan. According to the company, this configuration delivered “near real-time access” to the same datasets in both locations while running inference workloads with vLLM.
The demonstration showed simultaneous operation of AI models on Google Cloud TPUs in the US and GPUs in Japan without data duplication, though VAST has not disclosed latency metrics or bandwidth requirements for maintaining this unified namespace at scale.
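For context, the serving side of such a demonstration is ordinary vLLM usage; the sketch below assumes the shared namespace is exposed as a local mount (the /mnt/vast path is illustrative, not a documented VAST default), with accelerator-specific setup for TPUs or GPUs omitted.

```python
# Hypothetical sketch: each region points vLLM at the same model path exposed
# through the shared namespace, so both sides read one copy of the weights.
from vllm import LLM, SamplingParams

MODEL_PATH = "/mnt/vast/models/Llama-3.1-8B-Instruct"  # assumed mount layout

# The same code would run in the US and Japan clusters against their local
# accelerators; the unified namespace handles where the bytes actually live.
llm = LLM(model=MODEL_PATH)
params = SamplingParams(temperature=0.2, max_tokens=128)

outputs = llm.generate(["Summarize the latest telemetry batch."], params)
print(outputs[0].outputs[0].text)
```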
TPU Virtual Machine Integration
VAST claims its platform integrates directly with Google Cloud TPU VMs through validated NFS paths.
The company tested model loading performance using Meta’s Llama-3.1-8B-Instruct model and reports achieving “model load speeds comparable to some of the best options available in the cloud” while maintaining “predictable performance during cold starts.”
VAST has not published specific throughput numbers, latency measurements, or comparative benchmarks against native Google Cloud storage options.
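Absent published numbers, buyers can measure this themselves; the following is a minimal cold-start timing sketch, assuming the VAST export is already NFS-mounted on the TPU VM and the Llama-3.1-8B-Instruct weights are staged on it (the path is an assumption, not documented by VAST).

```python
# Minimal cold-start timing harness against an assumed NFS mount.
import time
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/mnt/vast/models/Llama-3.1-8B-Instruct"  # assumed mount layout

start = time.perf_counter()
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)  # reads roughly 16 GB of bf16 weights
elapsed = time.perf_counter() - start

print(f"Cold-start load from NFS path took {elapsed:.1f}s")
```

Running the same harness against a native cloud storage option gives the comparative baseline VAST has not supplied.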
Data Movement Optimization
The platform uses what VAST describes as “intelligent streaming” to present datasets across environments without full replication. The company claims this approach reduces egress costs by streaming only required data subsets rather than replicating entire datasets.
Cost savings will depend heavily on workload access patterns, data locality, and the balance between compute and egress pricing in specific use cases.
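A rough way to frame that tradeoff is shown below; every figure in this sketch is a placeholder rather than VAST or Google Cloud pricing, and the real answer hinges on how much of the dataset a job actually touches.

```python
# Illustrative back-of-envelope only; all numbers are hypothetical.
DATASET_GB = 50_000          # full corpus size
ACCESSED_FRACTION = 0.08     # share of the corpus a given job actually reads
EGRESS_PER_GB = 0.08         # placeholder cross-region egress rate, $/GB

full_replication_cost = DATASET_GB * EGRESS_PER_GB
streamed_cost = DATASET_GB * ACCESSED_FRACTION * EGRESS_PER_GB

print(f"Replicate everything:    ${full_replication_cost:,.0f}")
print(f"Stream accessed subset:  ${streamed_cost:,.0f}")
```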
Platform Components
The Google Cloud deployment includes VAST’s full stack:
- DataStore for unified file/object/block access
- DataBase for transactional and analytical workloads
- InsightEngine for compute services
- AgentEngine for workflow orchestration
- DataSpace for cross-cluster federation
The company has not detailed how these components map to underlying Google Cloud infrastructure or what dependencies exist on Google Cloud native services.
Microsoft Azure Integration
The Azure partnership announcement provides less technical specificity than the Google Cloud release. VAST describes the integration as “available soon” rather than generally available today.
Here is what we do know about the integration:
Infrastructure Foundation
VAST will run on Microsoft’s Laos VM Series, which incorporates Azure Boost Accelerated Networking technology. This is new hardware from Microsoft, though Microsoft has not yet disclosed specific performance characteristics, instance configurations, or availability timelines.
Hybrid Architecture
Similar to the Google Cloud implementation, the Azure offering will support VAST’s DataSpace technology for creating unified namespaces across on-premises and cloud environments.
The company claims customers will be able to “instantly burst from on-premises to Azure for GPU-accelerated workloads without migration or reconfiguration,” though burst capacity limitations, network requirements, and performance implications at scale remain unspecified.
Storage Architecture
VAST’s Disaggregated, Shared-Everything (DASE) design will enable independent scaling of compute and storage resources within Azure. The architecture includes the company’s Similarity Reduction technology for storage efficiency, though VAST has not published data reduction ratios or specified which workloads benefit most from this capability.
Agentic AI Positioning
Both announcements emphasize support for “agentic AI” workloads. VAST describes InsightEngine as delivering “stateless, high-performance compute and database services” for vector search and RAG pipelines, while AgentEngine orchestrates “autonomous agents operating on real-time data streams.”
The company has not defined what technical requirements differentiate agentic AI from traditional ML infrastructure or why existing data platforms cannot adequately support these workloads.
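For readers mapping the terminology to familiar mechanics, the retrieval step at the heart of a RAG pipeline is well understood; the sketch below uses plain NumPy and is not the InsightEngine or AgentEngine API, which VAST has not publicly documented.

```python
# Generic RAG retrieval step, shown only to ground the terminology.
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k most similar document vectors (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]

docs = np.random.rand(1_000, 768)   # stand-in document embeddings
query = np.random.rand(768)         # stand-in query embedding
context_ids = top_k(query, docs)
# The retrieved chunks would then be injected into the prompt of whatever model
# the agent drives; the "agentic" part is the orchestration loop around steps
# like this, which neither announcement details.
print(context_ids)
```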
Analysis
VAST Data’s dual public cloud partnerships show a significant infrastructure commitment from both Google Cloud and Microsoft Azure.
The managed service delivery model on both platforms addresses real operational challenges enterprises face when running AI workloads across hybrid environments, especially around data mobility, unified governance, and workload placement flexibility.
The Google Cloud integration provides immediate availability with validated TPU support and demonstrated cross-region capability, while the Azure partnership positions VAST for future Microsoft custom silicon initiatives.
Both offerings deliver VAST’s complete platform stack (storage, database, compute services, and data federation) as native cloud deployments rather than self-managed installations.
The partnerships provide clear strategic value for VAST by securing distribution through cloud marketplaces and gaining visibility with enterprises already committed to Google Cloud or Azure.
For Google Cloud and Microsoft, VAST fills a gap in hybrid AI infrastructure, as neither vendor offers comparable unified namespace capabilities for bridging on-premises and cloud environments at this scale. The fact that both cloud providers have endorsed VAST as a managed offering is a significant validation of VAST’s approach and capabilities.
Organizations evaluating VAST’s public cloud offerings should focus on concrete use cases where hybrid data access justifies the additional architectural layer.
Workloads confined to a single cloud region may see limited benefit compared to native storage services. However, enterprises running distributed AI operations across multiple locations, using diverse accelerator types, or maintaining substantial on-premises investments while expanding cloud usage represent the natural fit for VAST’s cross-environment data federation capabilities.
Competitive Outlook & Advice to IT Buyers
VAST’s entry into the mainstream public cloud positions it against both cloud-native storage services and traditional enterprise storage vendors offering cloud extensions.
Google Cloud and Azure both provide native high-performance storage options, including Google Cloud Filestore and Parallelstore for file workloads, and Azure NetApp Files and Azure HPC Cache for NFS access.
VAST differentiates primarily on hybrid architecture and unified namespace capabilities rather than raw performance metrics.
Let’s take a more detailed comparative look…
These sections are only available to NAND Research clients and IT Advisory Members. Please reach out to [email protected] to learn more.



