At its recent BUILD 2025 event, Snowflake unveiled a substantial array of platform updates centered on three core themes: compute performance optimization, data interoperability across heterogeneous environments, and operational automation.
The offerings address persistent enterprise challenges around data fragmentation, manual infrastructure management, and the operational overhead of supporting both transactional and analytical workloads on unified platforms.
The most significant technical development is Standard Warehouse Generation 2 (Gen2), now generally available, which the vendor claims delivers more than double the analytics performance through hardware and software upgrades to its compute engine.
This release arrives alongside Snowflake Adaptive Compute, a warehouse management automation service entering private preview that dynamically adjusts resources without manual configuration.
On the data integration front, Snowflake is expanding its enterprise lakehouse capabilities through enhanced Apache Iceberg interoperability, the general availability of Openflow (a managed data ingestion service based on Apache NiFi), and the introduction of Snowflake Postgres following its Crunchy Data acquisition.
The company is also strengthening governance capabilities through AI-powered features in Horizon Catalog, including automated PII detection and redaction, alongside enhanced security measures such as immutable backups and Tri-Secret Secure encryption support for Hybrid Tables. Performance improvements extend to streaming workloads.
Compute Performance & Infrastructure Automation
Snowflake’s Standard Warehouse Gen2 represents a fundamental upgrade to the platform’s compute architecture. The company reports performance improvements exceeding 2x for general analytics workloads, with write-heavy and update-intensive operations achieving 2x-4x faster execution.
These improvements apply automatically to existing workloads without requiring query modifications or application changes, addressing a common enterprise concern about migration complexity.
The performance gains derive from several complementary enhancements:
- Optima Indexing: Now generally available, this capability analyzes workload patterns and proactively identifies recurring point-lookup queries suitable for acceleration
- Automated optimization: The system continuously monitors query patterns and applies optimizations in the background, reducing the need for manual tuning
- Resource scaling: The architecture scales computing resources proportionally with data volume growth and workload complexity increases
Snowflake Adaptive Compute, entering private preview, introduces “adaptive warehouses” that eliminate manual warehouse sizing, concurrency configuration, and multi-cluster management.
The service dynamically adjusts resources behind the scenes to optimize the cost-performance tradeoff.
This automation aligns with broader cloud platform trends toward self-managing infrastructure, though organizations should evaluate how loss of manual control affects their specific performance requirements and cost management strategies.
Data Integration & Ingest
Snowflake Openflow, now generally available, provides managed data integration based on Apache NiFi. The service supports both batch and streaming workloads through hundreds of pre-built connectors and includes native integration with Snowpipe Streaming.
The company positions Openflow as addressing the disproportionate time data engineering teams spend managing ingestion pipelines, though organizations must weigh the benefits of Snowflake’s managed service against existing investments in third-party ETL tools and custom pipeline infrastructure.
Openflow offers two deployment models:
- Bring Your Own Cloud (BYOC): Generally available on AWS, allowing organizations to deploy within their own cloud environments
- Snowflake-managed deployment: Now generally available on AWS and Azure through Snowpark Container Services, providing fully integrated infrastructure management without requiring network configuration or security boundary management
The platform also expands its integration ecosystem through several strategic partnerships:
- Oracle CDC collaboration: Entering public preview, this integration enables near real-time change data capture built on Openflow for continuous streaming of transactional updates
- SAP integration: In private preview, this extends SAP Business Data Cloud with bidirectional integration and zero-copy data sharing
- Salesforce integration: Generally available with zero-copy architecture powered by Snowflake Intelligence
- dbt projects on Snowflake: Now generally available, allowing data engineers to build, test, deploy, and monitor dbt transformation projects directly within Snowflake
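For teams adopting dbt projects on Snowflake, the workflow remains standard dbt development. A minimal sketch of the kind of model such a project might contain (model and column names here are illustrative, not from the announcement):

```sql
-- models/orders_daily.sql: an illustrative incremental dbt model.
-- Table and column names are hypothetical examples.
{{ config(materialized='incremental', unique_key='order_date') }}

select
    order_date,
    count(*)    as order_count,
    sum(amount) as total_amount
from {{ ref('stg_orders') }}
{% if is_incremental() %}
-- On incremental runs, only process days newer than what is already loaded
where order_date > (select max(order_date) from {{ this }})
{% endif %}
group by order_date
```

With dbt projects running natively, builds, tests, and scheduling for models like this can be managed inside Snowflake rather than in separate orchestration infrastructure.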
Streaming performance has improved substantially with Snowflake Streaming V2, generally available on AWS with Azure and GCP availability planned.
Snowflake claims a 56% reduction in query completion time and improved end-to-end latency based on TPC-DS benchmark testing, alongside a more predictable usage-based pricing model.
The platform now supports ingestion rates up to 10 GB/second with data available for querying within 10 seconds after ingest, addressing requirements for operational analytics, real-time personalization, and time-sensitive decision-making.
Enterprise Lakehouse & Iceberg Interoperability
Snowflake is advancing its enterprise lakehouse architecture through expanded Apache Iceberg support and cross-engine interoperability.
Horizon Catalog now incorporates open APIs from Apache Polaris (Incubating) and Apache Iceberg REST Catalog, enabling external engines to access Snowflake-managed Iceberg tables without requiring separate Apache Polaris accounts, additional user management, or duplicate security configurations.
The enhanced Iceberg capabilities include:
- External engine read access: Entering public preview, allowing query engines supporting the Iceberg REST protocol to directly access Snowflake-managed Iceberg tables
- External engine write access: In private preview, enabling external engines to create, update, and manage data in Iceberg tables
- Apache Iceberg V3 support: Private preview includes new variant and geospatial data types, expanding use case coverage
- Zero-ETL data sharing: Generally available for Apache Iceberg and Delta Lake tables across different metadata catalogs
- Catalog-linked databases: Synchronization with Iceberg-based metadata catalogs including Apache Polaris, Snowflake Open Catalog, and AWS Glue
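To ground the catalog interoperability described above, the sketch below shows roughly how Snowflake is wired to an external Iceberg catalog (AWS Glue in this example). All identifiers, ARNs, and volume names are placeholders, and exact options should be checked against current Snowflake documentation:

```sql
-- Sketch: connect Snowflake to an external Iceberg catalog (AWS Glue).
-- Role ARNs, IDs, and names below are illustrative placeholders.
CREATE CATALOG INTEGRATION glue_iceberg_cat
  CATALOG_SOURCE    = GLUE
  CATALOG_NAMESPACE = 'analytics'
  TABLE_FORMAT      = ICEBERG
  GLUE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_glue_role'
  GLUE_CATALOG_ID   = '123456789012'
  ENABLED = TRUE;

-- An Iceberg table whose metadata lives in the external catalog
CREATE ICEBERG TABLE sales
  EXTERNAL_VOLUME    = 'iceberg_vol'
  CATALOG            = 'glue_iceberg_cat'
  CATALOG_TABLE_NAME = 'sales';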
Organizations gain flexibility to use multiple query engines against a single copy of data, though this introduces complexity around performance optimization, consistency management, and troubleshooting across heterogeneous engine environments.
The company has added Business Continuity and Disaster Recovery (BCDR) capabilities for managed Iceberg tables, now in public preview, allowing asynchronous replication of account objects and databases across regions and clouds using failover groups.
Snowflake Postgres & Transactional Workload Support
Following its acquisition of Crunchy Data, Snowflake is introducing Snowflake Postgres (entering public preview) as a fully-managed service bringing PostgreSQL to the Snowflake platform.
The service maintains full compatibility with open source Postgres, allowing organizations to run operational workloads without code rewrites while preserving support for existing Postgres extensions, ORMs, and client frameworks.
The architectural integration addresses a longstanding challenge: the separation of transactional data in Postgres from analytical data has forced costly data movement and prevented real-time data access for applications and AI agents.
By bringing transactional Postgres data within the same secure foundation as analytics and AI workloads, Snowflake aims to enable organizations to build context-aware AI agents and intelligent applications using fresh transactional data on a unified platform.
Complementing the managed service, Snowflake has open sourced pg_lake, now generally available, consisting of Postgres extensions that enable developers to query, manage, and write to Apache Iceberg tables using standard SQL from their Postgres environment. This allows Postgres to interact directly with lakehouse data in object storage through support for CSV, Parquet, and JSON file formats.
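As a rough illustration of the workflow pg_lake enables from a Postgres session — all identifiers are hypothetical, and the exact DDL options should be confirmed against the pg_lake documentation:

```sql
-- Illustrative sketch only: table and column names are hypothetical,
-- and DDL specifics should be verified against pg_lake docs.
CREATE EXTENSION pg_lake;

-- An Iceberg table whose data lives in object storage
CREATE TABLE events (
    event_id   bigint,
    payload    jsonb,
    created_at timestamptz
) USING iceberg;

-- Standard Postgres SQL works against the lakehouse-backed table
INSERT INTO events VALUES (1, '{"action": "login"}', now());
SELECT count(*) FROM events
WHERE created_at > now() - interval '1 day';
```

The appeal is that existing Postgres tooling — ORMs, drivers, psql — sees an ordinary table while the data itself resides in open Iceberg format.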
Snowflake Unistore, powered by Hybrid Tables, is now generally available on Microsoft Azure, enabling organizations to build lightweight transactional applications on Snowflake.
The platform has added security enhancements including Tri-Secret Secure (TSS) encryption support (generally available on AWS, public preview on Azure) and automatic periodic rekeying (generally available on both platforms) to help organizations meet regulatory requirements.
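Hybrid Tables use familiar DDL with transactional constraints such as a required primary key and optional secondary indexes. A minimal sketch (table and column names are illustrative):

```sql
-- Sketch of a Hybrid Table for a lightweight transactional workload
-- (Unistore); identifiers here are illustrative examples.
CREATE HYBRID TABLE user_sessions (
    session_id  VARCHAR PRIMARY KEY,   -- primary key is required
    user_id     NUMBER NOT NULL,
    started_at  TIMESTAMP_NTZ,
    INDEX idx_user (user_id)           -- secondary index for point lookups
);
```

This lets row-level transactional reads and writes coexist with the platform's analytical access paths over the same table.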
Governance, Security, and Resilience Enhancements
Snowflake Horizon Catalog has been enhanced with several AI-powered governance capabilities and security features addressing enterprise compliance requirements.
The new Copilot for Horizon Catalog leverages the Snowflake Cortex AI platform to answer security and governance questions through a conversational interface, reducing the technical expertise required for compliance activities.
Key security and governance additions include:
- AI Redact: Public preview AI SQL function detecting and redacting personally identifiable information (PII) in unstructured data, addressing a significant barrier to enterprise AI adoption
- Data Security Posture Management: Public preview UI in Trust Center for managing and automating sensitive data detection, tagging, protection, and monitoring
- Trust Center Extensions: Public preview capability allowing integration of third-party security scanners tailored to specific compliance requirements, with extensions shareable via Snowflake Marketplace
- AI Observability Tools: Generally available, providing real-time diagnostics and performance insights across the Snowflake environment
- External data lineage visibility: Public preview feature tracking data provenance across external systems
- Anomaly detection UI: Public preview centralized security anomaly monitoring across all organizational accounts with automated alerting
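As an illustration of how the PII-redaction capability might be invoked in SQL — the function name here follows the announcement, and the exact signature and entity-type options may differ in the preview:

```sql
-- Hedged sketch: AI_REDACT per the BUILD 2025 announcement; confirm the
-- exact name, arguments, and options in the public preview docs.
SELECT
    ticket_id,
    AI_REDACT(ticket_body) AS ticket_body_redacted
FROM support_tickets;
```

Running redaction as a SQL function means sensitive text can be masked inline in pipelines and views rather than through a separate preprocessing service.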
Snowflake Backups, approaching general availability, introduce point-in-time, immutable snapshots that cannot be altered or deleted once created, even by administrators.
This capability addresses ransomware protection, regulatory cyber resilience standards, and data integrity requirements for auditing or legal purposes. The backup functionality integrates with Snowflake’s account replication capabilities, allowing both backup sets and policies to replicate across regions and cloud providers for comprehensive disaster recovery.
Performance Optimization and Cost Management
Beyond Gen2 warehouse improvements, Snowflake also introduced several capabilities focused on query performance and cost visibility.
Dynamic Tables, which allow defining desired state through a single SQL query, now support immutability features (generally available) that lock specific table areas during refreshes, reducing recomputation requirements and associated costs. Dynamic Iceberg tables (generally available) enable data storage in external cloud storage while maintaining Snowflake management.
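The "desired state through a single SQL query" model for Dynamic Tables can be sketched as follows — warehouse name, lag value, and table names are illustrative:

```sql
-- Minimal dynamic table sketch: the desired result is declared as one
-- query, and Snowflake manages incremental refreshes to meet TARGET_LAG.
-- Warehouse and table names are illustrative.
CREATE OR REPLACE DYNAMIC TABLE daily_revenue
  TARGET_LAG = '15 minutes'
  WAREHOUSE  = transform_wh
AS
SELECT order_date, SUM(amount) AS revenue
FROM raw_orders
GROUP BY order_date;
```

The platform, not the engineer, decides when and how to recompute — which is exactly the recomputation cost that the new immutability features aim to reduce further.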
Cost management capabilities have expanded to address the granular monitoring requirements of shared resource environments:
- Granular cost allocation: Private preview SQL-based capability for performing detailed cost allocation of shared resources across organizational accounts
- Tag-based budgeting: Entering private preview, allowing budget setting for users of shared resources with consumption monitoring at the user level to prevent overruns
- Usage-based pricing for streaming: More predictable cost model for Snowflake Streaming V2 workloads
AI-Powered Migration and Development Tools
SnowConvert AI has been enhanced with capabilities addressing the complexity and risk of legacy system migrations. AI-powered code verification and repair, entering public preview, automates the testing and repair of converted code to ensure accuracy before deployment.
Automated and incremental code validation, now generally available, checks converted code for semantic equivalence in smaller increments to increase confidence in the migrated code and data.
The migration support extends beyond database conversion to include end-to-end ecosystem migration:
- Legacy ETL migration: Public preview support for modernizing extraction, transformation, and loading pipelines
- BI repointing: Public preview capability for updating business intelligence reports to reference new database systems
- Consistency across environments: Comprehensive migration approach reducing risk and maintaining consistency throughout the data ecosystem
Snowflake Workspaces provides a new file-based development environment featuring AI-assisted coding, Git integration, and side-by-side code comparison capabilities.
This environment supports native dbt project development within Snowflake, allowing data engineers to focus on delivering insights rather than maintaining infrastructure.
Analysis
At BUILD 2025, Snowflake delivered a comprehensive set of enhancements that address fundamental enterprise challenges in data infrastructure management, cross-system interoperability, and operational automation.
The technical breadth spans compute optimization, managed data integration, transactional database support, and AI-powered governance, showing the company continuing its shift from a specialized cloud data warehouse toward a comprehensive data platform.
The most immediately impactful capabilities provide tangible benefits for organizations struggling with fragmented data ecosystems and manual operational overhead.
The reported performance improvements, combined with automated warehouse management, address real pain points around query optimization and resource allocation.
The competitive landscape has shifted toward platform convergence rather than clear product category boundaries, with Snowflake, Databricks, and major cloud providers increasingly offering overlapping capabilities with different optimization points.
Snowflake’s announcements from BUILD 2025 show the company continuing to deliver meaningful value for enterprise data architects. The combination of substantial performance gains, intelligent automation, comprehensive data integration capabilities, and robust governance frameworks promises to help organizations accelerate AI adoption while maintaining enterprise-grade security and compliance.
Modern data-driven enterprises, particularly those constrained by fragmented architectures, manual operational workflows, or governance gaps impeding AI initiatives, should actively evaluate how Snowflake’s evolved capabilities align with their strategic technology roadmaps.
The platform has matured beyond specialized analytics to address end-to-end data lifecycle requirements, enabling enterprise customers to transform data infrastructure from operational burden into a competitive advantage capable of supporting increasingly sophisticated AI-driven business models at enterprise scale.
Competitive Outlook & Advice to IT Buyers
Snowflake’s BUILD 2025 announcements position the company more directly against multiple categories of competitors, though each advancement brings both strategic opportunities and implementation challenges that organizations must carefully evaluate…
These sections are only available to NAND Research clients and IT Advisory Members. Please reach out to [email protected] to learn more.