
Snowflake Dynamic Tables vs Materialized Views: When Each Works, When Neither Does

Snowflake Dynamic Tables and Materialized Views both promise fresh derived state without manual pipelines. This guide covers how each refresh mechanism actually works, when to choose Dynamic Tables over Materialized Views, refresh mode and target lag tradeoffs, monitoring and compute costs, and where both fall short for real-time decisioning workloads.

Boyd Stowe
Solutions Engineering
13 min read

TL;DR: Snowflake Dynamic Tables use a scheduled `TARGET_LAG` parameter (minimum 1 minute) to automatically refresh a SQL query that can include joins, unions, and window functions across multiple tables. Materialized Views are limited to a single base table without joins, maintained asynchronously by Snowflake, and automatically used by the query optimizer to rewrite queries transparently. Dynamic Tables replace the streams-and-tasks pattern for multi-level ELT pipelines. Materialized Views improve query performance on frequent aggregations. Both introduce latency that makes neither suitable for sub-second real-time decisioning use cases like fraud scoring, agent state, or session-level personalization.

Snowflake Dynamic Tables and Materialized Views both solve the same underlying problem: keeping derived state fresh without building and maintaining a data pipeline by hand. Both let you write a SQL query against source tables and get back something Snowflake keeps current as the underlying data changes. Both abstract away the scheduling, the incremental computation, and the orchestration.

But they are not interchangeable. Dynamic Tables are Snowflake’s more recent and more flexible option — they support joins, unions, window functions, and multi-level data pipelines — and they run on a scheduled refresh controlled by a `TARGET_LAG` parameter. Materialized Views are older, restricted to a single base table with no joins, and maintained asynchronously by a background service; the Snowflake query optimizer can automatically rewrite queries to use them.

The comparison matters because choosing the wrong one introduces either unnecessary operational complexity or a ceiling on query performance you didn’t design for. And there’s a third case the comparison misses entirely: workloads where neither is fresh enough, regardless of which you pick.

What Are Snowflake Dynamic Tables?

Snowflake Dynamic Tables are declarative database objects that materialize the results of a SQL query and automatically refresh as the underlying data changes, using a `TARGET_LAG` parameter to specify the maximum acceptable delay between source updates and when those updates appear in the dynamic table. Unlike Materialized Views, Dynamic Tables support complex queries with joins, unions, and window functions across multiple tables, making them suitable for multi-level data pipelines and ELT transformations.

A dynamic table is a table in the physical sense — it occupies storage, has a defined schema, and is queryable like any traditional table. What makes it dynamic is that its contents are defined by a SQL query against one or more source tables, and Snowflake keeps those contents current on a schedule you control. You never write to a dynamic table directly. You write to the source tables, and the dynamic table updates automatically.
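For example, reading from a dynamic table looks exactly like reading from an ordinary table. A minimal sketch, using the `orders_enriched` table defined in the CREATE example further down:

sql
-- Reads work like any table; no special syntax required
SELECT customer_tier, COUNT(*) AS orders, SUM(amount) AS revenue
FROM orders_enriched
GROUP BY customer_tier;

-- Direct DML is rejected; writes go to the source tables instead
-- INSERT INTO orders_enriched VALUES (...);  -- fails: dynamic tables are read-only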

Dynamic Tables were introduced to replace an operational pattern that had grown common and painful: chaining streams and tasks to build multi-stage transformation pipelines inside Snowflake. That pattern worked but required explicit procedural code — create a stream on each source, write tasks that merge changed data into target tables, schedule the tasks, handle dependencies between them, monitor for failures. Dynamic Tables collapse that entire workflow into a single declarative definition.

Refresh Modes and Target Lag

The core of Dynamic Tables is the refresh mechanism. Rather than computing the defined query on every read, Snowflake pre-computes and stores the results, then maintains them as source data changes. How that maintenance happens is controlled by two parameters: refresh mode and target lag.

Dynamic Tables support three refresh modes.

Incremental refresh is the default when possible: Snowflake tracks change data capture on the source tables, identifies only the rows that changed since the last refresh, and applies just those changes to the dynamic table. This is the same class of mechanism that powers incremental materialized views in other systems — incremental processing is significantly cheaper than recomputing the full result and is what makes Dynamic Tables viable for large source tables. Most simple SELECT queries, joins, and aggregations support incremental refresh.

Full refresh recomputes the entire table from scratch on each refresh. Snowflake uses it when the defined query contains operations that can’t be updated incrementally — certain window functions, non-deterministic functions, or patterns where tracking changed data would be more expensive than recomputing. More expensive, but always correct.

Auto refresh lets Snowflake choose between incremental and full automatically based on query complexity. It is the recommended default unless you have a specific reason to pin one mode.

You set the refresh mode when you create the dynamic table:

sql
CREATE OR REPLACE DYNAMIC TABLE orders_enriched
  TARGET_LAG = '5 minutes'
  WAREHOUSE = transform_wh
  REFRESH_MODE = INCREMENTAL
AS
  SELECT
    o.order_id,
    o.customer_id,
    c.customer_tier,
    o.amount,
    o.created_at
  FROM orders o
  JOIN customers c ON o.customer_id = c.customer_id;

`TARGET_LAG` is the maximum amount of time you want the data in the dynamic table to lag behind changes in the source. It’s expressed as a number and a time unit — `'1 minute'`, `'5 minutes'`, `'1 hour'`, `'1 day'` — and Snowflake schedules refreshes to meet that target. It is your explicit freshness-versus-latency contract. The minimum target lag for a Snowflake dynamic table is 1 minute. You cannot set a target lag of seconds. This is a hard platform floor, not a per-table configuration choice. Anyone who tells you Dynamic Tables can refresh in seconds is confusing `TARGET_LAG = '60 seconds'` (which resolves to 1 minute) with actual sub-minute refresh behavior. The refresh service that manages all dynamic tables across the platform has operational limits that prevent going below a one-minute cadence.

Target lag values also apply downstream. When you chain dynamic tables, you can set the target lag of downstream dynamic tables to `DOWNSTREAM`, which means the downstream table refreshes only when its upstream dependency does. This lets you define a pipeline where only the final dynamic table has a specific time schedule, and everything upstream refreshes on demand to meet that schedule efficiently.
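A minimal sketch of that pattern, with hypothetical `raw_orders` and staging names; only the final table carries an explicit schedule:

sql
-- Upstream stage: refreshes only when a downstream dependent needs it
CREATE OR REPLACE DYNAMIC TABLE orders_cleaned
  TARGET_LAG = DOWNSTREAM
  WAREHOUSE = transform_wh
AS
  SELECT order_id, customer_id, amount, created_at
  FROM raw_orders
  WHERE amount IS NOT NULL;

-- Final stage: the only explicit freshness schedule in the pipeline
CREATE OR REPLACE DYNAMIC TABLE orders_daily_summary
  TARGET_LAG = '15 minutes'
  WAREHOUSE = transform_wh
AS
  SELECT customer_id,
         TO_DATE(created_at) AS order_date,
         SUM(amount) AS daily_total
  FROM orders_cleaned
  GROUP BY customer_id, TO_DATE(created_at);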

Target lag is a direct input to compute costs. A shorter target lag means more frequent refreshes, which means more warehouse compute consumed per hour. A target lag of one minute means Snowflake attempts a refresh roughly every 30 seconds to stay ahead of the lag window. A target lag of one hour means refreshes happen every 30 minutes or so. Storage costs are separate and come from storing the materialized results plus Time Travel and fail-safe data retention.
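Because refresh cadence drives spend, it is worth checking what refreshes actually consume. A sketch against the `ACCOUNT_USAGE` metering view, assuming refreshes run on a dedicated warehouse (recommended later in this post) so the numbers are attributable:

sql
-- Credits consumed by the refresh warehouse over the past week
-- (assumes transform_wh is dedicated to dynamic table refreshes)
SELECT warehouse_name,
       SUM(credits_used) AS credits_last_7d
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE warehouse_name = 'TRANSFORM_WH'
  AND start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name;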

You can alter a dynamic table after creation — change the target lag, change the warehouse, suspend or resume refreshes — using `ALTER DYNAMIC TABLE`. You cannot change the defined query after creation; to change the query, you drop and recreate:

sql
-- Change the target lag time
ALTER DYNAMIC TABLE orders_enriched SET TARGET_LAG = '10 minutes';

-- Suspend automatic refreshes
ALTER DYNAMIC TABLE orders_enriched SUSPEND;

-- Trigger a manual refresh
ALTER DYNAMIC TABLE orders_enriched REFRESH;

-- Resume scheduled refreshes
ALTER DYNAMIC TABLE orders_enriched RESUME;

Dynamic Tables vs Materialized Views vs Streams and Tasks

This is the comparison most teams come to Dynamic Tables from. Both Materialized Views and Dynamic Tables maintain derived state automatically. The differences sit in what they can express, how Snowflake maintains them, and how the query optimizer uses them.

Query complexity. Materialized Views in Snowflake have significant restrictions: they must be defined on a single base table, can’t contain joins, unions, or subqueries, can’t use non-deterministic functions or external functions, and can’t use window functions that depend on an ordering that changes as data arrives. These restrictions exist because Snowflake maintains materialized views incrementally against a single source, and the maintenance logic is only tractable for a narrow set of SQL patterns. Dynamic Tables support almost the full power of Snowflake SQL: multi-table joins and unions, window functions, and references to other dynamic tables for multi-level pipelines. This is the single biggest reason to choose Dynamic Tables over Materialized Views when the derived state you want requires anything more than a simple aggregation on one table.

Maintenance model. Materialized Views are maintained by a Snowflake background service that runs continuously and asynchronously. There is no `TARGET_LAG` parameter for materialized views — Snowflake decides when to refresh based on background-service scheduling, cluster availability, and load. Under heavy ingest, a materialized view can lag behind its base table by several minutes. It is not a real-time mechanism, even though it is continuously maintained. Dynamic Tables are maintained by scheduled refreshes that you control via target lag. You get an explicit contract on data freshness — the dynamic table is no more than `TARGET_LAG` behind the source data. The tradeoff is that you are now making an explicit choice between freshness and compute cost, which materialized views hide from you.

Query optimizer integration. The Snowflake query optimizer can automatically rewrite queries to use Materialized Views when a query against the base table can be answered more efficiently from the materialized view. You don’t have to reference the materialized view by name. This is one of the most useful properties of materialized views and a reason to prefer them for frequently-run aggregate queries. The query optimizer does not automatically rewrite queries to use Dynamic Tables — a dynamic table is a standalone object, you query it by name. Dynamic tables do not accelerate queries against their source tables; they are a separate serving layer.

Choose Materialized Views when you have a high-volume query pattern that runs the same aggregation against one large table, and you want Snowflake to transparently improve query performance whenever that pattern is detected. Choose Dynamic Tables when you are building a multi-stage transformation pipeline with joins, unions, or window functions, and you want a declarative alternative to streams and tasks.
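To make the contrast concrete, here is roughly what the Materialized View side of that decision looks like: a single-table aggregation over the `orders` table from the earlier example. (Materialized Views require Enterprise Edition.)

sql
-- A query shape Materialized Views can maintain: one base table, simple aggregates
CREATE OR REPLACE MATERIALIZED VIEW daily_order_totals AS
  SELECT customer_id,
         TO_DATE(created_at) AS order_date,
         COUNT(*) AS order_count,
         SUM(amount) AS total_amount
  FROM orders
  GROUP BY customer_id, TO_DATE(created_at);

-- Adding a join to customers would fail; that shape needs a dynamic table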

Before Dynamic Tables existed, the standard way to build a multi-level ELT pipeline in Snowflake was streams and tasks. A stream is a change tracking mechanism that records DML changes to a table. A task is a scheduled or triggered unit of work that runs SQL. The pattern: create a stream on each source table, write a task that consumes the stream and merges the changed data into a target table, chain tasks so downstream tables update after upstream tables, and monitor the entire thing. Dynamic Tables subsume this — instead of procedural code that defines how to process changes, you write declarative code that defines what the output should look like. Snowflake handles change data capture, figures out the dependency graph, and orchestrates refreshes automatically.

The tradeoff is flexibility: tasks can do arbitrary work (including stored procedures, external calls, dynamic SQL), while Dynamic Tables are limited to what a single SELECT statement can express. For pure transformation pipelines, Dynamic Tables are almost always the right choice now. For pipelines that need procedural logic or external integrations, tasks are still necessary.
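For comparison, a compressed sketch of the streams-and-tasks pattern that a single dynamic table definition replaces; object names are illustrative:

sql
-- 1. Track changes on the source
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- 2. A scheduled task that merges changed rows into the target
CREATE OR REPLACE TASK merge_orders_task
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  MERGE INTO orders_enriched_manual t
  USING (
    SELECT s.order_id, s.customer_id, c.customer_tier, s.amount, s.created_at
    FROM orders_stream s
    JOIN customers c ON s.customer_id = c.customer_id
  ) src
  ON t.order_id = src.order_id
  WHEN MATCHED THEN UPDATE SET t.customer_tier = src.customer_tier, t.amount = src.amount
  WHEN NOT MATCHED THEN INSERT (order_id, customer_id, customer_tier, amount, created_at)
    VALUES (src.order_id, src.customer_id, src.customer_tier, src.amount, src.created_at);

-- 3. Tasks are created suspended; enable explicitly
ALTER TASK merge_orders_task RESUME;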

Snowflake Hybrid Tables are a separate option worth understanding alongside this comparison. Hybrid Tables combine a row-store operational layer with Snowflake’s existing columnar analytics layer — they’re designed for hybrid transactional and analytical workloads (Snowflake Unistore) where you need both point lookups and analytics on the same table. Hybrid Tables are not a replacement for Dynamic Tables. Dynamic Tables materialize derived state from other tables; Hybrid Tables are the primary storage for transactional data. You might use a Hybrid Table as the source for a dynamic table — capturing transactional writes in the Hybrid Table, then materializing aggregates or joins into a dynamic table. Hybrid Tables, Iceberg tables, external tables, and traditional tables can all serve as source tables for dynamic tables.
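A sketch of that combination, with hypothetical table and column names: a Hybrid Table takes the transactional writes, and a dynamic table materializes an aggregate over it.

sql
-- Row-store operational table; Hybrid Tables require a primary key
CREATE HYBRID TABLE account_events (
  event_id NUMBER PRIMARY KEY,
  account_id NUMBER,
  amount NUMBER(18, 2),
  event_time TIMESTAMP_NTZ
);

-- Analytical rollup maintained from the transactional source
CREATE OR REPLACE DYNAMIC TABLE account_activity_rollup
  TARGET_LAG = '5 minutes'
  WAREHOUSE = transform_wh
AS
  SELECT account_id,
         COUNT(*) AS event_count,
         SUM(amount) AS total_amount
  FROM account_events
  GROUP BY account_id;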

| Dimension | Materialized Views | Dynamic Tables |
| --- | --- | --- |
| Query support | Single base table, no joins | Multi-table, joins, unions, window functions |
| Refresh control | Async background service | Scheduled via TARGET_LAG (min 1 min) |
| Query optimizer rewrite | Yes, automatic | No, referenced by name |
| Freshness contract | No explicit SLA | TARGET_LAG contract |
| Chaining | Not supported | Supported (downstream dynamic tables) |
| Best for | Accelerating common aggregations | Multi-stage ELT pipelines |

Limitations and Costs

Dynamic Tables solve real problems, but they have real limits. The most important to understand:

Refresh frequency floor. Target lag cannot go below 1 minute. If your use case needs sub-minute freshness, Dynamic Tables are not the mechanism.

No automatic query rewrite. The query optimizer does not redirect queries against source tables to the dynamic table. You must query the dynamic table by name. Different from Materialized Views.

Query definition is immutable. You cannot alter the defined query of an existing dynamic table. Schema or query changes require dropping and recreating, which triggers a full initial refresh against the source data.

Compute cost scales with refresh frequency. A 1-minute target lag on a heavy query consumes warehouse compute continuously. Without careful dedicated warehouse sizing, cost can become significant.

Incremental refresh isn’t always possible. Some query patterns require full refresh mode, which recomputes the entire table on each refresh. For large source tables, this can be expensive enough that Dynamic Tables stop being economical.

No cross-region replication. Dynamic Tables live in a single region and cannot be automatically replicated across regions the way Snowflake databases can.

Chaining complexity. While downstream dynamic tables simplify multi-stage pipelines, debugging a chain of them can be harder than debugging individual tasks because the refresh orchestration is more opaque.

For monitoring, the easiest path is the Snowsight UI — open a dynamic table and check the Refresh History tab for per-refresh detail, or use the Monitoring tab to see all dynamic tables at once. For programmatic monitoring, `SHOW DYNAMIC TABLES` and the Snowflake `INFORMATION_SCHEMA` views provide metadata including target lag, refresh mode, latest refresh status, and size.

For cost management, Snowflake recommends using a dedicated warehouse for dynamic table refreshes — it makes it easier to attribute cost per table and lets you size the warehouse independently from your interactive query workload. If you run all dynamic tables against a shared warehouse, you lose visibility into which specific tables are driving cost, and you risk refresh contention with interactive queries. Storage costs for dynamic tables include the materialized results themselves, Time Travel data retention, and fail-safe storage — for a dynamic table with a large footprint and frequent refreshes, Time Travel storage alone can exceed the base table storage.
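A sketch of the programmatic path described above, pulling recent refresh history for the `orders_enriched` table via the `INFORMATION_SCHEMA` table function:

sql
-- Metadata for dynamic tables in the current schema: target lag, refresh mode, state
SHOW DYNAMIC TABLES;

-- Recent refreshes for one table, most recent first
SELECT name, state, refresh_action, refresh_start_time, refresh_end_time
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY(NAME => 'ORDERS_ENRICHED'))
ORDER BY refresh_start_time DESC
LIMIT 10;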

When Dynamic Tables Work Well — And When They Don’t

Dynamic Tables are the right choice for a specific category of workload: multi-stage derived state that can tolerate minute-level or longer lag and that benefits from a declarative refresh model. Several use cases are a natural fit:

Multi-level data pipelines where raw data flows through cleansing, enrichment, aggregation, and modeling stages before landing in reporting tables — the classic medallion architecture pattern. Each stage becomes a dynamic table; downstream dynamic tables chain off upstream ones with downstream target lag, and the entire pipeline is maintained by the refresh service.

Cross-source transformations where the output combines data from multiple tables via joins and unions that materialized views can’t express. This is the dominant use case, and the place where Dynamic Tables clearly beat every older Snowflake mechanism.

ELT architectures where raw ingestion lands in source tables (often from Snowpipe or Snowpipe Streaming) and transformation happens inside Snowflake. Dynamic Tables replace the streams-and-tasks pattern with a much simpler definition.

Historical data backfills where you want a single definition that produces both the initial backfill and the ongoing incremental updates. The first dynamic table refresh computes the full result; subsequent refreshes use incremental processing.

Reporting tables and dashboards that need fresher data than nightly batch but don’t need sub-minute freshness. A 5-minute or 15-minute target lag is a common sweet spot.

There’s a class of workload where neither Dynamic Tables nor Materialized Views are an adequate solution, though, and it’s important to recognize when you’re looking at one. These are workloads where derived state must be available within the time budget of an automated decision — typically 100 to 500 milliseconds — and where real-time decisions commit before any correction is possible. Fraud scoring during card authorization. Per-transaction credit checks. AI agents making bounded-autonomy decisions against a shared account. Live personalization that must reflect current session state mid-checkout. In each case, the decision cannot wait for a 1-minute refresh cycle, and acting on stale derived state produces a wrong answer that can’t be undone.

Dynamic Tables have a 1-minute target lag floor. Even the best-case refresh cadence puts the dynamic table behind the source data by up to a minute. Materialized Views are maintained asynchronously with no explicit SLA, and under heavy ingest can lag even longer. Snapshot isolation on refresh ensures that when the refresh does complete, the result is consistent against a point-in-time snapshot — but that point-in-time is in the past by the time the decision engine queries it. The underlying issue is that both options are designed for serving derived state, not for decisioning on derived state. They are refresh mechanisms layered on top of a data warehouse architecture whose storage and compute model assumes queries happen after data settles. Sub-second decisioning inverts that assumption: the query happens while data is still changing, and the derived state must reflect changes that are actively occurring.

For workloads in this category, the answer is not a faster Snowflake refresh — the answer is a different kind of system. A real-time decisioning architecture maintains shared context — the aggregates, velocity counters, and derived state the decision engine needs — with sub-millisecond freshness and cross-system consistency guarantees that a scheduled refresh can’t provide. The mechanism: change data capture from source-of-truth databases feeds incremental materialized views maintained inside the transactional boundary of a single system — the same primitives Snowflake uses for Dynamic Tables, but applied at millisecond granularity rather than on a minute-floored refresh schedule, with point lookups, aggregations, and similarity search all served from one consistent snapshot. This is the context lake approach: treat derived state as first-class infrastructure maintained under the decision’s validity window, not as a batch-refreshed artifact of the analytics stack.

This doesn’t mean Snowflake goes away. Most organizations running real-time decisioning also run Snowflake for analytics, reporting, and batch training of the models that drive decisions. Dynamic Tables and Materialized Views are excellent for those workloads. They are simply the wrong abstraction for the decision itself.


Summary: Pick the Right Tool for the Freshness Window

Snowflake Dynamic Tables and Materialized Views solve the same abstract problem — automatic maintenance of derived state — but they’re designed for different workloads and operate on different maintenance models. Materialized Views are ideal when you have frequent aggregate queries against a single table and you want Snowflake to improve query performance transparently. Dynamic Tables are ideal when you’re building multi-stage ELT pipelines with joins, unions, and window functions, and you want an explicit freshness contract via target lag.

Neither is the right answer for sub-second real-time decisioning. Dynamic Tables have a 1-minute refresh floor; Materialized Views have no explicit SLA and can lag several minutes under load. Both are designed to serve derived state, not to support decisions that commit against derived state within milliseconds.

If your workload is analytics, reporting, or ELT — Dynamic Tables or Materialized Views will probably do what you need. If your workload is automated decisioning against shared state under concurrency, you need an architecture that treats data freshness as a correctness requirement rather than a performance optimization — the live context side of your stack, not the analytical side. Recognizing which category your workload falls into is the first step. The second is picking the right tool for the freshness window you actually have.

Snowflake · Dynamic Tables · Materialized Views · Change Data Capture · Data Pipelines · Real-Time Data · Data Engineering
Written by Boyd Stowe

Former Couchbase and IBM. Two decades helping enterprises adopt new database paradigms.

Ready to see Tacnode Context Lake in action?

Book a demo and discover how Tacnode can power your AI-native applications.
