Use the SET statement in a SQL script to configure aspects of Firebolt's system behavior. Each SET statement is a query in its own right and must be terminated with a semicolon (;). The SET statement cannot be included in other queries. This topic provides a list of available settings by function.
Setting via WITH
You can override settings by appending WITH (<setting_1_name> = <setting_1_value>, ...)
to the query. This lets you apply settings directly to specific queries without affecting the entire session.
Example
Instead of setting a value for the entire session with SET, you can apply it to a single query with WITH, as shown in the sketch below.
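A minimal sketch; the table name example_table and the use of max_result_rows are illustrative assumptions:

```sql
-- Session-level: applies to every subsequent query
SET max_result_rows = 10;
SELECT * FROM example_table;

-- Query-level: applies only to this query
SELECT * FROM example_table WITH (max_result_rows = 10);
```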
Supported Commands
The WITH clause is supported for the following commands:
Supported Settings
The following settings can be configured using the WITH clause and the SET command:
- timezone
- standard_conforming_strings
- max_result_rows
- statement_timeout
- cancel_query_on_connection_drop
- enable_result_cache
- enable_subresult_cache
- enable_scan_cache
- insert_sharding
- tablet_min_size_bytes and tablet_max_size_bytes
- cross_region_request_mode
- optimizer_mode
- enable_storage_statistics
- max_insert_threads
- enable_iceberg_partitioned_scan
Setting the time zone
Use this setting to specify the session time zone. Time zone names are taken from the Time Zone Database; you can see the list of tz database time zones here. For times in the future, the latest known rule for the given time zone is applied. Firebolt does not support time zone abbreviations, as they cannot account for daylight saving time transitions, and some abbreviations have meant different UTC offsets at different times. The default value of the timezone setting is UTC.
Syntax
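A sketch of the expected form, using a tz database name:

```sql
SET timezone = '<time_zone_name>';
```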
Example
The following code example demonstrates how setting the timezone parameter affects the interpretation and conversion of TIMESTAMPTZ values:
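A minimal sketch; the literal values are illustrative:

```sql
SET timezone = 'UTC';
-- The zone in the literal is converted to the session time zone (UTC) on output
SELECT TIMESTAMPTZ '2024-01-01 12:00:00 Europe/Berlin';

SET timezone = 'America/New_York';
-- A literal without an explicit zone is interpreted in the session time zone
SELECT TIMESTAMPTZ '2024-01-01 12:00:00';
```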
Enable parsing for literal strings
If set to true, strings are parsed without escaping, treating backslashes literally. By default, this setting is enabled.
Syntax
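A sketch of the expected form:

```sql
SET standard_conforming_strings = false;
```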
Example
The following code example demonstrates how setting standard_conforming_strings affects the interpretation of escape sequences in string literals:
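A minimal sketch of the behavior difference:

```sql
SET standard_conforming_strings = true;
-- Backslashes are treated literally: returns the two characters \ and n
SELECT '\n';

SET standard_conforming_strings = false;
-- Backslash escapes are parsed: returns a newline character
SELECT '\n';
```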
Statement timeout
Specifies the number of milliseconds a SQL statement is allowed to run. Any SQL statement or query exceeding the specified time is canceled. The default value of zero disables the timeout.
Syntax
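A sketch of the expected form:

```sql
SET statement_timeout = <number_of_milliseconds>;
```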
Example
The following SQL example sets the query timeout to three seconds:
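A sketch:

```sql
SET statement_timeout = 3000;  -- three seconds, expressed in milliseconds
```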
Limit the number of result rows
When set to a value greater than zero, this setting limits the number of rows returned by SELECT statements. The query is executed as if an additional LIMIT clause were added to it. A value of zero or less means that no limit is applied. By default, no limit to the number of result rows is applied.
Syntax
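A sketch of the expected form:

```sql
SET max_result_rows = <number>;
```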
Example
The following queries all return the same result. For the first query, no explicit settings are set:
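A minimal sketch; the table name example_table is an illustrative assumption:

```sql
-- No setting: an explicit LIMIT clause
SELECT * FROM example_table LIMIT 100;

-- Session-level setting
SET max_result_rows = 100;
SELECT * FROM example_table;

-- Query-level setting
SELECT * FROM example_table WITH (max_result_rows = 100);
```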
Query cancellation mode on connection drop
Specify how a query should behave when the HTTP connection to Firebolt is dropped, such as when the UI window is closed. You can choose between three modes:
- NONE: The query is not canceled on connection drop.
- ALL: The query is canceled on connection drop.
- TYPE_DEPENDENT: Only queries without side effects, such as SELECT, are canceled.
The default value is TYPE_DEPENDENT.
Syntax
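A sketch of the expected form; whether the mode value requires quoting is an assumption here:

```sql
SET cancel_query_on_connection_drop = 'TYPE_DEPENDENT';
```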
Example
The following code example demonstrates how to control query cancellation behavior when a connection drops, using the none, all, and type_dependent modes for SET cancel_query_on_connection_drop:
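A minimal sketch of the three modes:

```sql
-- Queries keep running if the connection drops
SET cancel_query_on_connection_drop = 'NONE';

-- All queries are canceled if the connection drops
SET cancel_query_on_connection_drop = 'ALL';

-- Only side-effect-free queries, such as SELECT, are canceled (the default)
SET cancel_query_on_connection_drop = 'TYPE_DEPENDENT';
```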
Query labeling and tagging
Use this option to label your query with a custom text. This simplifies query cancellation and retrieving the query status from system tables.
Syntax
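A sketch of the expected form:

```sql
SET query_label = '<label_text>';
```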
Example
The following code example assigns a query label to a query using SET query_label, allowing you to track it in information_schema.engine_running_queries and information_schema.engine_query_history. It then demonstrates how to retrieve the QUERY_ID for the labeled query and cancel it using CANCEL QUERY:
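A minimal sketch; the label text and table name are illustrative, and the exact CANCEL QUERY form shown is an assumption:

```sql
SET query_label = 'nightly_report';
SELECT * FROM example_table;

-- Retrieve the query ID for the labeled query
SELECT query_id
FROM information_schema.engine_running_queries
WHERE query_label = 'nightly_report';

-- Cancel it using the retrieved ID
CANCEL QUERY WHERE query_id = '<query_id>';
```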
Multi-cluster engine warmup
Use this option to distribute queries across all clusters of an engine, simplifying the process of initializing cached data to a consistent state across all clusters after a START ENGINE or ALTER ENGINE operation.
Warmup queries complete after they have run on all clusters of the engine. The queries return an empty result if they succeed on all clusters. If the query fails on any cluster, it returns an error. If multiple errors occur, only one error is returned.
Syntax
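The exact setting name is not shown in this extract; assuming a hypothetical boolean session setting named warmup, the form would be:

```sql
-- Hypothetical setting name; consult the Firebolt reference for the exact syntax
SET warmup = true;
```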
Example
The following code example activates the warmup mode so that the query runs on production_table using all clusters of an engine, and returns an empty result upon success:
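A sketch under the same assumption about the setting name; production_table comes from the surrounding text:

```sql
-- Hypothetical setting name (see above)
SET warmup = true;
SELECT COUNT(*) FROM production_table;
```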
Result cache
Set enable_result_cache to FALSE to disable the use of Firebolt's result cache, which is set to TRUE by default. Disabling result caching can be useful for benchmarking query performance. When enable_result_cache is disabled, resubmitting the same query recomputes the results rather than retrieving them from cache.
Syntax
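A sketch of the expected form:

```sql
SET enable_result_cache = false;
```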
Example
The following code example disables the result cache so that no previously cached results are used, and no new cache entries are written:
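A minimal sketch; the query itself is illustrative:

```sql
SELECT COUNT(*) FROM example_table WITH (enable_result_cache = false);
```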
Subresult cache
Firebolt implements advanced cross-query optimization that allows SQL queries to reuse intermediate query execution states from previous requests. Subresult caching operates at a semantic level, which allows Firebolt to understand and optimize queries based on the meaning and context of the data rather than solely on their syntax or structure. This capability allows Firebolt to optimize across different query patterns for improved efficiency. Set enable_subresult_cache to FALSE to disable Firebolt's subresult caching, which is set to TRUE by default.
Disabling subresult caching is generally not recommended, as it can negatively impact query performance, especially for complex workloads. For most benchmarking scenarios, disable the result cache instead, as described in the previous Result cache section. This approach affects only the final result caching while preserving the benefits of subresult optimizations.
Syntax
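A sketch of the expected form:

```sql
SET enable_subresult_cache = false;
```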
Example
The following code example disables the subresult cache so that no previously cached subresult is used and no new cache entries are written by this query:
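A minimal sketch; the query is illustrative:

```sql
SELECT COUNT(*) FROM example_table WITH (enable_subresult_cache = false);
```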
Setting enable_subresult_cache to FALSE disables the use of all cached subresults. In particular, it deactivates two caching mechanisms that normally speed up query runtimes: the MaybeCache operator, which includes the full result cache, and the hash-table cache used by the Join operator.
Scan cache
The scan cache caches all columns accessed by a query right after they have been read by the scan of the underlying table. No filters are applied and primary indexes are not used; the cached columns simply represent the entire data for those fields in the table. Because the cached columns reside in RAM, certain queries and workloads may benefit, for example many consecutive queries that access exactly the same fields but with changing filters. Set enable_scan_cache to TRUE to enable Firebolt's scan caching, which is set to FALSE by default. Note that RAM usage may be high when the scan cache is enabled, and the impact depends strongly on the workload.
Syntax
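A sketch of the expected form:

```sql
SET enable_scan_cache = true;
```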
Example
The following code example enables the scan cache. Once enabled, several queries access exactly the same columns but change the filters. The first query populates the cache; the following queries can use the cached columns:
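A minimal sketch; table and column names are illustrative:

```sql
SET enable_scan_cache = true;

-- Populates the scan cache for all accessed columns (a, b, and c)
SELECT a, b FROM example_table WHERE c > 10;

-- Same columns, different filter: can reuse the cached columns
SELECT a, b FROM example_table WHERE c > 20;
```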
Insert sharding
When working with partitioned tables, Firebolt enforces separation of data between tablets: rows of different partitions cannot be stored together in the same tablet. Consider a scenario where you're ingesting historical data for the last three years with date-based partitioning: this could result in around 1,000 tablets. For large datasets, a common practice is to scale out for ingestion. However, this creates a challenge: each date might be processed by multiple nodes. For example, 10 nodes can produce up to 10,000 tablets instead of 1,000. This not only slows down data persistence due to increased storage requests but can also degrade query performance. To address this, Firebolt provides the following controls for partitioned table ingestion:
- insert_sharding = 'shard_on_read': Use when the partition expression is based on $source_file_name. This allows Firebolt to determine the target partition before reading data and group files of the same partition on the same nodes. It is most effective when your source files are already organized by partition (for example, files named like data_20240101.csv and data_20240102.csv).
- insert_sharding = 'shuffle_on_write': Use when the partition expression is based on the data itself. In this case, data must be read first to determine partitioning. Just before insertion, and after any transformations, the data is re-shuffled for partition locality. Use this when your partition values come from the data content rather than from file names.
Notes
- This setting overrides default load-based sharding of input files. Be cautious as a single partition with heavy data could overload a single shard.
- This setting is only available via the WITH clause syntax, not with SET.
Syntax
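A sketch of the expected form, applied per query:

```sql
INSERT INTO <table> SELECT ... WITH (insert_sharding = 'shard_on_read');
```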
Example
The following examples demonstrate when to use each sharding option:
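A minimal sketch; the table names are illustrative assumptions:

```sql
-- Partition expression based on $source_file_name: group whole files per partition
INSERT INTO events
SELECT * FROM events_external
WITH (insert_sharding = 'shard_on_read');

-- Partition expression based on the data itself: re-shuffle rows just before writing
INSERT INTO events
SELECT * FROM events_external
WITH (insert_sharding = 'shuffle_on_write');
```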
Setting insert_sharding to shard_on_read changes the file distribution strategy across nodes: each date is processed by exactly one node, but only when the partition value can be determined from the source file name.
Target tablet size
During ingestion, Firebolt attempts to create optimally sized tablets to balance ingestion speed and future scan performance. Once all ingested data has been read, Firebolt prefers creating relatively small tablets to prioritize data persistence, leaving further optimization to Auto Vacuum. If this behavior isn't desirable, you can control it using the tablet_min_size_bytes and tablet_max_size_bytes settings:
- tablet_min_size_bytes: Controls the minimum size of tablets. When possible, data is compacted into tablets of at least this size; if there isn't enough data in the ingestion, smaller tablets are created nevertheless. Default: 1.5 GiB. Minimum: 1 GiB.
- tablet_max_size_bytes: Controls the maximum size of tablets. Must be greater than or equal to tablet_min_size_bytes. Default: 4 GiB.
Note
Larger target tablet sizes may require more memory during ingestion.
Example
The following example sets both the minimum and maximum tablet sizes to 4 GiB:
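A minimal sketch; the table names are illustrative (4 GiB = 4294967296 bytes):

```sql
INSERT INTO example_table
SELECT * FROM example_source
WITH (tablet_min_size_bytes = 4294967296, tablet_max_size_bytes = 4294967296);
```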
Setting tablet_min_size_bytes and tablet_max_size_bytes to 4 GiB ensures that larger tablets are created.
Access Cross-Region Data
By default, Firebolt restricts importing from and exporting to Amazon S3 buckets located outside your engine's default region to minimize latencies and network costs. For workloads requiring cross-region access, you can enable this functionality using the cross_region_request_mode setting. Firebolt offers three modes:
- disabled: The default setting. Any attempt to access a bucket in another region causes the query to fail.
- auto: Recommended for most scenarios when you need to access a bucket in a different region. Firebolt automatically infers each bucket's region using the Amazon HeadBucket API and routes requests to the appropriate regional endpoint. The region information is cached on the node for one hour. If you're using a role to authenticate to Amazon S3, it must include ListBuckets permissions. For Iceberg queries using vended credentials, the approach is different because these credentials typically do not expose HeadBucket permissions; in this case, Firebolt first attempts to resolve the region from the REST catalog and, if that is not possible, uses the pseudo-region aws-global.
- enforced: Use when you need to access a bucket in a different region and do not have the permissions required for auto mode. Firebolt uses the pseudo-region aws-global for every request. This may lead to higher latencies, since AWS needs to resolve the region internally for each request.
Notes
- Firebolt currently offers limited cross-region data transfer and will throttle requests if the limit is exceeded. The current limit is 100 GB per hour per node.
- This setting is only available via the WITH clause syntax, not with SET.
Syntax
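A sketch of the expected form; the COPY FROM statement and bucket path are illustrative assumptions:

```sql
COPY INTO example_table
FROM 's3://bucket-in-another-region/path/'
WITH (cross_region_request_mode = 'auto');
```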
Changing the optimizer mode
If set to user_guided, the optimizer disables all cost-based optimization rules. This ensures that the relative order of joins and aggregations in the resulting execution plan follows the syntactic order defined in the SELECT blocks of your SQL query. The default value is automatic.
Syntax
You can use both the SET command and the WITH clause.
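A sketch of both forms; whether the value requires quoting is an assumption:

```sql
SET optimizer_mode = user_guided;
-- or, per query:
SELECT ... WITH (optimizer_mode = user_guided);
```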
Example
The following examples demonstrate how to use the optimizer_mode setting to ensure that, in the resulting execution plan, the join between r and s is executed before the join between r and t.
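A minimal sketch; the tables r, s, and t and their join keys are illustrative:

```sql
SET optimizer_mode = user_guided;

-- With user_guided, the plan follows the syntactic order:
-- r is joined with s first, and the result is then joined with t.
SELECT *
FROM r
JOIN s ON r.id = s.id
JOIN t ON r.id = t.id;
```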
Using storage manager statistics in the optimizer
You can control the use of storage manager statistics in the optimizer with the enable_storage_statistics session parameter. By default, storage manager statistics are enabled (the default value is true). To disable them, set the value to false.
Syntax
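A sketch of the expected form:

```sql
SET enable_storage_statistics = false;
```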
Example
Consider the following example. When enable_storage_statistics is set to true, the optimizer uses the storage manager statistics to estimate the number of rows in the fact table ft and the dimension tables d1 and d2. The storage manager reports that d2 is much smaller (50 rows) than d1 (1,000 rows). The optimizer estimates that, even after applying the local predicates on these dimension tables, it is more efficient to join ft with d2 first and then with d1.
The join order in this plan will adapt to the size of the underlying tables.
For example, if the size of d2 becomes 400 rows, the optimizer will first join ft with d1 and then with d2.
However, when enable_storage_statistics is set to false, the optimizer uses hardcoded default estimates for the fact table ft (100,000,000 rows) and the dimension tables d1 and d2 (100,000 rows each). Because the two dimension tables are estimated to be equally large, the join order in this case is fully determined by the number of local predicates. Applying two local predicates on d1 is estimated to produce a smaller row count (20,000 rows) than applying one local predicate on d2 (70,000 rows).
This join order will remain stable even if the underlying table sizes change.
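A minimal sketch of the query shape described above; table and column names are illustrative:

```sql
SET enable_storage_statistics = true;

EXPLAIN
SELECT *
FROM ft, d1, d2
WHERE ft.k1 = d1.k
  AND ft.k2 = d2.k
  AND d1.a = 1 AND d1.b = 2  -- two local predicates on d1
  AND d2.c = 3;              -- one local predicate on d2
```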
Limiting max insert threads
Controls the maximum number of threads used by INSERT statements, limiting the degree of parallelism for tablet writing operations. This can reduce the memory footprint during ingestion.
Syntax
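A sketch of the expected form:

```sql
SET max_insert_threads = <number_of_threads>;
```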
Example
Limit insert write operations to a single thread so that only one tablet is written at a time, reducing memory usage by avoiding multiple concurrent buffers:
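A minimal sketch; table names are illustrative:

```sql
INSERT INTO example_table
SELECT * FROM example_source
WITH (max_insert_threads = 1);
```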
Using Iceberg partitioned scans for co-located joins and aggregations
The enable_iceberg_partitioned_scan setting controls whether to scan partitioned Iceberg tables in a partitioned manner on multi-node engines. When enabled, Firebolt assigns entire partitions of the tables to nodes, allowing joins and aggregations on the tables' partition keys to run fully locally. For joins, this requires compatible partitioning schemes on both sides of the join. Note that enabling this setting causes all partitioned Iceberg tables to be scanned in a partitioned manner, even if the query contains no joins or aggregations eligible for co-located execution.
Caution: when partition sizes are imbalanced or the number of partitions is smaller than the number of nodes, enabling this setting can cause significant load imbalance between the nodes, degrading query performance.
Example
In the following example, we join two Iceberg tables partitioned by month("timestamp"). The join is eligible for co-located execution because both tables are partitioned by the same key. Note how, with enable_iceberg_partitioned_scan set to true, the Iceberg file listing is shuffled by the tables' partition values, but no further shuffling is performed before the join. With the setting disabled, the Iceberg file listing is shuffled by a weighted partitioning of the file sizes to ensure that each node is assigned approximately the same amount of data to scan, ignoring the partition values. This requires an additional hash shuffle of both tables before the join, which can be expensive for very large tables. Enabling partitioned scans allows this shuffle to be avoided.
In a second example, an Iceberg table is partitioned by type. We aggregate on type and see that the optimizer uses partitioned scans to avoid shuffling the data.
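A sketch of the aggregation case; the table name is illustrative:

```sql
-- The table is partitioned by type, so the aggregation on type can be
-- computed locally on each node without shuffling.
SELECT type, COUNT(*)
FROM iceberg_events
GROUP BY type
WITH (enable_iceberg_partitioned_scan = true);
```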