The `EXPLAIN` feature analyzes and displays how Firebolt's query processing system runs your query. You can use this information to understand resource utilization and to identify opportunities to improve and optimize your query performance.
If you specify the `ANALYZE` option for `EXPLAIN`, Firebolt also collects detailed metrics about each operator during query runtime, including the amount of time spent on the operator and how much data it processes.
Parameter | Description |
---|---|
`option_name` | The name of an option. See below for a list of all available options. |
`option_value` | The value of the option. If no value is specified, the default is `TRUE`. |
`<statement>` | Any statement. |
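Assembled from the parameter table above, the general shape of the command is as follows (a sketch; square brackets denote optional parts):

```sql
EXPLAIN [( option_name [option_value] [, ...] )] <statement>
```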
You can change the output of `EXPLAIN` by specifying options. The following table lists all available options and their functionality:
Option Name | Option Values | Description |
---|---|---|
`LOGICAL` | `TRUE`, `FALSE` | Returns the optimized logical query plan. This is the default when no other option is specified. |
`PHYSICAL` | `TRUE`, `FALSE` | Returns an optimized physical query plan containing shuffle operators for queries on distributed engines, showing how work is distributed between the nodes of an engine. |
`ANALYZE` | `TRUE`, `FALSE` | Returns an optimized physical query plan annotated with metrics from query execution. For more information about these metrics, see Example with ANALYZE. |
`ALL` | `TRUE`, `FALSE` | Returns all of the previous `LOGICAL`, `PHYSICAL`, and `ANALYZE` plans. Use the following sample syntax: `EXPLAIN (ALL) <select statement>`. |
`STATISTICS` | `TRUE`, `FALSE` | Returns an annotated query plan that includes estimates from Firebolt's query optimizer. This option works with `LOGICAL`, `PHYSICAL`, `ANALYZE`, and `ALL`. Use the following sample syntax: `EXPLAIN (STATISTICS) <select statement>`. |
`FORMAT` | `TEXT`, `JSON` | Changes the result format of the `EXPLAIN` command. `TEXT` is optimized for humans and LLMs. `JSON` makes it easy to work with the plans programmatically. |
The plans returned are controlled by the `LOGICAL`, `PHYSICAL`, `ANALYZE`, and `ALL` options. If you need to view several plans at once, use the `ALL` option.
The following examples were run on Firebolt version 4.26.0, on a two-node M storage-optimized engine. The underlying dataset is TPC-H at scale factor 100; the `lineitem` table contains 77 GiB of uncompressed data. The first example shows an `EXPLAIN` statement for a `SELECT` query on the `lineitem` table.
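The exact example query is not reproduced here; a TPC-H-style aggregation of the following shape matches the plan steps discussed below (the specific columns and predicate are illustrative assumptions):

```sql
EXPLAIN
SELECT
    l_returnflag,
    l_linestatus,
    SUM(l_quantity) AS sum_qty
FROM lineitem
WHERE l_shipdate <= DATE '1998-09-02'
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus;
```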
By default, `EXPLAIN` returns its output in the `TEXT` format. We constantly optimize how easy the `TEXT` format is to understand for humans and LLMs. When writing code against `EXPLAIN` results, use the much more stable `JSON` format. The `EXPLAIN` output shows the sequence of operations that Firebolt's engine will use to run the query:
1. Scan the `lineitem` table.
2. Filter the rows of the `lineitem` table by the `WHERE` conditions.
3. Aggregate the rows according to the `GROUP BY` clause.
4. Sort the result according to the `ORDER BY` clause.
5. Project the columns listed in the `SELECT` clause.
clause.(ALL)
option, which is the same as specifying (ALL TRUE)
or (LOGICAL, PHYSICAL, ANALYZE)
.
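For example, the following three statements are equivalent (the `SELECT` itself is a placeholder):

```sql
EXPLAIN (ALL) SELECT count(*) FROM lineitem;
EXPLAIN (ALL TRUE) SELECT count(*) FROM lineitem;
EXPLAIN (LOGICAL, PHYSICAL, ANALYZE) SELECT count(*) FROM lineitem;
```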
The query returns three columns: `explain_logical`, `explain_physical`, and `explain_analyze`. Each column can be toggled on or off using the corresponding options. For example, to display only `explain_logical` and `explain_analyze`, you can specify either `(LOGICAL, ANALYZE)` or `(ALL, PHYSICAL FALSE)`.
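Both of the following spellings return only the `explain_logical` and `explain_analyze` columns (placeholder query):

```sql
EXPLAIN (LOGICAL, ANALYZE) SELECT count(*) FROM lineitem;
EXPLAIN (ALL, PHYSICAL FALSE) SELECT count(*) FROM lineitem;
```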
With the `ANALYZE` option enabled, this query took 2.5 seconds to run. Without the `ANALYZE` option, the query is only planned and returns the result almost immediately.
The `LOGICAL` plan shows the optimized logical query plan of the `SELECT` statement. Additionally, it shows the output row type for each operator. For operators with multiple input operators, such as the `Join` operator, the input columns are concatenated. For example, if the first input produces three columns and the second input produces four columns, the join produces seven columns.

The `PHYSICAL` plan shows a detailed, optimized physical plan that includes `Shuffle` operators for distributed query execution, offering insight into how work is distributed across the engine's nodes.
A `Shuffle` operator redistributes data across engine nodes. For example, scans of `FACT` tables, like operator `[10] [TableFuncScan]`, and the operators following them are automatically distributed across all nodes of an engine. A `Shuffle` operator of type `Hash` indicates that the workload of the following operators is distributed across all nodes, while a `Shuffle` operator of type `Gather` consolidates data onto a single node of the engine. In the previous example, only operator `[2] [SortMerge]` runs on a single node, merging the sorted partial query results from all other nodes.
The `MaybeCache` operator, at the top of the plan, caches its input in main memory on a best-effort basis for future runs of the same (sub-)query. It considers the entire plan leading to its input, as well as the state of the scanned tables. If the data in a table changes, the `MaybeCache` operator knows not to read an outdated cached entry. It may also skip caching large results or prioritize caching results that offer greater time savings. The `MaybeCache` operator may appear in different places of a plan, but in this query it caches the full query result. For more information about the `MaybeCache` operator, see Subresult Reuse in Firebolt.

The `ANALYZE` plan has the same shape as the `PHYSICAL` plan, but is annotated with metrics collected during query execution. For each operator, it shows the number of rows it produced in `output_cardinality`, and how much time was spent on that operator in `thread_time` and `cpu_time`. The `thread_time` is the sum of the wall-clock time that threads spent working on the operator across all nodes, while `cpu_time` is the total time those threads were scheduled on a CPU core. A significantly smaller `cpu_time` compared to `thread_time` suggests that the operator is either waiting on input or output operations, or that the engine is handling multiple queries simultaneously.
For `TableFuncScan` operators that read from managed tables, additional metrics may be displayed:

- `granules` shows the number of output granules (not pruned) out of the total number of granules that were considered. This helps you understand the effectiveness of primary index pruning in reducing the amount of data scanned during query execution. In the example above, `granules: 80577/80577` indicates that no granules were pruned by the primary index, meaning all 80,577 granules were scanned.
- `cardinality_before_prewhere` measures the number of rows selected for scanning after tablet and granule pruning. After this static pruning step, prewhere optimizes scanning by reading the columns with the most selective predicates first. By evaluating these predicates early, the scan can perform a second pruning pass, excluding additional granules from being read for the remaining columns. If the ratio `cardinality_before_prewhere / output_cardinality` is large, prewhere was effective in eliminating additional rows before the main scan.

The `cpu_time` is almost as high as the `thread_time` on the `TableFuncScan` node. This indicates that the data of the `lineitem` table was in the SSD cache. On a cold query run, where data must be retrieved from an Amazon S3 bucket, `thread_time` is considerably higher than `cpu_time` for the same operator.
The following example uses the `STATISTICS` option to display the logical and physical query plans for the `SELECT` statement, annotated with estimated statistics from Firebolt's query optimizer:
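A statement of the following shape produces such annotated plans (the query and predicate are illustrative):

```sql
EXPLAIN (LOGICAL, PHYSICAL, STATISTICS)
SELECT count(*)
FROM lineitem
WHERE l_shipdate > DATE '1998-01-01';
```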
Each plan node is annotated with a `[Logical Profile]`, indicating the `source` of the estimation for that node as well as the row count estimate.
Source | Description |
---|---|
hardcoded | This node received hard-coded defaults because no additional information is available, a scenario that typically occurs only with external tables. |
estimated | This node’s profile was computed from its child nodes’ profiles, based on assumptions about the data that may not always hold true. These assumptions can introduce inaccuracies, which may be further amplified. |
metadata | This node’s profile was constructed from available metadata, ensuring that the information is either precise or highly accurate. |
history | This node’s profile was obtained from a previously recorded run, and is accurate unless data has changed since the recording. |
learned | This node’s profile was predicted using machine learning. |
Use the `FORMAT` option to consume the query telemetry in different ways. The `TEXT` format makes the telemetry easy to understand for humans and LLMs. As options are fully composable, it can be combined with other options such as `ANALYZE`.
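For example, the following sketch combines `ANALYZE` with an explicit `FORMAT` (the plan discussed below came from a more complex query involving a materialized CTE and a join; this query is a placeholder):

```sql
EXPLAIN (ANALYZE, FORMAT TEXT)
SELECT l_returnflag, count(*)
FROM lineitem
GROUP BY l_returnflag;
```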
The most expensive operation in this plan is the `INNER JOIN` (776ms thread time). After that, the `concat` projection is the second most expensive operation (286ms thread time). The `MATERIALIZED` CTE led to the planner adding the `Loopback` shuffle. This way, the result of `Node [6]` can be consumed multiple times. The `Recurring Node` at the bottom of the plan shows the second time `mat_cte` gets consumed. The plan applies a `LIMIT` on top. Only around 590k rows make it out of the projection in `Node [5]`.
Working with the `JSON` telemetry is easy:

- The `JSON` format contains telemetry for all operators under the `operators` array.
- Each operator is identified by its `operator_id`.
- The plan forms a DAG through the `input_ids` attached to each node. The DAG's root operator id is stored in the top-level `root_operator_id`.
- Operator-specific data is stored under `annotations`. When `ANALYZE` is selected as an option, these contain `execution_metrics`.
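To obtain the machine-readable plan, request `FORMAT JSON`, optionally together with `ANALYZE` (placeholder query):

```sql
EXPLAIN (ANALYZE, FORMAT JSON)
SELECT count(*) FROM lineitem;
```

The returned document can then be traversed programmatically, starting at `root_operator_id` and following each operator's `input_ids`.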