Prerequisites
Before you begin, ensure that you have the following installed and configured:
- A Kubernetes cluster (v1.19+)
- `kubectl` command-line tool configured to access your cluster
- `helm` (v3+) installed on your local machine
- Kubernetes nodes with a memlock limit of at least 8GB (you can check with `cat /proc/self/limits | grep -F 'Max locked memory'`), or the possibility to run Core with the helper root container (`memlockSetup=true` in Helm values)

If your nodes already meet the memlock requirement, you can set `memlockSetup=false` so that there are no root containers in your deployments.
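The memlock check can be run on any Linux machine; to inspect an actual Kubernetes node, one option (a sketch, `<node-name>` is a placeholder) is a temporary debug pod:

```shell
# Show the max locked memory limit for the current process (Linux).
# To check a Kubernetes node, you could run this inside a debug pod, e.g.:
#   kubectl debug node/<node-name> -it --image=busybox
cat /proc/self/limits | grep -F 'Max locked memory'
```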
1: Create a Dedicated Namespace
To isolate Firebolt Core resources from other workloads in your cluster, create a dedicated Kubernetes namespace, for example with `kubectl create namespace firebolt-core`.

2: Customize Helm Chart Values
The deployment is configured via Helm chart values. You can modify these by editing a `values.yaml` file or by setting them directly via the `--set` flag.
For example, to deploy a 3-node cluster, ensure the `nodesCount` value is set to 3; memory and storage allocation should also be increased for any serious workload you plan to run.
It is advised not to use the default `preview-rc` value for `image.tag`; instead, pick a pinned released version from the GHCR repository.
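For example, a minimal `values.yaml` sketch (only `nodesCount` and `image.tag` are named in this guide; other value names vary by chart version, so check the chart README):

```yaml
nodesCount: 3                 # run a 3-node cluster
image:
  tag: <released-version>     # pin a released version instead of the default preview-rc
```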
Refer to the Helm Chart README for the complete list of configurable parameters, including resource limits, storage options, and networking settings.
One of the most important values is `useStatefulSet`: when explicitly set to `false`, the Helm chart uses one Kubernetes Deployment per node, which facilitates operations like image updates and scaling up/down.
When `useStatefulSet=true` (the default), the Helm chart creates a headless service alongside the StatefulSet. This gives each pod a stable, fully-qualified domain name (FQDN) resolved via CoreDNS, the same service discovery pattern used by most databases on Kubernetes (PostgreSQL, Cassandra, Kafka, and others). When `useStatefulSet=false`, each node runs as a separate Deployment and is addressed through its own ClusterIP service.
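With the StatefulSet layout, a pod's FQDN follows the standard headless-service pattern. A sketch, assuming a release called `core-demo` in the `firebolt-core` namespace and a headless service named after the StatefulSet:

```shell
# Pattern: <pod>.<headless-service>.<namespace>.svc.cluster.local
# e.g. resolve node 0 from inside another pod in the cluster:
nslookup core-demo-firebolt-core-0.core-demo-firebolt-core.firebolt-core.svc.cluster.local
```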
3: Install the Helm Chart
Once your values are configured, install the chart into the `firebolt-core` namespace.
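A sketch of the install command (`<chart-ref>` is a placeholder; substitute the location you fetch the chart from):

```shell
helm install core-demo <chart-ref> \
  --namespace firebolt-core \
  -f values.yaml
```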
Instead of `core-demo` you can use any release name you prefer; it is used as a prefix for the generated Kubernetes resources. Alternatively, specify the `--generate-name` flag to let Helm generate a release name automatically.
4: Verify the Deployment
After installation, verify that the Firebolt Core pods are up and running; the number of pods should match your `nodesCount` value.
If any pods are not in the `Running` state, check their logs and events for troubleshooting.
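For example (the pod name shown is illustrative):

```shell
# All pods should be Running; expect one pod per node (nodesCount)
kubectl get pods -n firebolt-core

# Inspect a problematic pod
kubectl logs core-demo-firebolt-core-0 -n firebolt-core
kubectl describe pod core-demo-firebolt-core-0 -n firebolt-core
```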
5: Querying locally
You can interact with Firebolt Core locally by port-forwarding the HTTP interface port (default: `3473`) from one of the pods to your local machine. This enables you to send SQL queries directly via curl without exposing the service externally.
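For example (the namespace is assumed to be `firebolt-core`):

```shell
kubectl port-forward core-demo-firebolt-core-0 3473:3473 -n firebolt-core
```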
Replace `core-demo-firebolt-core-0` with the name of one of the running Firebolt Core pods.
Once forwarded, you can issue SQL queries using curl.
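A minimal sketch, assuming the port-forward from the previous step (the exact request shape may vary with your Core version):

```shell
curl -s "http://localhost:3473/" --data-binary "SELECT 1"
```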
6: Operations
Monitor IOPS usage on storage volumes to avoid slowdowns and latencies. For a full discussion of failure modes and recovery behavior, see Fault tolerance.

Volume backup
No built-in backup mechanism is provided. The node 0 persistent volume is the only volume that always requires backup: it holds the SQLite metadata store containing the entire database catalog (schemas, table definitions, indexes).

When object storage is configured, data volumes on non-zero nodes hold only a local cache of tablet data; the authoritative copy is in object storage, so those volumes do not need to be backed up separately. When object storage is not configured, data volumes on non-zero nodes are the only copy of their tablets. In this case, back up all node volumes, or ensure data can be fully re-ingested from the original source if a volume is lost.

It is strongly recommended to configure regular snapshots or backups of the node 0 persistent volume using your storage infrastructure or Kubernetes-native tooling.

A future version of Firebolt Core will manage metadata through PostgreSQL, which has its own backup and recovery ecosystem. At that point, dedicated volume-level backups for metadata will no longer be necessary.
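As one Kubernetes-native option, assuming your CSI driver supports volume snapshots (the snapshot class and PVC names below are illustrative), a VolumeSnapshot of the node 0 volume could look like:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: core-node0-metadata-backup
  namespace: firebolt-core
spec:
  volumeSnapshotClassName: csi-snapclass         # depends on your CSI driver
  source:
    persistentVolumeClaimName: <node-0-pvc-name> # find it with: kubectl get pvc -n firebolt-core
```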
Scaling up/down
When object storage is configured, scaling does not risk data loss: all table data remains in the object store. Update `nodesCount` in your Helm values and run `helm upgrade` (requires `useStatefulSet=false`); nodes restart and rehydrate from object storage.
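A sketch of the scaling operation (the release name and `<chart-ref>` are placeholders):

```shell
helm upgrade core-demo <chart-ref> \
  -n firebolt-core \
  --set useStatefulSet=false \
  --set nodesCount=5
```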
When object storage is not configured, changing the number of nodes is only safe for clusters that do not use locally managed tables. In any other case, you must erase the data volumes during the scaling operation; otherwise data inconsistency will occur.
Changing resources
You can run `helm upgrade` to apply changes to your cluster after changing resources like memory or CPU allocation.
Note that it is not possible to resize storage volumes this way.
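For example, after editing resource-related values in `values.yaml` (the release name and `<chart-ref>` are placeholders):

```shell
helm upgrade core-demo <chart-ref> -n firebolt-core -f values.yaml
```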
Updating Firebolt Core version
After changing the `image.tag` value to a more recent pinned version, you can run `helm upgrade` to automatically roll out the change to your Firebolt Core cluster.
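A sketch of the upgrade (the release name and `<chart-ref>` are placeholders):

```shell
helm upgrade core-demo <chart-ref> \
  -n firebolt-core \
  --set image.tag=<released-version>
```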
When `useStatefulSet=true`, it will additionally be necessary to delete all pods so that they are recreated with the new image.
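For example:

```shell
kubectl delete pods -n firebolt-core -l app.kubernetes.io/name=firebolt-core
```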
The pods are selected via the label `app.kubernetes.io/name=firebolt-core` in the example above.
Removing the deployment
You can use `helm uninstall` to remove the deployment. When `useStatefulSet=true`, the created PVCs (which contain data/metadata) will be left behind; you can list them with `kubectl get pvc`.
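A sketch of the commands involved (the release name `core-demo` and namespace `firebolt-core` are illustrative):

```shell
# Remove the Helm release
helm uninstall core-demo -n firebolt-core

# With useStatefulSet=true, list the PVCs that were left behind
kubectl get pvc -n firebolt-core
```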
To remove them, use `kubectl delete`, making sure you delete only those whose name prefix matches your Helm release.
NOTE: this is a non-reversible operation; do not delete the PVCs if you still need the data/metadata of your Core cluster.
Additional Resources
- Kubernetes Namespaces documentation
- Helm Quickstart Guide