Integrate Estuary Flow with Firebolt
Estuary Flow is a real-time data integration platform designed to streamline the movement and transformation of data between diverse sources and destinations. It provides an event-driven architecture and a user-friendly interface for building pipelines with minimal effort. You can use Flow to set up pipelines to load data from various sources, such as cloud storage and databases, into Firebolt’s cloud data warehouse for low-latency analytics.
This guide shows you how to set up a Flow pipeline that automatically moves data from your Amazon S3 bucket to your Firebolt database using the Estuary Flow user interface (UI). You must have access to an Estuary Flow account, an Amazon S3 bucket, and a Firebolt service account.
Topics:
- Prerequisites
- Configure your Estuary Flow source
- Configure your Estuary Flow destination
- Monitor your materialization
- Validate your materialization
- Additional resources
Prerequisites
- Estuary Flow account – You must have access to an active Estuary Flow account. If you do not have access, you can sign up with Estuary.
- Amazon S3 bucket – You must have access to the following:
- An AWS Access Key ID and AWS Secret Access Key for an Amazon S3 bucket.
- The name and path to an Amazon S3 bucket that contains your data.
- Firebolt service account – You must have access to the following:
- An organization in Firebolt. If you don’t have access, you can create an organization.
- A Firebolt database and engine. If you don’t have access, you can create a database and create an engine.
- A Firebolt service account, which is used for programmatic access, along with its service account ID and secret. If you don’t have access, you can create a service account.
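Service accounts can also be created with SQL from the Firebolt Workspace instead of the UI. The following is a hedged sketch only; verify the exact DDL and the secret-generation call against Firebolt’s current SQL reference, and note that `estuary_sa` is an illustrative name:

```sql
-- Create a service account for programmatic access (name and description are illustrative)
CREATE SERVICE ACCOUNT estuary_sa WITH DESCRIPTION = 'Estuary Flow materialization';

-- Generate the service account secret (assumed helper; confirm in Firebolt's documentation)
CALL fb_GENERATESERVICEACCOUNTKEY('estuary_sa');
```

Record the returned ID and secret; you enter them later as the Client ID and Client Secret of the Firebolt materialization.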
Configure your Estuary Flow source
To set up an Estuary Flow pipeline that automatically moves data from your Amazon S3 bucket, you must create a capture that defines how and where data should be collected. Create a capture for the Estuary Flow source as follows:
- Sign in to your Estuary Flow Dashboard.
- Select Sources from the left navigation pane.
- In the Sources window, select + NEW CAPTURE.
- From the list of available connectors, navigate to Amazon S3, and select Capture.
- Under Capture Details, enter a descriptive name for your capture in the text box under Name.
- Under Endpoint Config, enter the following:
- AWS Access Key ID – The AWS access key ID for credentials that can read the Amazon S3 bucket containing your data.
- AWS Secret Access Key – The AWS secret access key associated with those credentials.
- AWS Region – The AWS region that contains your Amazon S3 bucket. For example, `us-east-1`.
- Bucket – The name of your Amazon S3 bucket. For example, `firebolt-publishing-public`.
- Prefix (Optional) – A folder or key prefix that restricts the data to a specific path within the bucket. An example prefix structure follows: `/help_center_assets/firebolt_sample_dataset/levels.csv`.
- Match Keys (Optional) – A filter that includes only specific object keys under the prefix, narrowing the capture’s scope.
- Select the NEXT button in the upper-right corner of the page.
- Test and save your connection as follows:
- Select TEST in the upper-right corner of the page. Estuary will run a test for your capture and display Success if it completes successfully.
- Select CLOSE in the bottom-right corner of the page.
- Select the SAVE AND PUBLISH button in the upper-right corner of the page. Estuary will test, save, and publish your capture and display Success if it completes successfully.
- Select CLOSE in the bottom-right corner of the page.
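If you manage specifications with Estuary’s flowctl CLI rather than the UI, the same capture can be expressed as a YAML spec. The sketch below is an assumption-heavy illustration: the connector image path, the config field names (`awsAccessKeyId`, `region`, `bucket`, `prefix`), and the binding shape should all be checked against the current source-s3 connector reference, and `acmeCo/...` names are placeholders:

```yaml
captures:
  # Namespace and capture name are illustrative placeholders.
  acmeCo/s3-levels-capture:
    endpoint:
      connector:
        image: ghcr.io/estuary/source-s3:dev   # assumed image path
        config:
          awsAccessKeyId: AKIA...              # redacted placeholder credentials
          awsSecretAccessKey: "..."
          region: us-east-1
          bucket: firebolt-publishing-public
          prefix: help_center_assets/firebolt_sample_dataset/
    bindings:
      # Binding shape is an assumption; the UI generates this for you.
      - resource:
          stream: firebolt-publishing-public/help_center_assets
        target: acmeCo/levels
```

The UI flow in the steps above produces an equivalent spec, which you can inspect later on the capture’s SPEC tab.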
Configure your Estuary Flow destination
To set up an Estuary Flow pipeline that automatically moves data from your Amazon S3 bucket, you must create a materialization that defines how the data should appear in the destination system, including any schema or transformation logic. Create a materialization for the Estuary Flow destination as follows:
- Select Destinations from the left navigation pane.
- Select the + NEW MATERIALIZATION button in the upper-left corner of the page.
- Navigate to the Firebolt connector and select Materialization.
- Under Materialization Details, enter a descriptive name for your materialization in the text box under Name.
- Under Endpoint Config, enter the following:
- Client ID – The service account ID for your Firebolt service account.
- Client Secret – The secret for your Firebolt service account.
- Account Name – The name of the Firebolt account that contains your database and engine.
- Database – The name of the Firebolt database where you want to load your data. For example, `my-database`.
- Engine Name – The name of the Firebolt engine that runs the queries. For example, `my-engine-name`.
- S3 Bucket – The name of the Amazon S3 bucket that stores temporary intermediate files related to the operation of the external table. For example, `my-bucket`.
- S3 Prefix (Optional) – A folder or key prefix that restricts the temporary files to a specific path within the bucket. For example, `temp_files/`.
- AWS Key ID – The access key ID for the AWS credentials linked to the Amazon S3 bucket used for temporary file storage.
- AWS Secret Key – The AWS secret access key associated with the Amazon S3 bucket used to store temporary files.
- AWS Region – The AWS region of your Amazon S3 bucket. For example, `us-east-1`.
- Select the NEXT button in the upper-right corner of the page.
- Under Source Collections, do the following:
- Select Source From Capture.
- In the Captures window, select the checkbox next to the Amazon S3 source you specified when you configured your Estuary Flow source.
- Select the CONTINUE button in the bottom-right corner of the page.
- Verify that the Table name and type in the CONFIG tab under Resource Configuration are correct, and update if necessary.
- (Optional) Select Refresh next to Field Selection to preview the fields, their types, and the actions that will be written to Firebolt.
- Test and save your materialization as follows:
- Select the TEST button in the upper-right corner of the page. Estuary will run a test for your materialization and display Success if it completes successfully.
- Select CLOSE in the bottom-right corner of the page.
- Select the SAVE AND PUBLISH button in the upper-right corner of the page. Estuary will test, save, and publish your materialization and display Success if it completes successfully.
- Select CLOSE in the bottom-right corner of the page.
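As with the capture, the materialization can also be written as a YAML spec for use with flowctl. This is a minimal sketch under assumptions: the connector image path, the snake_case config keys, and the `table_type` values mirror Estuary’s materialize-firebolt connector as best understood, and should be verified against its current reference before use:

```yaml
materializations:
  # Namespace and name are illustrative placeholders.
  acmeCo/firebolt-levels:
    endpoint:
      connector:
        image: ghcr.io/estuary/materialize-firebolt:dev  # assumed image path
        config:
          client_id: <service-account-id>
          client_secret: <service-account-secret>
          account_name: my-account
          database: my-database
          engine_name: my-engine-name
          s3_bucket: my-bucket
          s3_prefix: temp_files/
          aws_key_id: AKIA...          # redacted placeholder credentials
          aws_secret_key: "..."
          aws_region: us-east-1
    bindings:
      # Binding shape is an assumption; the UI generates this for you.
      - resource:
          table: games
          table_type: fact
        source: acmeCo/levels
```

The Resource Configuration values you verified in the UI (table name and type) correspond to the `resource` block of each binding.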
Monitor your materialization
You can monitor your new data pipeline in Estuary Flow’s dashboard as follows:
- Select Destinations from the left navigation pane.
- Select your newly created materialization to view a dashboard with the following tabs:
- OVERVIEW – Provides a high-level summary of the materialization, including throughput over time.
- SPEC – Displays the configuration and specification of the materialization, including schema mapping from source to destination, the destination configuration, and any filters or constraints on the materialized data.
- LOGS – Provides records of materialization activity, including success and failure events, messages, and errors.
Ensure that your data is being ingested and transferred as expected.
Validate your materialization
You can validate that your data has arrived at Firebolt as follows:
- Log in to the Firebolt Workspace.
- Select the Develop icon from the left navigation pane.
- In the Script Editor, run a query on the table that you specified as an Estuary Flow destination to confirm the transfer of data as follows:
- Select the name of the database that you specified as your Estuary Flow destination from the drop-down list next to Databases.
- Enter a script in the script editor to query the table that you specified as an Estuary Flow destination. The following example returns the contents of all rows and all columns from the `games` table: `SELECT * FROM games`.
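For larger tables, a couple of lighter checks can confirm the load without scanning every row. The `games` table name comes from the example above; substitute your own destination table:

```sql
-- Count the rows materialized so far
SELECT COUNT(*) FROM games;

-- Inspect a small sample of the loaded data
SELECT * FROM games LIMIT 10;
```

If the row count grows as the capture ingests new objects, the pipeline is flowing end to end.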
You’ve successfully set up an Estuary Flow pipeline to move data from an Amazon S3 source to a Firebolt destination. Next, explore the following resources to continue expanding your knowledge base.
Additional resources
- Explore the core concepts of Estuary Flow.
- Access tutorials for Estuary Flow including a tutorial on data transformation.
- Learn more about Estuary Flow’s command line interface.