Invokes an Amazon Bedrock model and returns the raw response payload as a JSON string (TEXT). To use the function, provide the model identifier, the request body as serialized JSON, and a LOCATION that holds AWS credentials. For setup guidance and end-to-end examples, see Getting started with AI.

Syntax

AWS_BEDROCK_AI_QUERY(<model>, <request>, <location> [, <null_on_error>])

Parameters

| Parameter | Description | Supported input types |
| --- | --- | --- |
| `<model>` | The model to invoke. Use a Bedrock model identifier (for example, `'amazon.nova-lite-v1:0'` or `'meta.llama3-3-70b-instruct-v1:0'`). The value is forwarded to Bedrock without validation. | TEXT |
| `<request>` | The request body as a serialized JSON string. The structure and fields must follow the syntax for the selected model. | TEXT |
| `<location>` | The name of the LOCATION to use for AWS credentials. Must be a literal constant. See CREATE LOCATION (Amazon Bedrock). | TEXT |
| `<null_on_error>` | Optional. Whether to return NULL instead of raising an error when a Bedrock invocation error occurs. Default: FALSE. Must be a literal constant. | BOOL |
Make sure the request JSON matches the syntax required by the model you select. See the Amazon Bedrock model parameters documentation for details.
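The effect of `<null_on_error>` can be sketched as follows. This is a hedged example that assumes a LOCATION named `my_bedrock_location` already exists (see the examples below); the invalid model ID is intentional.

```sql
-- Sketch: with <null_on_error> set to TRUE, a Bedrock invocation error
-- (here triggered by a nonexistent model ID) yields NULL instead of
-- failing the whole query.
SELECT AWS_BEDROCK_AI_QUERY(
    'amazon.nonexistent-model-v0:0',  -- intentionally invalid model ID
    $${"schemaVersion": "messages-v1", "messages": [{"role": "user", "content": [{"text": "Hello"}]}]}$$,
    'my_bedrock_location',
    TRUE
) AS result;  -- result is NULL rather than an error
```

With the default of FALSE, the same statement would fail the query instead.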

Return Type

TEXT
  • Returns the raw Bedrock response payload as a JSON string.
  • If <request> is NULL, the function returns NULL.
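A minimal sketch of the NULL-propagation behavior described above, again assuming a LOCATION named `my_bedrock_location` exists:

```sql
-- A NULL request body propagates: the function returns NULL.
SELECT AWS_BEDROCK_AI_QUERY(
    'amazon.nova-micro-v1:0',
    NULL,
    'my_bedrock_location'
) IS NULL AS request_was_null;  -- TRUE
```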

LLM Token Budget

The daily LLM token budget for each account is governed by the ALTER ACCOUNT SET LLM_TOKEN_BUDGET command. If your account exceeds its allotted token budget, invocations of AWS_BEDROCK_AI_QUERY fail until the budget is increased or the daily limit resets. The current limit and daily usage of the LLM token budget can be viewed in information_schema.quotas.

Token counting is done on a best-effort basis. Some models provide the token count in the response; for those that don't, Firebolt estimates it. Supported models:
| Model ID substring | Token count support |
| --- | --- |
| `amazon.nova` | Accurate |
| `amazon.titan` | Accurate |
| `anthropic.claude` | Accurate |
| `cohere.command` | Estimated |
| `cohere.command-r` | Accurate |
| `deepseek` | Estimated |
| `meta.llama` | Accurate |
| `mistral` | Estimated |
LLM token budget accounting is not available in Firebolt Core.
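For reference, raising the budget uses the command mentioned above. The value shown is illustrative only; consult the ALTER ACCOUNT documentation for the full syntax and valid ranges.

```sql
-- Illustrative value; see ALTER ACCOUNT for the exact syntax.
ALTER ACCOUNT SET LLM_TOKEN_BUDGET = 1000000;
```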

Examples

Create a LOCATION with role ARN

CREATE LOCATION my_bedrock_location WITH
    SOURCE = 'AMAZON_BEDROCK'
    CREDENTIALS = (AWS_ROLE_ARN = '<aws_role_arn>');
For details on creating a Bedrock LOCATION, see CREATE LOCATION (Amazon Bedrock).

Invoke a model using the LOCATION

SELECT AWS_BEDROCK_AI_QUERY(
    'amazon.nova-micro-v1:0',
    $${"schemaVersion": "messages-v1", "messages": [{"role": "user","content": [{"text": "What is the company name that aws belongs to?"}]}],"system": [{"text": "Fulfill the user's request"}],"inferenceConfig": {}}$$,
    'my_bedrock_location') AS result;
Returns (example shape):
{"output":{"message":{"content":[{"text":"Amazon Web Services is part of Amazon."}],"role":"assistant"}},"stopReason":"end_turn","usage":{"inputTokens":18,"outputTokens":42,"totalTokens":60}}

Invoking the LLM on multiple values

SELECT n AS number,
       JSON_POINTER_EXTRACT_TEXT(
         AWS_BEDROCK_AI_QUERY(
           'amazon.nova-micro-v1:0',
           $${"schemaVersion": "messages-v1", "messages": [{"role": "user","content": [{"text": "$$ || n::TEXT || $$"}]}],"system": [{"text": "Write in english the capitalized name of the number that the user writes. Respond with a single word."}],"inferenceConfig": {}}$$,
           'my_bedrock_location'
         ),
         '/output/message/content/0/text'
       ) AS processed
FROM generate_series(1,3) s(n);
Returns

| number | processed |
| --- | --- |
| 1 | 'ONE' |
| 2 | 'TWO' |
| 3 | 'THREE' |

Sentiment analysis

SELECT JSON_POINTER_EXTRACT_TEXT(
  AWS_BEDROCK_AI_QUERY(
    'amazon.nova-micro-v1:0',
    $${"schemaVersion": "messages-v1", "messages": [{"role": "user","content": [{"text": "I love Firebolt."}]}],"system": [{"text": "Identify the sentiment in the user's sentence as a single uppercase word: POSITIVE or NEGATIVE. Respond with a single word."}],"inferenceConfig": {}}$$,
    'my_bedrock_location'
  ),
  '/output/message/content/0/text'
);
Returns
'POSITIVE'

Check your LLM token quota usage

SELECT * FROM account_db.information_schema.quotas;
Look for the LLM_TOKEN_BUDGET row to view current usage and limits.