Configure Logs with Firehose

Logs with Firehose uses Amazon Data Firehose and a small amount of supporting infrastructure to deliver logs to the ingestion pipeline within Grafana Cloud.

Before you begin

You need the following information to complete configuration successfully:

  • Target endpoint URL: The correct endpoint for your Grafana Cloud instance
  • Loki User: The numeric value of the User

To obtain these items, perform the following steps.

  1. Navigate to your Grafana Cloud portal.
  2. Select your Grafana Cloud stack.
  3. Locate the Loki tile, and click Details.
  4. Copy and save the values in the URL field (which is the target endpoint) and in the User field for use in future steps.
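As a convenience for the later steps, you can keep the two values at hand as shell variables. This is a minimal sketch; the values shown are placeholders, not real stack values:

```bash
# Placeholder values -- replace with the URL and User copied from the Loki tile.
export LOKI_URL="https://logs-prod3.grafana.net"
export LOKI_USER="123456"
echo "$LOKI_URL $LOKI_USER"
```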

Set up required authentication

When Grafana Cloud receives logs from AWS, the access policy and its associated authentication token enable Grafana to:

  • Authenticate the request.
  • Determine which customer the data is coming from so Grafana can store it appropriately in Loki.

Complete the following to create an access policy:

  1. In your Grafana Cloud stack, from the main menu under SECURITY, click Access Policies.
  2. Click Create access policy.
  3. In the Display name box, create a display name to appear in the access policies list.
  4. In the Realms box, select the first option, “your_org_name (all stacks)”.
  5. In the Scopes section, for logs select Write to allow logs to write to your account.
  6. Click Create to create the access policy.
  7. In the access policy you just created, click Add token to create a token for Data Firehose.
  8. Enter a name for the token, and click Create.
  9. Click Copy to clipboard and save the new token. In a future step, you replace LOGS_WRITE_TOKEN with this token.
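As with the Loki values, you can keep the token in a shell variable for the later Terraform or CloudFormation step. The value below is a placeholder, not a real token:

```bash
# Placeholder -- replace with the token you copied to the clipboard.
export LOGS_WRITE_TOKEN="glc_replace_me"
# Print only the token length to avoid echoing the secret itself.
echo "${#LOGS_WRITE_TOKEN}"
```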

Set up on AWS account

You must create some infrastructure in your AWS account for sending the logs to Grafana Cloud:

  • Authentication components
  • A component to route logs into the delivery stream
  • The Data Firehose delivery stream

Infrastructure in AWS account

You can accomplish this with Terraform or CloudFormation.

Set up with Terraform

  1. Download the Terraform snippet file.

  2. Open the snippet file, and complete the sections labeled with FILLME as shown in the following example:

    terraform
    provider "aws" {
    // FILLME: AWS region
    region = "us-east-2"
    
    // FILLME: local AWS profile to use
    profile = "test-profile"
    }
  3. Run terraform apply, including the required variables as shown in the following example:

    bash
    terraform apply \
        -var="fallback_bucket_name=<Name for an s3 bucket, to save data in case of failures>" \
        -var="firehose_stream_name=<Kinesis stream name>" \
        -var="target_endpoint=<Target AWS Logs endpoint provided by Grafana>" \
        -var="logs_instance_id=<Loki User>" \
        -var="logs_write_token=<Token created for Data Firehose>"
    • fallback_bucket_name: The name of an S3 bucket where logs can be stored in case the delivery fails
    • firehose_stream_name: Enter a meaningful name for the Data Firehose stream
    • target_endpoint: The target endpoint URL for your Grafana Cloud instance. Add the prefix aws-. For example, if your Loki URL is https://logs-prod3.grafana.net, then your Logs with Firehose URL will be https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push.
    • logs_instance_id: The numeric value of the User field of the Loki data source
    • logs_write_token: The token you created for Data Firehose
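The target_endpoint value can be derived from the Loki URL mechanically, as described above: add the aws- host prefix and the push path. A minimal sketch, using the example hostname only:

```bash
# Build the Logs with Firehose endpoint from a Loki URL:
# add the "aws-" host prefix and the /aws-logs/api/v1/push path.
loki_url="https://logs-prod3.grafana.net"
target_endpoint="$(printf '%s' "$loki_url" | sed 's|^https://|https://aws-|')/aws-logs/api/v1/push"
echo "$target_endpoint"
# https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push
```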

Set up with CloudFormation

  1. Download the CloudFormation snippet file.

  2. Run the following aws cloudformation create-stack command, supplying the required parameters:

    bash
    aws cloudformation create-stack --stack-name grafana-aws-logs-firehose --template-body file://./aws-logs-firehose.yaml --capabilities CAPABILITY_NAMED_IAM \
    --parameters \
        ParameterKey=FallbackS3BucketName,ParameterValue=aws-logs-fallback \
        ParameterKey=FirehoseStreamName,ParameterValue=grafana-aws-logs \
        ParameterKey=TargetEndpoint,ParameterValue=TARGET_ENDPOINT \
        ParameterKey=LogsInstanceID,ParameterValue=LOKI_USER \
        ParameterKey=LogsWriteToken,ParameterValue=LOGS_WRITE_TOKEN
    • FallbackS3BucketName: The name of an S3 bucket where logs can be stored in case the delivery fails
    • FirehoseStreamName: Enter a meaningful name for the Data Firehose stream
    • TargetEndpoint: The target endpoint URL for your Grafana Cloud instance. Add the prefix aws-. For example, if your Loki URL is https://logs-prod3.grafana.net, then your Logs with Firehose URL will be https://aws-logs-prod3.grafana.net/aws-logs/api/v1/push.
    • LogsInstanceID: The numeric value of the User field of the Loki data source
    • LogsWriteToken: The token you created for Data Firehose

Set up CloudWatch subscription filter

The CloudWatch subscription filter:

  • Reads logs from a selected CloudWatch log group.
  • Optionally filters the logs.
  • Sends the logs to the Data Firehose stream.

Configure with AWS CLI

Use the following example to create a subscription filter with the AWS CLI:

bash
aws logs put-subscription-filter \
  --log-group-name "<log group name to send logs from>" \
  --filter-name "<Name of the subscription filter>" \
  --filter-pattern "<Optional filter expression>" \
  --destination-arn "<ARN of the Kinesis firehose stream created above>" \
  --role-arn "<ARN of the IAM role created for sending logs above>"

Configure with Terraform

Use the following example to configure with Terraform, and include the required variables:

terraform
resource "aws_cloudwatch_log_subscription_filter" "filter" {
  name            = "filter_name"
  role_arn        = aws_iam_role.logs.arn
  log_group_name  = "/aws/lambda/example_lambda_name"
  filter_pattern  = "" // Optional: Filter expression
  destination_arn = aws_kinesis_firehose_delivery_stream.main.arn
  distribution    = "ByLogStream"
}
  • name: Enter a meaningful name for the subscription filter
  • role_arn: The ARN of the IAM role for sending logs, created in the previous step
  • log_group_name: The log group the logs should be sent from
  • filter_pattern: An optional filter expression
  • destination_arn: The ARN of the Data Firehose delivery stream, created in the previous snippet

Configure with CloudFormation

Use the following example to configure with CloudFormation, and include the required variables:

yaml
SubscriptionFilter:
  Type: AWS::Logs::SubscriptionFilter
  Properties:
    DestinationArn: "<Firehose delivery stream ARN>"
    FilterPattern: ""
    LogGroupName: "/aws/lambda/test-lambda"
    RoleArn: "<IAM Role for sending logs ARN, created in the steps above>"
  • FilterPattern: An optional filter expression
  • LogGroupName: The log group the logs should be sent from
  • DestinationArn: ARN of the Data Firehose delivery stream, created in the previous steps
  • RoleArn: ARN of the IAM role for sending logs, created in the previous steps

Custom static labels

You can use the X-Amz-Firehose-Common-Attributes header to set extra static labels. You can configure the header in the Parameters section of the Amazon Data Firehose delivery stream configuration. Label names must be prefixed with lbl_. Label names and label values must be compatible with the Prometheus data model specification.

The following JSON is an example of a valid X-Amz-Firehose-Common-Attributes header value with two custom labels:

json
{
  "commonAttributes": {
    "lbl_label1": "value1",
    "lbl_label2": "value2"
  }
}
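Because label names must carry the lbl_ prefix and conform to the Prometheus data model name pattern ([a-zA-Z_][a-zA-Z0-9_]*), a quick shell check can catch invalid names before you configure the header. This is a hypothetical helper, not part of the product:

```bash
# Prints "valid" only for names with the lbl_ prefix that also match
# the Prometheus label name pattern.
is_valid_label() {
  case "$1" in
    lbl_*) printf '%s' "$1" | grep -Eq '^[a-zA-Z_][a-zA-Z0-9_]*$' && echo valid || echo invalid ;;
    *) echo invalid ;;
  esac
}
is_valid_label "lbl_label1"    # valid
is_valid_label "lbl_bad-name"  # invalid: '-' is not allowed
is_valid_label "environment"   # invalid: missing lbl_ prefix
```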

Next steps

To verify that AWS is sending logs to your Grafana instance, log in to your instance and run this LogQL query:

LogQL
{job="cloud/aws"}