Amazon Managed Prometheus Integration (New/Beta)

Overview

Amazon Managed Service for Prometheus (AMP) is an AWS-managed, PromQL-compatible metrics service that aggregates metrics from multiple Kubernetes clusters into one or more AMP workspaces. Each workspace exposes a single API endpoint that returns Prometheus-format metrics, so Finout can retrieve data for all connected clusters through one connection instead of querying each cluster individually. This makes AMP an efficient monitoring option for large or multi-cluster environments.

Note: The Prometheus onboarding flow has been revamped to support centralized monitoring tools and to let you control which cloud cost centers are enriched. To use this flow, contact Support at [email protected].

AMP Setup at a Glance

  1. Set Collection Method: Name your integration and select your metrics collection method.

  2. Connect S3 Bucket: The destination for Kubernetes metrics.

  3. Set CronJob permissions: Allow the CronJob to write metrics into your S3 bucket.

  4. Configure the CronJob YAML using the template, and deploy it in the cluster.

    1. Select the authentication method.

    2. Set the required environment variables in the YAML and make additional adjustments, then deploy it in your cluster.

  5. Select cost centers: Choose the cloud cost centers to enrich with the integrated Kubernetes metrics.

  6. Validate your Kubernetes Integration: Ensure that the Prometheus metrics are exported correctly to S3.

What happens next: Finout creates a centralized Prometheus Cost Center based on your AMP workspace and begins enrichment. Kubernetes cost data typically appears in Finout within about 2 days, reflecting standard cloud billing delays.

Note: If you have multiple AMP workspaces, repeat Step 3 (Set CronJob Permissions) and Step 4 (Configure and Create the CronJob) for each workspace; this means you will deploy one CronJob per workspace. For details on the Prometheus metrics collected from your Kubernetes clusters, refer to the Prometheus Overview.


1. Set Collection Method

  1. In Finout, navigate to Settings > Cost Centers > Prometheus (Kubernetes). The Set Collection Method step appears.

  2. Name your Prometheus integration.

  3. Set your metrics collection method: Select Centralized Prometheus Monitoring Tool.

  4. Select AMP.

    Note: This selection determines the configuration shown in the following steps; adjust them accordingly.

  5. Click Next. The Connect S3 Bucket step appears.

2. Connect S3 Bucket

  1. In Finout, navigate to Settings > Cost Centers > Prometheus (Kubernetes). The Connect S3 Bucket step appears.

  2. Connect an S3 bucket to store the Kubernetes metrics collected from Prometheus by this integration; Finout is granted read-only permissions to access the data. If you have already connected an S3 bucket to Finout for the AWS Cost and Usage Report (CUR), you can reuse the same S3 bucket and IAM role, and Finout will automatically populate the Role ARN, Bucket Name, and External ID fields in the console. If you want to use a different S3 bucket or haven’t configured one yet, follow the steps in Grant Finout Access to an S3 Bucket. Fill in the following fields:

    1. External ID - This is taken from your existing AWS Cost Center and is populated by default. Use this same External ID in the IAM role’s trust policy to grant Finout permissions to read from the S3 bucket that stores your Prometheus metrics.

    2. Role ARN - Provide the ARN of the IAM role that grants Finout read-only access to this S3 bucket. When creating or updating this role, use the External ID from the Finout console in the role’s trust policy (a sketch of such a trust policy follows these steps).

    3. Bucket Name - Enter the name of the S3 bucket that stores your Prometheus metrics. Use the bucket name only (no s3:// and no path). It must be in the Region you selected and readable by the Role ARN.

    4. Region - AWS region of the bucket (e.g., us-east-1). Must match the bucket’s actual region.

  3. Click Next.
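For reference, a minimal sketch of the trust policy mentioned above; <finout-principal> and <external-id> are placeholders for the values shown in the Finout console:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<finout-principal>" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<external-id>" }
      }
    }
  ]
}
```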


3. Set CronJob Permissions

In this step, you grant the CronJob permission to write the Kubernetes metrics into the bucket you configured in the previous step.

  1. Use the following updated policy (includes AMP API access permissions):
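    A minimal sketch of such a policy; <your-bucket>, <region>, <account-id>, and <workspace-id> are placeholders for your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WriteMetricsToS3",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::<your-bucket>",
        "arn:aws:s3:::<your-bucket>/*"
      ]
    },
    {
      "Sid": "QueryAmpWorkspace",
      "Effect": "Allow",
      "Action": [
        "aps:QueryMetrics",
        "aps:GetSeries",
        "aps:GetLabels",
        "aps:GetMetricMetadata"
      ],
      "Resource": "arn:aws:aps:<region>:<account-id>:workspace/<workspace-id>"
    }
  ]
}
```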

  2. Use kube-state-metrics version 2.0.0 or later. If your cluster runs the prometheus-kube-state-metrics component, add the flag below so kube-state-metrics exports all required labels to your Prometheus endpoint (you can adjust the pattern to match your setup):

    --metric-labels-allowlist=pods=[*],nodes=[*]

    Add it as an argument to the kube-state-metrics container, for example:
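An illustrative excerpt of where the argument goes; the container name and image tag below are examples, not requirements:

```yaml
# Excerpt of a kube-state-metrics container spec; only the
# args entry is required for this integration.
containers:
  - name: kube-state-metrics
    image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.0
    args:
      - --metric-labels-allowlist=pods=[*],nodes=[*]
```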

  3. Click Next.

4. Configure and Create the CronJob

  1. Identify your authentication method: Select the authentication method you use to access the API from the options listed below. After selecting your method, you’ll add the corresponding environment variables and their values to the CronJob YAML in the next step.

    • AWS SigV4 authentication - This is AMP's predefined authentication method.

  2. Copy the CronJob configuration below to a file (for example, cronjob.yaml), making sure to include the authentication environment variables for the method you selected in the previous step:

Example YAML:

  • This is an example of a CronJob that schedules a Job every 30 minutes.

  • The job queries Prometheus with a 5-second delay between queries to prevent overloading your Prometheus stack.
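A minimal sketch of such a CronJob, assuming AWS SigV4 authentication via an IAM-role-backed service account (IRSA); the image reference, names, and bracketed env values below are placeholders to replace with your own:

```yaml
# Illustrative sketch only: the image, service account, and bracketed
# values are placeholders, not Finout-published defaults.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: finout-prometheus-exporter
spec:
  schedule: "*/30 * * * *"        # run a Job every 30 minutes
  concurrencyPolicy: Forbid       # never run two exports at once
  jobTemplate:
    spec:
      template:
        spec:
          # With AWS SigV4, credentials are typically supplied through an
          # IAM-role-backed service account (IRSA) rather than env vars.
          serviceAccountName: finout-exporter
          restartPolicy: Never
          containers:
            - name: exporter
              image: <finout-exporter-image>   # placeholder: use the image from the Finout template
              env:
                - name: PROMETHUES_BACKEND     # variable name as documented
                  value: "amp"
                - name: HOSTNAME
                  value: "https://aps-workspaces.<region>.amazonaws.com/workspaces/<workspace-id>"
                - name: AMP_REGION
                  value: "<region>"
                - name: CLUSTER_LABEL_NAME
                  value: "cluster"
                - name: CLUSTER_NAME
                  value: "<cluster-name>"
                - name: S3_BUCKET
                  value: "<your-bucket>"
                - name: S3_PREFIX
                  value: "k8s/prometheus"
```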

  3. Modify the suggested YAML file above, if needed.

AMP YAML Environment Variables:

| Category | Variable | Description | Required/Optional | Default Value | Notes |
| --- | --- | --- | --- | --- | --- |
| Scope & Multi-Cluster Behavior | CLUSTER_LABEL_NAME | The label whose values represent cluster names in your metrics, allowing Finout to identify and group metrics by cluster from the single metrics endpoint. | Required | cluster | |
| Scope & Multi-Cluster Behavior | CLUSTER_NAME | The folder name under which metrics are stored in S3. | Required | None | Cluster names are extracted directly from the metric data, but this variable is still required: it determines the folder under which all centralized metrics are temporarily stored. It is recommended to set it to the name of the cluster where the CronJob is deployed. |
| Scope & Multi-Cluster Behavior | AMP_REGION | The region of the AMP workspace. | Required | None | |
| Scope & Multi-Cluster Behavior | PROMETHUES_BACKEND | Set to "amp" to work with Amazon Managed Prometheus. | Required | None | |
| Storage & Paths | S3_BUCKET | The customer's S3 bucket that stores the exported metrics. | Required | None | Must exist in the customer's environment. |
| Storage & Paths | S3_PREFIX | The S3 prefix under which metrics are stored. | Required | None | For example, k8s/prometheus. If multiple per-cluster integrations belong to the same cost center configuration, they must use the same S3_PREFIX. |
| Endpoint & Connectivity | PATH_PREFIX | Optional sub-path between the host/port and the Prometheus metrics API. | Optional | None | |
| Auth & Identity | ROLE_ARN | ARN of the IAM role to assume for authorization. | Optional | None | Used if assuming a role instead of direct access. |
| Auth & Identity | ROLE_EXTERNAL_ID | External ID for the assumed role. | Optional | None | Only needed if the IAM role requires an external ID. |
| Endpoint & Connectivity | HOSTNAME | The external AMP API endpoint. | Required | None | Must be reachable from the pod running the Finout exporter CronJob. Example: https://aps-workspaces.{AMP_REGION}.amazonaws.com/workspaces/{AMP_WORKSPACE_ID} |
| Query Window & Data Volume | TIME_FRAME | Time range per query, in seconds. | Optional | 3600 | Lower values reduce query load and the risk of OOM. |
| Query Window & Data Volume | BACKFILL_DAYS | Days of historical data to fetch on the first run. | Optional | 3 | Large values increase load and the risk of slow queries. |
| Query Window & Data Volume | INITIAL_BACKFILL_DATE | The initial backfill start date. | Optional | None | Available starting from Exporter v2.0.0. The value must be in YYYY-MM-DD format and is limited to a maximum of 3 days back. |
| Query Window & Data Volume | QUERIES_BATCH_SIZE | How many queries are processed per batch, to manage memory usage. | Optional | 5 | Available starting from Exporter v2.0.0. Minimum is 1. Lowering the value reduces memory usage but may increase overall runtime; if you see OOM kills or memory pressure, reduce QUERIES_BATCH_SIZE before changing anything else. |

To configure these fields, add them to the configuration file under the env section and use the name/value format for each desired field.

For example:
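A short sketch adding two of the optional variables to the container's env section; the values shown are their documented defaults:

```yaml
env:
  - name: TIME_FRAME
    value: "3600"            # seconds per query window
  - name: QUERIES_BATCH_SIZE
    value: "5"               # queries processed per batch
```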

Ensure that the field names and the corresponding values are correctly specified to apply the desired configuration.

  4. Run the following command in a namespace of your choice, preferably the one where the Prometheus stack is deployed:
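Assuming the manifest was saved as cronjob.yaml; replace <namespace> with the namespace you chose:

```bash
kubectl apply -f cronjob.yaml -n <namespace>
```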

  5. Trigger the job manually (instead of waiting for its first scheduled run):
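A sketch of the trigger command, assuming the CronJob name from the example above (finout-prometheus-exporter); adjust the names to your deployment:

```bash
kubectl create job --from=cronjob/finout-prometheus-exporter finout-exporter-manual -n <namespace>
```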

  6. Click Save. Result: The Prometheus cost center is created. The job automatically fetches Prometheus data from 3 days ago up to the current time.

    Note: You can change this window by modifying the BACKFILL_DAYS variable.

5. Select Cost Centers

  1. Select the cost centers that this integration will enrich with Kubernetes metrics. These cost centers can then attribute Kubernetes costs for supported services (EKS, GKE, and AKS) by namespace, workload, and label.

  2. Click Complete Integration. The cost center will be created in about 48 hours.

    Note:

    • If this cost center is already enriched by a Kubernetes integration, ensure there are no overlapping nodes between the integrations to avoid duplicated costs and resources.

    • You can hover over each created cloud cost center to see all the Kubernetes cost centers that enrich it.

6. Validate Your Kubernetes Integration

Confirm that your Prometheus Integration is working correctly:

S3 Validation

To confirm that Prometheus metrics are being exported correctly to your S3 bucket:

  1. Navigate to the S3 path, for example: s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/ You should see a list of metric folders, such as: metric=cpu_requests/

  2. Open a metric folder and verify that .json.gz files are uploaded. Each file should have a timestamp prefix, for example: s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/metric=cpu_requests/1759622400_cpu_requests.json.gz If these files appear, the CronJob ran successfully, and Prometheus metric files were generated and stored in the correct S3 structure.

Data Availability

Kubernetes cost and usage data will appear across Finout within 48 hours, matching the standard cloud billing data delivery window.

Note: If any issues occur, share your exporter logs with Finout support at [email protected] for further investigation.

For more information, please see the FAQs and Troubleshooting section.

For updates introduced in new Prometheus Exporter versions and their impact on cost calculation, refer to the release notes.
