Amazon Managed Prometheus Integration (Beta)
Overview
AMP is an AWS-managed, PromQL-compatible metrics service that aggregates metrics from multiple Kubernetes clusters into one or more AMP workspaces. Each workspace exposes a single API endpoint that returns Prometheus-format metrics, so Finout can retrieve data for all clusters through one connection instead of querying each cluster individually. This makes AMP an efficient monitoring option for large or multi-cluster environments.
AMP Setup at a Glance
Prerequisite: Ensure your AMP workspace is already ingesting the required Prometheus metrics from all relevant Kubernetes clusters.
Connect your S3 bucket as the destination where the Finout Metrics Exporter CronJob will write Prometheus-format metrics retrieved from AMP.
Add the required IAM policy to your Kubernetes worker node role so the Finout Metrics Exporter CronJob can query AMP and write data to your S3 bucket.
Configure the CronJob YAML using the Finout Metrics Exporter template with the AMP backend.
Configure the required environment variables and make any needed adjustments, then deploy the CronJob in your cluster.
Validate your Kubernetes integration and ensure Prometheus metrics from AMP are being exported correctly to S3.
What happens next: Finout creates a centralized Prometheus Cost Center based on your AMP workspace and begins enrichment. Kubernetes cost data typically appears in Finout within about 2 days, reflecting standard cloud billing delays.
Important: To reduce AMP costs, the best practice is to have AMP scrape only the metrics required for Finout’s Kubernetes cost calculation, assuming this is your only monitoring use case. These metrics must be scraped at a one-minute sampling interval, as increasing the interval will reduce cost-allocation accuracy. The required metrics are:
kube_node_info
container_cpu_usage_seconds_total
container_memory_working_set_bytes
kube_node_status_capacity{resource="cpu"}
kube_node_status_capacity{resource="memory"}
kube_pod_container_resource_requests{resource="cpu"}
kube_pod_init_container_resource_requests{resource="cpu"}
kube_pod_container_resource_requests{resource="memory"}
kube_pod_init_container_resource_requests{resource="memory"}
container_network_receive_bytes_total
container_network_transmit_bytes_total
kube_pod_labels
kube_node_labels
kube_namespace_labels
kube_pod_info
kube_replicaset_owner
kube_job_owner
1. Connect an S3 Bucket

In Finout, navigate to Settings > Cost Centers > Kubernetes. The Connect S3 Bucket step appears.

Connect an S3 bucket to store the Kubernetes metrics collected from Prometheus by this integration. Finout is granted read-only permissions to access the data. Ensure that you have already connected an S3 bucket to Finout for the AWS Cost and Usage Report (CUR); you can reuse the same S3 bucket and IAM role, and Finout will automatically populate the Role ARN, Bucket name, and External ID fields in the console. If you want to use a different S3 bucket or haven't configured one yet, follow the steps in Grant Finout Access to an S3 Bucket. Fill in the following fields:
External ID - this is taken from your existing AWS Cost Center and is filled in by default. Use this same External ID in the IAM role’s trust policy to grant Finout permissions to read from the S3 bucket that stores your Prometheus metrics.
Cost Center - Select an AWS cost center account
ARN Role - Provide the ARN of the IAM role that grants Finout read-only access to this S3 bucket. When creating or updating this role, make sure you use the External ID from the Finout console in the role’s trust policy.
Bucket Name - Enter the name of the S3 bucket that stores your Prometheus metrics. Use the bucket name only (no s3:// prefix and no path). It must be in the Region you selected and readable by the Role ARN.
Region - AWS region of the bucket (e.g., us-east-1). Must match the bucket's actual region.
Click Next.
Important: If you want to integrate Finout with more than one cluster, repeat Step 2 (Add the Kubernetes Worker Node Role Policy) and Step 3 (Create and Configure the CronJob) for each cluster. If the clusters belong to the same Cost Center, make sure they all use the same S3_PREFIX.
2. Add the Kubernetes Worker Node Role Policy
In this step, you grant the CronJob permissions to query AMP and write the Kubernetes metrics to the bucket you configured in the previous step.
Use the following updated policy (includes AMP API access permissions):
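A sketch of such a policy is shown below, assuming the standard AMP query actions and write access to the export path. The workspace ARN, account ID, bucket name, and prefix are placeholders; use the exact policy provided in the Finout console.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAmpQueries",
      "Effect": "Allow",
      "Action": [
        "aps:QueryMetrics",
        "aps:GetSeries",
        "aps:GetLabels",
        "aps:GetMetricMetadata"
      ],
      "Resource": "arn:aws:aps:<AMP_REGION>:<ACCOUNT_ID>:workspace/<AMP_WORKSPACE_ID>"
    },
    {
      "Sid": "AllowMetricsExportToS3",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/<S3_PREFIX>/*"
    }
  ]
}
```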
Use kube-state-metrics version 2.0.0 or later. If your cluster runs the prometheus-kube-state-metrics Deployment, add the flag below so kube-state-metrics exports all required labels to your Prometheus endpoint (you can adjust the pattern to match your setup):
--metric-labels-allowlist=pods=[*],nodes=[*]
Do this by adding the flag as an argument to the kube-state-metrics container. Click Next.
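For reference, the flag is typically placed in the container's args list. This is a minimal sketch assuming a standard kube-state-metrics Deployment; container and field names should be matched to your own manifest:

```yaml
# Sketch: adding the allowlist flag to the kube-state-metrics container.
spec:
  template:
    spec:
      containers:
        - name: kube-state-metrics
          args:
            - --metric-labels-allowlist=pods=[*],nodes=[*]
```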
3. Create and Configure the CronJob

This is an example of a CronJob that schedules a Job every 30 minutes.
The job queries Prometheus with a 5-second delay between queries to prevent overloading your Prometheus stack.
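A sketch of such a CronJob with the AMP backend is shown below. The image name, namespace, and all values are illustrative placeholders drawn from the environment variables documented below; use the actual template provided in the Finout console.

```yaml
# Sketch of the Finout Metrics Exporter CronJob (AMP backend).
# All values in angle brackets are placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: finout-metrics-exporter
spec:
  schedule: "*/30 * * * *"   # run every 30 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: finout-metrics-exporter
              image: <finout-metrics-exporter-image>   # from the Finout template
              env:
                - name: PROMETHEUS_BACKEND
                  value: "amp"
                - name: HOSTNAME
                  value: "https://aps-workspaces.<AMP_REGION>.amazonaws.com/workspaces/<AMP_WORKSPACE_ID>"
                - name: AMP_REGION
                  value: "<AMP_REGION>"
                - name: CLUSTER_NAME
                  value: "<cluster-name>"
                - name: CLUSTER_LABEL_NAME
                  value: "cluster"
                - name: S3_BUCKET
                  value: "<bucket-name>"
                - name: S3_PREFIX
                  value: "k8s/prometheus"
```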
Modify the suggested YAML file above, if needed.
AMP YAML Environment Variables:
| Category | Variable | Description | Required | Default | Notes |
| --- | --- | --- | --- | --- | --- |
| Scope & Multi-Cluster Behavior | CLUSTER_LABEL_NAME | The label whose values represent cluster names in your metrics, allowing Finout to identify and group metrics by cluster from the single metrics endpoint. | Required | cluster | The value is "cluster" by default. |
| Scope & Multi-Cluster Behavior | CLUSTER_NAME | The folder under which metrics are stored in S3. | Required | None | Cluster names are extracted directly from the metric data. However, this variable is still required and determines the folder name under which all centralized metrics are temporarily stored. It is recommended to set it to the name of the cluster where the CronJob is deployed. |
| Scope & Multi-Cluster Behavior | AMP_REGION | The region of the AMP workspace. | Required | None | |
| Scope & Multi-Cluster Behavior | PROMETHEUS_BACKEND | Set to "amp" to work with Amazon Managed Prometheus. | Required | None | |
| Storage & Paths | S3_BUCKET | The S3 bucket that stores the exported metrics. | Required | None | Must exist in your environment. |
| Storage & Paths | S3_PREFIX | S3 prefix under which metrics are stored. | Required | None | For example, k8s/prometheus. Must be the same S3_PREFIX across all per-cluster integrations within the same cost center. |
| Endpoint & Connectivity | PATH_PREFIX | Optional sub-path between host/port and the Prometheus metrics API. | Optional | None | |
| Auth & Identity | ROLE_ARN | ARN of the IAM role to assume for authorization. | Optional | None | Used if assuming a role instead of direct access. |
| Auth & Identity | ROLE_EXTERNAL_ID | External ID for the assumed role. | Optional | None | Only needed if the IAM role requires an external ID. |
| Endpoint & Connectivity | HOSTNAME | The external AMP API endpoint. | Required | None | Must be reachable from the pod running the Finout exporter CronJob. Example: https://aps-workspaces.{AMP_REGION}.amazonaws.com/workspaces/{AMP_WORKSPACE_ID} |
| Query Window & Data Volume | TIME_FRAME | Time range per query, in seconds. | Optional | 3600 | Lower values reduce query load and the risk of OOM. |
| Query Window & Data Volume | BACKFILL_DAYS | Days of historical data to fetch on the first run. | Optional | 3d | Large values increase load and the risk of slow queries. |
To configure these fields, add them to the configuration file under the env section and use the name/value format for each desired field.
For example:
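A minimal sketch of the env section, using two of the variables documented above (the values shown are the documented defaults):

```yaml
env:
  - name: TIME_FRAME
    value: "3600"
  - name: BACKFILL_DAYS
    value: "3d"
```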
Ensure that the field names and their values are specified correctly so the desired configuration is applied.
Run the command in a namespace of your choice, preferably the one where the Prometheus stack is deployed:
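Assuming the manifest is saved as finout-metrics-exporter.yaml (a hypothetical filename) and the Prometheus stack runs in the monitoring namespace, the command looks like:

```shell
# Apply the CronJob manifest; filename and namespace are placeholders.
kubectl apply -f finout-metrics-exporter.yaml -n monitoring
```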
Trigger the job (instead of waiting for it to start):
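One way to do this is to create a one-off Job from the CronJob; the CronJob, Job, and namespace names below are placeholders:

```shell
# Start the job immediately instead of waiting for the next schedule.
kubectl create job --from=cronjob/finout-metrics-exporter finout-metrics-exporter-manual -n monitoring
```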
Click Save. Result: The Prometheus cost center is created. The job automatically fetches Prometheus data from 3 days ago up to the current time.
4. Validate Your Kubernetes Integration
Confirm that your Prometheus Integration is working correctly:
S3 Validation
To confirm that Prometheus metrics are being exported correctly to your S3 bucket:
Navigate to the S3 path, for example:
s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/
You should see a list of metric folders, such as:
metric=cpu_requests/
Open a metric folder and verify that .json.gz files are uploaded. Each file should have a timestamp prefix, for example:
s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/metric=cpu_requests/1759622400_cpu_requests.json.gz
If these files appear, the CronJob ran successfully, and Prometheus metric files were generated and stored in the correct S3 structure.
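You can also list the path from the CLI; the bucket, prefix, and cluster name below reuse the example values above and should be replaced with your own:

```shell
# List exported metric files under the example path.
aws s3 ls s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/ --recursive
```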
Data Availability
Kubernetes cost and usage data will appear across Finout within 48 hours, matching the standard cloud billing data delivery window.
For more information, please see the FAQs and Troubleshooting section.