Thanos Integration (New/Beta)
This integration is currently in Beta; contact Finout Support to enable it.
Overview
Centralized Prometheus Monitoring via Thanos exposes a single, PromQL-compatible API endpoint that aggregates metrics from multiple Prometheus servers across clusters. All metrics are accessible through a single endpoint, allowing Finout to retrieve them without querying each cluster separately for a more efficient approach in large, multi-cluster environments. For a holistic introduction to Prometheus in Finout and alternative topologies, see the Prometheus Integration Overview.
Finout provides a consistent Kubernetes enrichment process across AWS, GCP, and Azure. Metrics are exported to your connected S3 bucket, where Finout automatically reads them and uses them to enrich cloud provider data with Kubernetes abstractions.
Thanos Setup at a Glance
Set Collection Method: Name your integration and select your metrics collection method.
Connect S3 Bucket: The destination for Kubernetes metrics.
Set cronjob permissions: This allows the cronjob to write the metrics into your S3 bucket.
Configure and create the CronJob: Build the CronJob YAML from the template:
Select the authentication method.
Set the required environment variables in the YAML, make any additional adjustments, then deploy it in your cluster.
Select cost centers: Choose the cost centers that this integration will enrich with Kubernetes metrics.
Validate your Kubernetes Integration: Ensure that the Prometheus metrics are exported correctly to S3.
What happens next:
Finout creates a centralized Prometheus Cost Center that connects all clusters monitored by Thanos, links it to your existing AWS Cost Center, and starts enrichment. Kubernetes cost data usually appears in Finout within about two days, reflecting normal cloud billing delays.
Note:
This flow is very similar to the Per-Cluster integration, with a few YAML adjustments (notably the Thanos endpoint and authentication variables). We’ll link to the Per-Cluster guide and list the specific Thanos auth options and examples later in this doc.
Linking a Prometheus Cost Center to a non-AWS Cost Center (GCP or Azure) currently requires Support to complete on Finout’s side and cannot be done in-app.
1. Set Collection Method

In Finout, navigate to Settings > Cost Centers > Prometheus (Kubernetes). The Set Collection Method step appears.

Name your Prometheus integration.
Set your metrics collection method: Select Centralized Prometheus Monitoring Tool.

Select Thanos.
Click Next. The Connect S3 Bucket step appears.
2. Connect S3 Bucket

Connect an S3 bucket that will contain the Kubernetes metrics collected from Prometheus by this integration. Finout is granted read-only permissions to access the data.
Ensure that you have already connected an S3 bucket to Finout for the AWS Cost and Usage Report (CUR). You can reuse the same S3 bucket and IAM role, and Finout will automatically populate the Role ARN, Bucket name, and External ID fields in the console. If you want to use a different S3 bucket or haven’t configured one yet, follow the steps in Grant Finout Access to an S3 Bucket.
Fill in the following fields:
External ID - this is taken from your existing AWS Cost Center and is filled in by default. Use this same External ID in the IAM role’s trust policy to grant Finout permissions to read from the S3 bucket that stores your Prometheus metrics.
ARN Role - Provide the ARN of the IAM role that grants Finout read-only access to this S3 bucket. When creating or updating this role, make sure you use the External ID from the Finout console in the role’s trust policy.
Bucket Name - Enter the name of the S3 bucket that stores your Prometheus metrics. Use the bucket name only (no s3:// and no path). It must be in the Region you selected and readable by the Role ARN.
S3 Prefix - S3 prefix (folder path) in the bucket used to store Prometheus metrics (default is k8s/prometheus).
Region - AWS region of the bucket (e.g., us-east-1). Must match the bucket’s actual region.
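As an illustrative sketch (not Finout’s exact policy), a trust policy that uses the External ID might look like the following. The Finout principal account ID and the External ID are placeholders; take the real values from the Finout console:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<finout-account-id>:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<external-id-from-finout-console>" }
      }
    }
  ]
}
```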
Click Next.
Important:
If you want to integrate Finout with more than one cluster, repeat Step 3 (Set CronJob Permissions) and Step 4 (Configure and Create the CronJob) for each cluster. If the clusters belong to the same Cost Center, make sure they all use the same S3_PREFIX.
The BUCKET_NAME and S3_PREFIX values used in steps 3 and 4 must be identical to the values added in this step.
3. Set CronJob Permissions
In this step, you will grant the cronjob permissions to write the Kubernetes metrics into the bucket you configured in the previous step.

Configure Policy: Attach this policy to the Kubernetes node role, or to the IAM role used by your CronJob, so the CronJob has the S3 access it needs:
Write to store exported metrics in your bucket.
Read to check its saved state in the bucket and know from which timestamp to continue.
Delete to remove files from the bucket older than the retention period (30 days by default, configurable).
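The policy itself is not reproduced in this page; a minimal sketch covering the three permission classes above might look like the following (bucket name and prefix are placeholders — use the values from the previous step):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FinoutExporterS3Access",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::<your-bucket>",
        "arn:aws:s3:::<your-bucket>/k8s/prometheus/*"
      ]
    }
  ]
}
```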
Prepare kube-state-metrics (Prerequisite): Use kube-state-metrics version 2.0.0 or later. If your cluster uses the prometheus-kube-state-metrics DaemonSet, add the flag below so kube-state-metrics exports all required labels to your Prometheus endpoint (you can adjust the pattern to match your setup):
--metric-labels-allowlist=pods=[*],nodes=[*]
Do this by adding the flag to the kube-state-metrics container’s args.
Click Next.
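Adding the flag to the container args can be sketched as follows. This is a fragment of the kube-state-metrics Deployment/DaemonSet spec; the container name and image tag follow common kube-state-metrics layouts and may differ in your cluster, so keep your existing values and only add the flag:

```yaml
# Fragment of the kube-state-metrics container spec (names illustrative).
containers:
  - name: kube-state-metrics
    image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.0
    args:
      - --metric-labels-allowlist=pods=[*],nodes=[*]
```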
4. Configure and Create the CronJob

Identify your authentication method: Select the authentication method you use to access the API from the options listed below. After selecting your method, you’ll add the corresponding environment variables and their values to the CronJob YAML in the next step.
No authentication - For self-managed setups. No environment variable required.
API Token authentication - Static token sent as a token header (with Content-Type: application/json) on Prometheus requests. Environment variable: PROMETHEUS_AUTH_TOKEN.
Bearer Token authentication - Bearer token placed in the Authorization: Bearer header for Prometheus calls. Environment variable: PROMETHEUS_BEARER_AUTH_TOKEN.
Username and Password authentication - Basic Auth credentials sent on Prometheus requests. Environment variables: PROMETHEUS_USERNAME and PROMETHEUS_PASSWORD.
Tenant ID - Tenant ID sent in the X-Scope-OrgID header, for multi-tenant setups. Environment variable: PROMETHEUS_X_SCOPE_ORGID.
Copy the CronJob configuration below to a file (for example, cronjob.yaml). Make sure to include the authentication environment variable you selected in the previous step.
Example YAML:
Note: Add your relevant authentication method environment variables and values from the previous step.
This is an example of a CronJob that schedules a Job every 30 minutes.
The job queries Prometheus with a 5-second delay between queries so as not to overload your Prometheus stack.
Modify the suggested YAML file above, if needed.
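A minimal sketch of such a CronJob manifest is shown below, assuming the job name finout-prometheus-exporter-job used later in this guide; the exporter image, namespace, schedule details, and env values are placeholders to adapt to your environment:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: finout-prometheus-exporter-job
spec:
  schedule: "*/30 * * * *"   # run every 30 minutes
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: finout-prometheus-exporter
              image: <finout-exporter-image>   # provided by Finout
              env:
                - name: HOSTNAME
                  value: "thanos-query.monitoring.svc"   # your Thanos Query endpoint
                - name: PORT
                  value: "9090"
                - name: S3_BUCKET
                  value: "<your-bucket>"
                - name: S3_PREFIX
                  value: "k8s/prometheus"
                - name: CLUSTER_NAME
                  value: "<your-cluster-name>"
                # Add the authentication variable for your selected method, e.g.:
                # - name: PROMETHEUS_BEARER_AUTH_TOKEN
                #   value: "<token>"
```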
Thanos YAML Environment Variables:
Scope & Multi-Cluster Behavior

CLUSTER_LABEL_NAME
Required. Default: cluster.
The label whose values represent cluster names in your metrics, allowing Finout to identify and group metrics by cluster from the single metrics endpoint. The value is "cluster" by default.

CLUSTER_NAME
Required. No default.
Defines the folder where metrics are stored in S3 and appears as the cluster name within the Finout app. Cluster names are extracted directly from the metric data; however, this variable is still required and determines the folder name under which all centralized metrics are temporarily stored. It is recommended to set it to the name of the cluster where the CronJob is deployed.

Endpoint & Connectivity

METRICS_READINESS_PATH
Optional. Default: /-/ready.
Lets Finout’s exporter perform readiness validation, ensuring the Thanos API is fully available before scraping begins. Set a custom readiness endpoint to override the default. This variable is unique to Thanos.

SCHEME
Optional. Default: http.
The protocol used when calling the Prometheus metrics endpoint: either https or http.

PATH_PREFIX
Optional. No default.
Optional sub-path between the host/port and the Prometheus metrics API.

HOSTNAME
Optional. Default: localhost.
The Prometheus-compatible API endpoint. Must be reachable from the pod running the Finout exporter CronJob.

PORT
Optional. Default: 9090.
The API port (9090 is the standard Prometheus port).

Auth & Identity

PROMETHEUS_AUTH_TOKEN
Optional. No default.
Static token sent as a token header (with Content-Type: application/json) on Prometheus requests.

PROMETHEUS_BEARER_AUTH_TOKEN
Optional. No default.
Bearer token placed in the Authorization: Bearer header for Prometheus calls.

PROMETHEUS_USERNAME
Optional. No default.
The Prometheus username for Basic Auth. Use together with PROMETHEUS_PASSWORD.

PROMETHEUS_PASSWORD
Optional. No default.
The Prometheus password; Basic Auth credentials are sent on Prometheus requests.

PROMETHEUS_X_SCOPE_ORGID
Optional. No default.
Tenant ID sent in the X-Scope-OrgID header, for multi-tenant setups.

ROLE_ARN
Optional. No default.
ARN of the IAM role to assume for authorization. Used when assuming a role instead of direct access.

ROLE_EXTERNAL_ID
Optional. No default.
External ID for the assumed role. Only needed if the IAM role requires an external ID.

Storage & Paths

S3_BUCKET
Required. No default.
Your S3 bucket where the exported metrics are stored. The bucket must exist in your environment.

S3_PREFIX
Required. No default.
S3 prefix where metrics are stored (for example, k8s/prometheus). All per-cluster integrations within the same cost center must use the same S3_PREFIX.

Query Window & Data Volume

TIME_FRAME
Optional. Default: 3600.
Time range per query, in seconds. Lower values reduce query load and the risk of OOM errors.

BACKFILL_DAYS
Optional. Default: 3.
Days of historical data to fetch on the first run. Large values increase load and the risk of slow queries.

INITIAL_BACKFILL_DATE
Optional. No default.
Defines the initial backfill start date, in YYYY-MM-DD format, limited to a maximum of 3 days back. Available starting from Exporter v2.0.0.

QUERIES_BATCH_SIZE
Optional. Default: 5.
Controls how many queries are processed per batch to manage memory usage. Minimum is 1; lowering it reduces memory usage but may increase overall runtime. Available starting from Exporter v2.0.0. If you see OOM kills or memory pressure, reduce QUERIES_BATCH_SIZE before changing anything else.
To configure these fields, add them to the configuration file under the env section, using the name/value format for each field. Ensure that the field names and their values are correctly specified to apply the desired configuration.
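The name/value format under the env section can be sketched as follows (the variables shown and their values are illustrative):

```yaml
# Fragment of the CronJob container spec: each exporter setting is a
# name/value pair under env.
env:
  - name: CLUSTER_LABEL_NAME
    value: "cluster"
  - name: TIME_FRAME
    value: "3600"
  - name: QUERIES_BATCH_SIZE
    value: "3"
```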
Run the command in a namespace of your choice, preferably the one where the Prometheus stack is deployed:
kubectl create -f <filename>
To trigger the job immediately (instead of waiting for the schedule):
kubectl create job --from=cronjob/finout-prometheus-exporter-job finout-prometheus-exporter-job -n <namespace>
The job automatically fetches Prometheus data from 3 days ago up to the current time.
Click Save. The Prometheus cost center is created.
5. Select Cost Centers

Select the cost centers that this integration will enrich with Kubernetes metrics. These cost centers can then attribute Kubernetes costs for supported services (EKS, GKE, and AKS) by namespace, workload, and label.
Click Complete Integration. The cost center will be created in about 48 hours.

6. Validate Your Kubernetes Integration
Confirm that your Prometheus Integration is working correctly:
S3 Validation
To confirm that Prometheus metrics are being exported correctly to your S3 bucket:
Navigate to the S3 path, for example:
s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/
You should see a list of metric folders, such as metric=cpu_requests/.
Open a metric folder and verify that .json.gz files are uploaded. Each file should have a timestamp prefix, for example:
s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/metric=cpu_requests/1759622400_cpu_requests.json.gz
If these files appear, the CronJob ran successfully, and Prometheus metric files were generated and stored in the correct S3 structure.
Data Availability
Kubernetes cost and usage data will appear across Finout within 48 hours, matching the standard cloud billing data delivery window.
For more information, please see the FAQs and Troubleshooting section. For new updates to the Prometheus Exporter introduced between versions and their impact on cost calculation, refer to the release notes.