Mimir Integration

Overview

This page covers Centralized Prometheus Monitoring via Mimir: Mimir exposes a single, PromQL-compatible API endpoint that aggregates metrics from multiple Prometheus servers across clusters. Because all metrics are reachable through one endpoint, Finout can read them without querying each cluster separately—a more efficient approach for large, multi-cluster environments. For a holistic introduction to Prometheus in Finout and alternative topologies, see the Prometheus Integration Overview.

Finout supports Kubernetes enrichment across AWS, GCP, and Azure using the same pattern: Mimir-sourced metrics are exported into your connected S3 bucket, and Finout reads them to enrich Kubernetes costs.

Mimir Setup at a Glance

  1. Ensure an S3 bucket is connected to Finout for exporting Mimir’s Prometheus metrics.

  2. Add the required Kubernetes worker node role policy.

  3. Configure and create the CronJob (Finout Metrics Exporter) targeting your Mimir API. During configuration, choose an authentication method from the supported options and set the corresponding environment variables in the YAML file; then apply/deploy.

  4. Make any required YAML updates and confirm the CronJob can write to S3.

What happens next:

Finout creates a Prometheus (centralized) Cost Center, automatically links it to your AWS Cost Center, and begins enrichment. Data appears in Finout within ~2 days (cloud billing delay).

1. Connect an S3 Bucket

  1. In Finout, navigate to Settings > Cost Centers > Kubernetes. The Connect S3 Bucket step appears.

  2. Connect an S3 bucket to store the Kubernetes metrics collected from Prometheus through this integration. Finout is granted read-only permissions to access the data. If you have already connected an S3 bucket to Finout for the AWS Cost and Usage Report (CUR), you can reuse the same S3 bucket and IAM role; Finout automatically populates the Role ARN, Bucket Name, and External ID fields in the console. If you want to use a different S3 bucket, or haven’t configured one yet, follow the steps in Grant Finout Access to an S3 Bucket. Fill in the following fields:

    1. External ID - This is taken from your existing AWS Cost Center and filled in by default. Use the same External ID in the IAM role’s trust policy to grant Finout permission to read from the S3 bucket that stores your Prometheus metrics.

    2. Cost Center - Select the AWS cost center account to link.

    3. ARN Role - Provide the ARN of the IAM role that grants Finout read-only access to this S3 bucket. When creating or updating this role, make sure you use the External ID from the Finout console in the role’s trust policy.

    4. Bucket Name - Enter the name of the S3 bucket that stores your Prometheus metrics. Use the bucket name only (no s3:// and no path). It must be in the Region you selected and readable by the Role ARN.

    5. Region - AWS region of the bucket (e.g., us-east-1). Must match the bucket’s actual region.

  3. Click Next.

Note: Finout supports enriching AWS (EKS), GCP (GKE), and Azure (AKS) Kubernetes costs using Kubernetes metrics. However, this integration currently requires the Kubernetes metrics to be uploaded to an AWS S3 bucket. To connect other cost centers, contact Finout Support at [email protected].
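The IAM role referenced above typically pairs Finout’s AWS principal with the External ID in its trust policy. A minimal sketch of that trust policy shape (the principal ARN and External ID below are placeholders; use the exact values shown in the Finout console):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<FINOUT_AWS_PRINCIPAL_ARN>" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<EXTERNAL_ID_FROM_FINOUT_CONSOLE>" }
      }
    }
  ]
}
```

The `sts:ExternalId` condition is what ties the role to your specific Finout account, so the role cannot be assumed without the matching External ID.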

2. Add the Kubernetes Worker Node Role Policy

In this step, you grant the CronJob permission to write the Kubernetes metrics to the bucket you configured in the previous step.

  1. Attach this policy to the Kubernetes node role, or to the IAM role used by your CronJob, so the CronJob has the S3 access it needs:

    • Write to store exported metrics in your bucket.

    • Read to check its saved state in the bucket and know from which timestamp to continue.

    • Delete to remove files from the bucket older than the retention period (30 days by default, configurable).

     {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "FinoutBucketPermissions",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::<BUCKET_NAME>",
                "Condition": {
                    "StringEquals": {
                        "s3:delimiter": "/"
                    },
                    "StringLike": {
                        "s3:prefix": "k8s/prometheus*"
                    }
                }
            },
            {
                "Sid": "FinoutMetricFilesPermissions",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObject"
                ],
                "Resource": "arn:aws:s3:::<BUCKET_NAME>/k8s/prometheus/*"
            }
        ]
     }
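The policy above can be attached as an inline policy with the AWS CLI, for example (the role and policy names here are illustrative; save the JSON above as finout-s3-policy.json first):

```shell
# Attach the S3 policy above to the node role or the CronJob's IAM role.
# <NODE_OR_CRONJOB_ROLE> is a placeholder for your role name.
aws iam put-role-policy \
  --role-name <NODE_OR_CRONJOB_ROLE> \
  --policy-name FinoutExporterS3Access \
  --policy-document file://finout-s3-policy.json
```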
  2. Use kube-state-metrics version 2.0.0 or later. If your cluster runs the prometheus-kube-state-metrics deployment, add the flag below so kube-state-metrics exports all required labels to your Prometheus endpoint (you can adjust the pattern to match your setup):

    --metric-labels-allowlist=pods=[*],nodes=[*]

    Do this by adding an argument to the kube-state-metrics container, for example:

    spec:
      containers:
        - name: kube-state-metrics
          args:
            - --port=8080
            - --metric-labels-allowlist=pods=[*],nodes=[*]

  3. Click Next.
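To confirm the allowlist flag is in place, you can inspect the arguments of the kube-state-metrics container (the deployment name and namespace are assumptions; adjust them to your setup):

```shell
# Print the container args and check that --metric-labels-allowlist appears.
kubectl -n <NAMESPACE> get deploy kube-state-metrics \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```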

3. Create and Configure the CronJob

  1. Identify your authentication method: Select the method you use to access the Mimir API from the options below. After selecting it, you’ll add the corresponding environment variables and their values to the CronJob YAML in the next step.

    1. No authentication - For self-managed setups. No environment variable required.

    2. API Token authentication - A static token sent as a token header (with Content-Type: application/json) on Prometheus requests. Variable: PROMETHEUS_AUTH_TOKEN

    3. Bearer Token authentication - A bearer token placed in the Authorization: Bearer header on Prometheus calls. Variable: PROMETHEUS_BEARER_AUTH_TOKEN

    4. Username and Password authentication - Basic Auth credentials sent on Prometheus requests. Variables: PROMETHEUS_USERNAME and PROMETHEUS_PASSWORD

    5. Tenant ID - A tenant ID sent in the X-Scope-OrgID header, for multi-tenant setups. Variable: PROMETHEUS_X_SCOPE_ORGID
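As a sanity check outside the CronJob, you can exercise the same headers with curl against your Mimir endpoint (the hostname, port, and query are illustrative; the /prometheus path prefix is Mimir’s common default and should match your PATH_PREFIX setting):

```shell
# Bearer Token authentication:
curl -H "Authorization: Bearer <TOKEN>" \
  "https://<MIMIR_HOST>:9090/prometheus/api/v1/query?query=up"

# Tenant ID (multi-tenant) authentication:
curl -H "X-Scope-OrgID: <TENANT_ID>" \
  "https://<MIMIR_HOST>:9090/prometheus/api/v1/query?query=up"
```

A successful response returns JSON with "status":"success", confirming the endpoint and credentials the exporter will use.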

  2. Copy the CronJob configuration below to a file (for example, cronjob.yaml), and include the authentication environment variables for the method you selected in the previous step:

Example YAML:

apiVersion: batch/v1
kind: CronJob
metadata:
 name: finout-prometheus-exporter-job
spec:
 successfulJobsHistoryLimit: 1
 failedJobsHistoryLimit: 1
 concurrencyPolicy: Forbid
 schedule: "*/30 * * * *"
 jobTemplate:
   spec:
     template:
       spec:
         containers:
           - name: finout-prometheus-exporter
             image: finout/finout-metrics-exporter:1.34.2
             imagePullPolicy: Always
             env:
               - name: S3_BUCKET
                 value: "<BUCKET_NAME>"
               - name: S3_PREFIX
                 value: "k8s/prometheus"
               - name: CLUSTER_NAME
                 value: "<CLUSTER_NAME>"
               - name: HOSTNAME
                 value: "<PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local"
                - name: PORT
                  value: "9090"
                - name: SCHEME
                  value: "https"
                # Optional: set if the Prometheus API is served under a subpath
                # or behind a reverse proxy (for example, data/metrics).
                - name: PATH_PREFIX
                  value: "<PATH_PREFIX>"
                # The label in your metrics whose values identify the origin cluster.
                - name: CLUSTER_LABEL_NAME
                  value: "<CLUSTER_LABEL_NAME>"
         restartPolicy: OnFailure
  • This is an example of a CronJob that schedules a Job every 30 minutes.

  • The job queries Prometheus with a 5-second delay between queries so as not to overload your Prometheus stack.

  3. Modify the suggested YAML file above, if needed.

Mimir YAML Environment Variables:

| Category | Variable | Description | Required / Optional | Default Value | Notes |
| --- | --- | --- | --- | --- | --- |
| Scope & Multi-Cluster Behavior | CLUSTER_LABEL_NAME | The label whose values represent cluster names in your metrics, allowing Finout to identify and group metrics by cluster from the single metrics endpoint. | Required | cluster | The value is "cluster" by default. |
| Scope & Multi-Cluster Behavior | CLUSTER_NAME | Defines the folder where metrics are stored in S3. | Required | None | Cluster names are extracted directly from the metric data; however, this variable is still required and determines the folder under which all centralized metrics are temporarily stored. It is recommended to set it to the name of the cluster where the CronJob is deployed. |
| Endpoint & Connectivity | METRICS_READINESS_PATH | Custom readiness endpoint used by Finout’s exporter to verify the Mimir API is fully available before scraping begins. | Optional | /-/ready | Unique to Mimir. |
| Endpoint & Connectivity | HOSTNAME | The Prometheus-compatible API endpoint. | Optional | localhost | Must be reachable from the pod running the Finout exporter CronJob. |
| Endpoint & Connectivity | PORT | API port. | Optional | 9090 | Standard Prometheus port. |
| Endpoint & Connectivity | SCHEME | The protocol used when calling the Prometheus metrics endpoint: either https or http. | Optional | http | - |
| Endpoint & Connectivity | PATH_PREFIX | Optional sub-path between host/port and the Prometheus metrics API. | Optional | None | - |
| Storage & Paths | S3_BUCKET | Customer’s S3 bucket that stores the exported metrics. | Required | None | Must exist in the customer’s environment. |
| Storage & Paths | S3_PREFIX | S3 prefix where metrics will be stored. | Required | None | For example, k8s/prometheus. Must be the same prefix across multiple per-cluster integrations within the same cost center configuration. |
| Auth & Identity | ROLE_ARN | ARN of the IAM role to assume for authorization. | Optional | None | Used if assuming a role instead of direct access. |
| Auth & Identity | ROLE_EXTERNAL_ID | External ID for the assumed role. | Optional | None | Only needed if the IAM role requires an external ID. |
| Query Window & Data Volume | TIME_FRAME | Time range per query, in seconds. | Optional | 3600 | Lower values reduce query load and the risk of OOM. |
| Query Window & Data Volume | BACKFILL_DAYS | Days of historical data to fetch on the first run. | Optional | 3d | Large values increase load and the risk of slow queries. |

To configure these fields, add them to the configuration file under the env section and use the name/value format for each desired field.

For example:

env:
  - name: TIME_FRAME
    value: "3600"
  - name: BACKFILL_DAYS
    value: "3d"

Ensure that the field names and the corresponding values are correctly specified to apply the desired configuration.

  4. Run the command in a namespace of your choice, preferably the one where the Prometheus stack is deployed: kubectl create -f <filename>

  5. Trigger the job manually (instead of waiting for its first scheduled run): kubectl create job --from=cronjob/finout-prometheus-exporter-job finout-prometheus-exporter-job -n <namespace>

    The job automatically fetches Prometheus data from 3 days ago up to the current time.

    Note: This can be changed by modifying your BACKFILL_DAYS variable.
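For example, with the default backfill of 3 days, the first run’s query window starts roughly here (GNU date shown; BACKFILL_DAYS is set as "3d" in the YAML):

```shell
# Start of the initial backfill window: 3 days before now, in UTC.
date -u -d "3 days ago" +"%Y-%m-%dT%H:%M:%SZ"
```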

  6. Click Save. The Prometheus cost center is created.

4. Validate Your Kubernetes Integration

Confirm that your Prometheus Integration is working correctly:

S3 Validation

To confirm that Prometheus metrics are being exported correctly to your S3 bucket:

  1. Navigate to the S3 path, for example: s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/ You should see a list of metric folders, such as: metric=cpu_requests/

  2. Open a metric folder and verify that .json.gz files are uploaded. Each file should have a timestamp prefix, for example: s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/metric=cpu_requests/1759622400_cpu_requests.json.gz If these files appear, the CronJob ran successfully, and Prometheus metric files were generated and stored in the correct S3 structure.
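The numeric prefix on each file appears to be a Unix epoch timestamp in seconds (an assumption based on the example filename); you can decode it to see which time window a file covers:

```shell
# Decode the example file prefix 1759622400 to a UTC timestamp (GNU date).
date -u -d @1759622400 +"%Y-%m-%dT%H:%M:%SZ"
# 2025-10-05T00:00:00Z
```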

Data Availability

Kubernetes cost and usage data will appear across Finout within 48 hours, matching the standard cloud billing data delivery window.

Note: If any issues occur, share your exporter logs with Finout support at [email protected] for further investigation.

For more information, please see the FAQs and Troubleshooting section.
