# Prometheus FAQs

These FAQs cover the most common questions about Finout’s Prometheus integrations across both [per-cluster](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/prometheus-per-cluster-integration) and [centralized](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/centralized-prometheus-monitoring-tool-integrations) Prometheus Monitoring tool setups. They explain how Prometheus metrics are collected and processed to support accurate Kubernetes cost attribution across your environments. If you need additional assistance, contact Finout at <support@finout.io>.

#### How do I grant Finout access to a dedicated S3 bucket for Kubernetes data?

If you’d like to export Kubernetes data to a dedicated S3 bucket, different from the one used for your AWS Cost and Usage Report (CUR), you can grant Finout access by creating an inline IAM policy:

1. Go to your newly created **IAM role**.

2. Click **Add permissions → Create inline policy**.

3. Select the **JSON** tab, and paste the following policy.\
   Replace `<BUCKET_NAME>` with the name of your dedicated S3 bucket (or use your existing CUR bucket):


   ```json
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": ["tag:GetTagKeys"],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": ["s3:Get*", "s3:List*"],
               "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
           },
           {
               "Effect": "Allow",
               "Action": ["s3:Get*", "s3:List*"],
               "Resource": "arn:aws:s3:::<BUCKET_NAME>"
           }
       ]
   }
   ```

4. Click **Next** until the **Review** screen appears.

5. Name the policy `finout-access-policy_K8S`.

6. Click **Create policy** to attach it to your IAM role.

7. In the Finout console, complete **Step 1 – Connect S3 bucket** using your dedicated bucket details.
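If you manage IAM with scripts rather than the console, the policy from step 3 can be templated per bucket. The sketch below is illustrative only (the helper name and bucket value are not part of Finout’s tooling); it renders the same policy document for any bucket name:

```python
import json

def finout_s3_policy(bucket_name: str) -> str:
    """Render the inline policy from step 3 for the given bucket name."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            # Lets Finout enumerate tag keys account-wide.
            {"Effect": "Allow", "Action": ["tag:GetTagKeys"], "Resource": "*"},
            # Read access to the objects inside the bucket...
            {"Effect": "Allow", "Action": ["s3:Get*", "s3:List*"],
             "Resource": f"arn:aws:s3:::{bucket_name}/*"},
            # ...and to the bucket itself (needed for ListBucket).
            {"Effect": "Allow", "Action": ["s3:Get*", "s3:List*"],
             "Resource": f"arn:aws:s3:::{bucket_name}"},
        ],
    }
    return json.dumps(policy, indent=4)

print(finout_s3_policy("my-finout-k8s-bucket"))
```

You could then save the output to a file and attach it to the role with `aws iam put-role-policy --role-name <YOUR_ROLE> --policy-name finout-access-policy_K8S --policy-document file://policy.json`.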

#### Do I need a separate Cost Center for every Kubernetes cluster?

No. If all your Kubernetes clusters (or CronJobs) use the same configuration and write their Prometheus metrics to the same S3 bucket and prefix, you can use a single Cost Center in Finout.

You only need a new Cost Center when clusters write to different S3 buckets or prefixes, or when your organization requires separate ownership boundaries—for example, if different departments or business units must export their Prometheus data to different buckets for security, compliance, or budgeting reasons.

#### How do I integrate Prometheus with Finout using Google GKE or Azure AKS?

Finout supports enriching **AWS (EKS), GCP (GKE), and Azure (AKS)** Kubernetes costs using Kubernetes metrics.\
However, this integration currently requires the Kubernetes metrics to be **uploaded to an AWS S3 bucket**.

**You have two options for uploading your metrics:**

1. **Use your own AWS S3 bucket**\
   Push your Kubernetes monitoring metrics (from GKE, AKS, or EKS) into an S3 bucket within your own infrastructure, then follow the standard integration steps.
2. **Use Finout’s AWS S3 bucket**\
   Push your Prometheus metrics to an S3 bucket hosted by Finout.\
   Once your metrics are in the bucket, contact Finout Support at <support@finout.io> to complete the onboarding and enrichment setup for your cluster.

#### Where is the `finout/finout-metrics-exporter` image hosted?

The `finout/finout-metrics-exporter` image is hosted on [Docker Hub](https://hub.docker.com/r/finout/finout-metrics-exporter).

#### What should I do if I'm using both Prometheus per-cluster and Centralized Prometheus Monitoring tools (e.g., Thanos, Mimir, VictoriaMetrics) in my account?

Finout supports both per-cluster Prometheus and centralized (aggregated) Prometheus Monitoring tool setups.

To support both configurations:

1. Create two Kubernetes Prometheus cost centers and deploy two cronjobs, one for each type, per the instructions above. Contact support at <support@finout.io> for help.
2. Configure the exporters to write to separate S3 paths using:
   1. `S3_BUCKET`
   2. `S3_PREFIX`
3. Update security policies to allow access to all relevant buckets, or to all relevant prefixes if using a single bucket.
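For example, the per-cluster and centralized CronJobs can be pointed at separate destinations through their environment variables. This is a sketch of the `env` section for one exporter CronJob; the bucket and prefix values are placeholders, not defaults:

```yaml
# Illustrative only - substitute your own bucket and prefixes.
env:
  - name: S3_BUCKET
    value: my-finout-metrics
  - name: S3_PREFIX
    value: prometheus/per-cluster   # e.g. prometheus/centralized for the other CronJob
```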

#### Do I need to add the Worker Node Role policy for every Kubernetes cluster?

Yes. You must add the **Kubernetes Worker Node Role Policy** to each cluster you integrate with Finout. If you have multiple clusters, repeat **Steps 2 and 3** for each one to ensure all clusters are correctly connected and sending data to Finout.

#### How does Finout handle Sidecar InitContainers in cost allocation?

Finout Metrics Exporter collects CPU and memory usage and request metrics from Sidecar InitContainers when calculating Kubernetes costs, ensuring workload costs accurately reflect their full resource utilization.

#### Do short-lived initContainers affect cost calculations?

Short-lived initContainers have a negligible impact on cost: their brief runtime and small resource usage mean they have little or no effect on overall cost attribution.

#### When will I see Sidecar InitContainer costs in Finout after enabling the integration?

After the Prometheus CronJob begins exporting metrics to your S3 bucket, Kubernetes cost data generally appears in Finout within 24 hours, as part of the same end-to-end process used for calculating Kubernetes costs. Updated cost attribution for Sidecar InitContainers may take up to 2 days to fully populate. This enhanced behavior is available starting from Finout Metrics Exporter version 1.30.

#### How can I reduce AMP costs when using Finout for Kubernetes cost allocation?

To reduce AMP costs, the best practice is to have AMP scrape **only the metrics required for Finout’s Kubernetes cost calculation**, assuming this is your only monitoring use case. These metrics must be scraped at a **one-minute sampling interval**, as increasing the interval will reduce cost-allocation accuracy. The required metrics are:

```
kube_node_info
container_cpu_usage_seconds_total
container_memory_working_set_bytes
kube_node_status_capacity{resource="cpu"}
kube_node_status_capacity{resource="memory"}
kube_pod_container_resource_requests{resource="cpu"}
kube_pod_init_container_resource_requests{resource="cpu"}
kube_pod_container_resource_requests{resource="memory"}
kube_pod_init_container_resource_requests{resource="memory"}
container_network_receive_bytes_total
container_network_transmit_bytes_total
kube_pod_labels
kube_node_labels
kube_namespace_labels
kube_pod_info
kube_replicaset_owner
kube_job_owner
```
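One way to restrict scraping to this list is a `keep` relabel rule in your scrape configuration. The sketch below uses standard Prometheus `metric_relabel_configs`, which AMP-compatible collectors also accept; adapt the job placement to however your collector is configured:

```yaml
# Keep only the metrics Finout needs; everything else is dropped before
# it is written to AMP. Applies per scrape job - repeat where relevant.
metric_relabel_configs:
  - source_labels: [__name__]
    action: keep
    regex: kube_node_info|kube_pod_info|kube_pod_labels|kube_node_labels|kube_namespace_labels|kube_replicaset_owner|kube_job_owner|kube_node_status_capacity|kube_pod_container_resource_requests|kube_pod_init_container_resource_requests|container_cpu_usage_seconds_total|container_memory_working_set_bytes|container_network_receive_bytes_total|container_network_transmit_bytes_total
```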

#### My Kubernetes costs are missing for past periods - how do I recover them?

Kubernetes data gaps occur if the Finout CronJob was inactive during past timeframes. You can resolve this by performing a backfill, which retrieves your cluster's historical Prometheus metrics so that your past spending and reports are complete.

To get started, reach out to Finout Support at <support@finout.io> with the following details:

* The time period you'd like to backfill (start date – end date)
* Your current CronJob YAML configuration
* The cluster(s) you need to be backfilled

Once our team prepares the configuration, you'll be asked to run a one-time Job in your relevant Kubernetes clusters. Finout will provide the exact YAML configuration.

After the Job completes, data will be available in Finout within approximately 2 days.

Before you request a backfill, note the following limitations:

* Multiple clusters must be backfilled separately — each cluster requires its own Job.
* Your CronJob image must be version **2.0.2 or above**.

For more information, please see the [Troubleshooting](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/prometheus-troubleshooting) section.
