FAQs

These FAQs cover the most common questions about Finout’s Prometheus integrations, for both per-cluster and centralized Prometheus monitoring setups. They explain how Prometheus metrics are collected and processed to support accurate Kubernetes cost attribution across your environments. If you need additional assistance, contact Finout at [email protected].

How do I grant Finout access to a dedicated S3 bucket for Kubernetes data?

If you’d like to export Kubernetes data to a dedicated S3 bucket, different from the one used for your AWS Cost and Usage Report (CUR), you can grant Finout access by creating an inline IAM policy:

  1. Go to your newly created IAM role.

  2. Click Add permissions → Create inline policy.

  3. Select the JSON tab, and paste the following policy. Replace <BUCKET_NAME> with the name of your dedicated S3 bucket (or use your existing CUR bucket):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["tag:GetTagKeys"],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": ["s3:Get*", "s3:List*"],
                "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
            },
            {
                "Effect": "Allow",
                "Action": ["s3:Get*", "s3:List*"],
                "Resource": "arn:aws:s3:::<BUCKET_NAME>"
            }
        ]
    }

  4. Click Next until the Review screen appears.

  5. Name the policy finout-access-policy_K8S.

  6. Click Create policy to attach it to your IAM role.

  7. In the Finout console, complete Step 1 – Connect S3 bucket using your dedicated bucket details.
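
If you prefer to script this instead of using the console, the same inline policy can be attached with the AWS CLI. This is a minimal sketch: the role name and the local policy file name are placeholders, and the policy document is the JSON from step 3.

    # Save the policy JSON from step 3 as finout-k8s-policy.json, then attach it
    # as an inline policy to your IAM role (replace <FINOUT_IAM_ROLE> with your role name)
    aws iam put-role-policy \
      --role-name <FINOUT_IAM_ROLE> \
      --policy-name finout-access-policy_K8S \
      --policy-document file://finout-k8s-policy.json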

Do I need a separate Cost Center for every Kubernetes cluster?

No. If all your Kubernetes clusters (or CronJobs) use the same configuration and write their Prometheus metrics to the same S3 bucket and prefix, you can use a single Cost Center in Finout.

You only need a new Cost Center when clusters write to different S3 buckets or prefixes, or when your organization requires separate ownership boundaries—for example, if different departments or business units must export their Prometheus data to different buckets for security, compliance, or budgeting reasons.

Can different Kubernetes clusters in the same cloud environment use different monitoring tools when integrated with Finout?

Yes. When integrating with Finout, you can use different monitoring tools (such as Datadog, Prometheus, or other supported solutions) across Kubernetes clusters within the same cloud environment (e.g., AWS, Azure, GCP, or Oracle Cloud). This lets you tailor your monitoring strategy to the operational and financial needs of each cluster. For instance, a customer-facing production cluster might use Datadog for advanced observability, while an internal development cluster could run Prometheus as a cost-effective alternative.

To ensure a successful integration with Finout in this setup:

  1. Verify that each cluster operates independently and does not share node resources.

  2. Configure each monitoring tool correctly to enable accurate data collection and ingestion into Finout.

How do I integrate Prometheus with Finout using Google GKE or Azure AKS?

Finout supports enriching AWS (EKS), GCP (GKE), and Azure (AKS) Kubernetes costs using Kubernetes metrics. However, this integration currently requires the Kubernetes metrics to be uploaded to an AWS S3 bucket.

You have two options for uploading your metrics:

  1. Use your own AWS S3 bucket: Push your Kubernetes monitoring metrics (from GKE, AKS, or EKS) into an S3 bucket within your own infrastructure, then follow the standard integration steps.

  2. Use Finout’s AWS S3 bucket: Push your Prometheus metrics to an S3 bucket hosted by Finout. Once your metrics are in the bucket, contact Finout Support at [email protected] to complete the onboarding and enrichment setup for your cluster.
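
With either option, you can verify that metric files are actually landing in the bucket before completing the setup. A minimal check with the AWS CLI; the bucket name and prefix are placeholders for your own values:

    # List the exported metric files under the bucket and prefix the exporter writes to
    aws s3 ls s3://<BUCKET_NAME>/<S3_PREFIX>/ --recursive --human-readable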

Where is the finout/finout-metrics-exporter image hosted?

The finout/finout-metrics-exporter image is hosted on Docker Hub.
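
For example, you can pull the image directly from Docker Hub; the tag below is a placeholder for the version you deploy:

    docker pull finout/finout-metrics-exporter:<TAG>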

What should I do if I'm using both per-cluster Prometheus and a centralized Prometheus monitoring tool (e.g., Thanos, Mimir, VictoriaMetrics) in my account?

Finout supports both per-cluster Prometheus and centralized (aggregated) Prometheus monitoring setups.

To support both configurations:

  1. Create two Kubernetes Prometheus Cost Centers and deploy two CronJobs, one for each setup type, following the instructions above. Contact support at [email protected] if you need help.

  2. Configure the exporters to write to separate S3 paths (see the sketch after this list) using:

    1. S3_BUCKET

    2. S3_PREFIX

  3. Update your security policies to allow access to all relevant buckets, or to all relevant prefixes if you use a single bucket.
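
As a sketch of step 2, each exporter is pointed at its own path through the S3_BUCKET and S3_PREFIX settings. The prefix values below are only illustrative, and how you set them (for example, as CronJob environment variables or Helm values) depends on how you deployed the exporters:

    # Per-cluster Prometheus exporter (illustrative prefix)
    S3_BUCKET=<BUCKET_NAME>
    S3_PREFIX=finout/per-cluster

    # Centralized Prometheus exporter, e.g., Thanos, Mimir, or VictoriaMetrics (illustrative prefix)
    S3_BUCKET=<BUCKET_NAME>
    S3_PREFIX=finout/centralized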

Do I need to add the Worker Node Role policy for every Kubernetes cluster?

Yes. You must add the Kubernetes Worker Node Role Policy to each cluster you integrate with Finout. If you have multiple clusters, repeat Steps 2 and 3 for each one to ensure all clusters are correctly connected and sending data to Finout.
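
If you manage IAM from the CLI, repeating the attachment for each cluster is one command per worker node role. This is only a sketch and assumes the Worker Node Role Policy exists as a customer-managed policy in your account; the role names, account ID, and policy name are placeholders:

    # Attach the policy to each cluster's worker node role
    aws iam attach-role-policy \
      --role-name <CLUSTER_1_WORKER_NODE_ROLE> \
      --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/<WORKER_NODE_POLICY_NAME>

    aws iam attach-role-policy \
      --role-name <CLUSTER_2_WORKER_NODE_ROLE> \
      --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/<WORKER_NODE_POLICY_NAME>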

How does Finout handle Sidecar InitContainers in cost allocation?

The Finout Metrics Exporter collects CPU and memory usage and request metrics from Sidecar InitContainers when calculating Kubernetes costs, ensuring that workload costs reflect their full resource utilization.

Do short-lived initContainers affect cost calculations?

Only negligibly. Short-lived initContainers run briefly and consume few resources, so they have little or no effect on overall cost attribution.

When will I see Sidecar InitContainer costs in Finout after enabling the integration?

After the Prometheus CronJob begins exporting metrics to your S3 bucket, Kubernetes cost data generally appears in Finout within 24 hours, as part of the same end-to-end process used for calculating Kubernetes costs. Updated cost attribution for Sidecar InitContainers may take up to 2 days to fully populate. This enhanced behavior is available starting from Finout Metrics Exporter version 1.30.

For more information, please see the Troubleshooting section.
