Connect to Kubernetes Prometheus

Finout’s solution for managing container costs is completely agentless, which reduces security risks and performance overhead, and it automatically identifies waste across any Kubernetes resource.

This guide walks you step by step through integrating your Kubernetes data from Prometheus into the Finout app.

The Finout agentless Kubernetes integration consists of three simple steps:

  1. Configure an S3 bucket to which the Prometheus Kubernetes data will be exported, and grant Finout read-only permissions to read your Prometheus files.


    Note: It is highly recommended to use the same S3 bucket and permissions that are set for your AWS Cost and Usage Report (CUR) files.

  2. Add a policy to the K8s worker node role to grant the CronJob write permissions to your S3 bucket.

  3. Create a CronJob to export the data from Prometheus to your S3 bucket.

Step 1: Connect an S3 Bucket

The S3 bucket stores the K8s data extracted from Prometheus. Finout has read-only permissions to this bucket.

Important: If you have already connected an S3 bucket to Finout for the AWS Cost and Usage Report (CUR), you can use the same S3 bucket with the same ARN role already configured. The role and bucket information are automatically filled out in the Finout console. If you want to configure a different S3 bucket or haven’t set up one yet, go to Grant Finout Access to an S3 bucket.

Step 2: Add the K8s Worker Node Role Policy

If you have more than one cluster, repeat steps 2 and 3 for all of your clusters.

  1. Attach the following policy to the K8s worker node role. This enables the CronJob to write the K8s data from Prometheus into your S3 bucket.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::<BUCKET_NAME>",
          "Condition": {
            "StringEquals": {
              "s3:delimiter": "/"
            },
            "StringLike": {
              "s3:prefix": "k8s/prometheus*"
            }
          }
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:DeleteObject"
          ],
          "Resource": "arn:aws:s3:::<BUCKET_NAME>/k8s/prometheus/*"
        }
      ]
    }
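One way to attach this as an inline policy is with the AWS CLI. The sketch below uses a trimmed, write-only subset of the policy above purely for illustration; the file, role, and policy names are placeholders, and the aws call is commented out because it needs credentials for your account:

```shell
# Save a minimal write policy to a file (placeholders throughout).
cat > finout-exporter-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FinoutExporterWrite",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/k8s/prometheus/*"
    }
  ]
}
EOF

# Fail fast on JSON typos before touching IAM:
python3 -m json.tool finout-exporter-policy.json > /dev/null && echo "policy JSON is valid"

# Then attach it to the worker node role (requires AWS credentials):
# aws iam put-role-policy --role-name <WORKER_NODE_ROLE> \
#   --policy-name finout-exporter-s3-write \
#   --policy-document file://finout-exporter-policy.json
```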

  2. Run kube-state-metrics version 2.0.0 or later. To fetch data from the Prometheus kube-state-metrics daemon set, include the following flag to export all labels to Prometheus (or change the pattern to your required pattern):
    --metric-labels-allowlist=pods=[*],nodes=[*]

    Do this by adding an arg to the kube-state-metrics container, for example:

    spec:
      containers:
        - args:
            - --port=8080
            - --metric-labels-allowlist=pods=[*],nodes=[*]
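If you installed kube-state-metrics via the prometheus-community Helm chart, the same flag can usually be set through chart values instead of editing the container spec directly. The values key below is an assumption that varies between chart versions, so verify it against your chart's documentation:

```yaml
# values.yaml fragment for the kube-state-metrics chart
# (extraArgs key is an assumption; check your chart version).
extraArgs:
  - --metric-labels-allowlist=pods=[*],nodes=[*]
```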

Step 3: Create and Configure the CronJob

  1. Copy the CronJob configuration below to a file (for example, cronjob.yaml). This example CronJob schedules a Job every 30 minutes; the Job queries Prometheus with a 5-second delay between queries so as not to overload your Prometheus stack.

  2. Run the command in a namespace of your choice, preferably the one where the Prometheus stack is deployed:
    kubectl create -f <filename>

  3. Trigger the job (instead of waiting for it to start):

    kubectl create job --from=cronjob/finout-prometheus-exporter-job finout-prometheus-exporter-job -n <namespace>

    The job automatically fetches Prometheus data from 3 days ago up to the current time.
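To confirm the one-off run behaves as expected, you can follow its logs and compute the backfill window it should cover. The namespace is a placeholder, the kubectl call is commented out because it needs cluster access, and GNU date syntax is assumed:

```shell
# Follow the triggered job's logs (namespace is a placeholder):
# kubectl -n <namespace> logs -f job/finout-prometheus-exporter-job

# The first run backfills roughly 3 days; compute the expected window (UTC):
START=$(date -u -d "3 days ago" +%Y-%m-%dT%H:%M:%SZ)
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
echo "expected export window: ${START} .. ${END}"
```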

CronJob YAML Configuration Example

The K8s internal address for the Prometheus server must be structured as follows: <PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local


apiVersion: batch/v1
kind: CronJob
metadata:
  name: finout-prometheus-exporter-job
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: finout-prometheus-exporter
              image: finout/finout-metrics-exporter:1.21.0
              imagePullPolicy: Always
              env:
                - name: S3_BUCKET
                  value: "<BUCKET_NAME>"
                - name: S3_PREFIX
                  value: "k8s/prometheus"
                - name: CLUSTER_NAME
                  value: "<CLUSTER_NAME>"
                - name: HOSTNAME
                  value: "<PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local"
                - name: PORT
                  value: "9090"
          restartPolicy: OnFailure
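Before the first scheduled run, it can help to sanity-check the HOSTNAME and PORT values by building the URL the exporter will use and probing it from inside the cluster. The service name, namespace, and port below are placeholders, and the kubectl probe is commented out because it needs cluster access:

```shell
# Placeholders -- substitute your actual service, namespace, and port.
PROMETHEUS_SERVICE="prometheus-server"
NAMESPACE="monitoring"
PORT="9090"
PROM_URL="http://${PROMETHEUS_SERVICE}.${NAMESPACE}.svc.cluster.local:${PORT}"
echo "${PROM_URL}"

# Probe Prometheus's health endpoint from a throwaway pod inside the cluster:
# kubectl -n "${NAMESPACE}" run curl-test --rm -it --restart=Never \
#   --image=curlimages/curl -- curl -s "${PROM_URL}/-/healthy"
```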

After the final step is configured, the CronJob starts exporting data from Prometheus into your S3 bucket. Note that after the integration, the Kubernetes cost data may take up to 24 hours to become available in your Finout account.

If this process fails, the Finout console provides guidance to help resolve the problem; you can also review the Troubleshooting FAQs below to fix common errors.

Grant Finout Access to an S3 Bucket

If you want to export the Kubernetes data into a different S3 Bucket than the one with the AWS Cost and Usage Report (CUR) files, follow the steps below:

  1. Go to your newly created role.

  2. Click Add permissions and select Create inline policy.

  3. Select JSON, and paste the following (replace <BUCKET_NAME> with the bucket you created or with your existing CUR bucket):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "tag:GetTagKeys"
          ],
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Resource": "arn:aws:s3:::<BUCKET_NAME>"
        }
      ]
    }

  4. Click Next until the review screen appears, and name the policy finest-access-policy_K8S.

  5. Click Create policy to create your policy for the IAM role.

  6. Fill in all the details in the Finout console.

Troubleshooting FAQs

The following troubleshooting FAQs explain common integration issues and how you can easily resolve them.

  • Finout could not assume the role

    If Finout cannot assume a role that was created on your AWS console, review the role created for Finout and verify the trust policy. Also, verify that the external ID is the one provided by Finout.
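For reference, a cross-account trust policy with an external ID generally has the shape below. The account ID and external ID are placeholders supplied by Finout, so check the exact values in your Finout console rather than copying this verbatim:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<FINOUT_ACCOUNT_ID>:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<EXTERNAL_ID_FROM_FINOUT>" }
      }
    }
  ]
}
```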

  • Finout could not access the provided bucket

    Review the relevant policy on the bucket according to the documentation and verify the region.

  • S3 bucket doesn't exist

    Finout could not locate the provided S3 bucket. Make sure that the bucket provided exists.

  • Finout could not read from your S3 bucket

    Make sure that the relevant read permissions for Finout were created. If the account that writes the Prometheus files into the S3 bucket is not the same account that created the ARN role for Finout, generate a new ARN role for Finout from the former account (that is, the account that writes into the S3 bucket) and send it to Finout, following the instructions below.

    If you already created a trust policy for Finout when setting up your AWS connection, you should use the same trust policy (Read more about how to Grant Finout access to your S3 bucket).

    Use the following policy for your ARN role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "tag:GetTagKeys"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>"
    }
  ]
}

  • Ensure Kubernetes Monitoring Compatibility with Finout and Prometheus

    To ensure your Kubernetes monitoring is compatible with Finout, you need a Prometheus-compatible system (such as VictoriaMetrics, Thanos, Cortex, M3) and a functioning Kubernetes cluster for deploying the Finout cronjob. Follow these integration steps:

    1. Validate that your system exports the correct metrics, including memory, CPU, network usage, and node info.

    2. Deploy the Finout cronjob in your Kubernetes cluster.

    3. Configure connectivity with the necessary environment variables.

    4. Ensure Finout has the required permissions to access your metrics export.

    For detailed instructions and requirements, refer to the Finout integration documentation.

  • The CronJob cannot find the Prometheus host

    Make sure the host and port are correct (use kubectl get service) and configure them as the HOSTNAME and PORT environment variables in the CronJob.

  • The CronJob has no write permissions to the S3 bucket

    Apply the correct policy on the worker node or the pod, as described in Step 2 above.

  • Missing memory metrics in Finout

    After updating or adjusting Prometheus configurations, memory metrics may no longer appear in Finout dashboards.

    Before making changes, review the best practices for configuring and reloading Prometheus in the official Prometheus documentation.

    Solution

    1. Verify the recording rule: Check that the "node_namespace_pod_container:container_memory_working_set_bytes" recording rule is correctly set up in your Prometheus environment.

    2. Add or update the rule: If the rule is absent or improperly configured, add or amend it within your Prometheus configuration settings.

    3. Reload the configuration: Implement the new or modified rule by reloading Prometheus’s configuration.

    Steps

    • Access the configuration: Open the Prometheus configuration, either directly through the file or via the UI.

    • Edit the rules: In the recording rules section, verify the presence of "node_namespace_pod_container:container_memory_working_set_bytes". If it’s not there, insert or correct it.

    • Apply the changes: Reload or restart Prometheus to apply and activate the rule.
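For reference, the rule in question comes from the standard kube-prometheus rule set; a representative definition looks roughly like the following. The exact expression varies between kube-prometheus versions, so prefer the version shipped with your stack over this sketch:

```yaml
groups:
  - name: k8s.rules
    rules:
      - record: node_namespace_pod_container:container_memory_working_set_bytes
        expr: |
          container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
          * on (namespace, pod) group_left(node) topk by (namespace, pod) (1,
            max by (namespace, pod, node) (kube_pod_info{node!=""})
          )
```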

    Once the rule is active and Prometheus has evaluated it, the memory metrics should reappear in Finout.

  • Finout exporter timeout with large Kubernetes clusters

    When all clusters share the same Prometheus disk space settings, larger clusters can quickly use up disk space due to their higher metrics output, limiting data retention to a few hours. This can lead to the Finout exporter timing out, particularly when fetching the previous day's metrics. To address this, adjust the metric retention settings according to each cluster's size and output, ensuring sufficient disk space for longer data retention.
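Retention is controlled per Prometheus instance through its standard storage flags; for example, a busy cluster might need its limits raised. The values below are illustrative, not recommendations:

```
--storage.tsdb.retention.time=3d
--storage.tsdb.retention.size=50GB
```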

  • Monitoring various Kubernetes clusters using distinct tools within a single AWS environment for the Finout integration

    When integrating with Finout, you can effectively monitor different Kubernetes clusters using specific monitoring tools, such as Datadog and Prometheus, all within the same AWS environment. This method supports the development of customized monitoring strategies that cater to the particular needs and budgetary considerations of each cluster. For example, a cluster dedicated to customer-facing applications might benefit from the sophisticated capabilities of Datadog, while a cluster utilized for internal operations might find Prometheus to be a more budget-friendly option.

    To ensure successful integration with Finout in such a setup:

    • Verify that each cluster operates independently without shared node resources.

    • Configure each cluster's monitoring tool properly to ensure seamless data capture and integration.

  • Missing metrics and deployment visibility

    Deployments may not appear in the monitoring dashboard if specific metrics are missing due to retention policies or configurations not capturing them. It's crucial to ensure all essential metrics are retained. Reviewing and adjusting the configuration to capture and retain these metrics can improve visibility and ensure comprehensive monitoring.

  • Metrics exporter ran out of memory

    The metrics exporter may hit out-of-memory (OOM) errors when it has insufficient resources, especially when dealing with a lot of data, which can prevent it from completing. Allocating more memory to the exporter container, sized to its data workload, resolves this and lets the exporter run reliably.
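To give the exporter more headroom, you can add a resources section to the CronJob container spec. The values below are illustrative starting points, not Finout recommendations; size them to your cluster's metric volume:

```yaml
# Add under the finout-prometheus-exporter container; sizes are examples only.
resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"
```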

    Still need help? Please feel free to reach out to our team at [email protected].
