Finout’s solution for managing container costs is completely agentless, reducing security risk and performance overhead while automatically identifying Kubernetes waste across any Kubernetes resource.
This topic guides you step by step through integrating your Kubernetes data from Prometheus into the Finout app.
The Finout agentless Kubernetes integration consists of three simple steps:
1. Configure an S3 bucket to which the Prometheus Kubernetes data will be exported. In this step, you also grant Finout read-only permissions to read your Prometheus files. It is highly recommended to use the same S3 bucket and permissions already set for your AWS Cost and Usage Report (CUR) files.
2. Add a policy to a K8s worker node role to grant the CronJob write permissions to your S3 bucket.
3. Create a CronJob to export the data from Prometheus to your S3 bucket.
Step 1: Connect an S3 Bucket
The S3 bucket stores the K8s data extracted from Prometheus. Finout has read-only permissions on this bucket.
If you have already connected an S3 bucket to Finout for the AWS Cost and Usage Report (CUR), you can use the same S3 bucket with the same ARN role already configured. The role and bucket information are automatically filled out in the Finout console. If you want to configure a different S3 bucket or haven’t set up one yet, go to Grant Finout Access to an S3 bucket.
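Whichever bucket you use, the exporter writes its files under the k8s/prometheus prefix, which is the prefix the policies in this guide must cover. A minimal sketch (the bucket name is hypothetical):

```shell
# Hypothetical bucket name — replace with your own, or reuse your CUR bucket.
BUCKET_NAME="my-finout-k8s-bucket"
# The exporter writes its objects under the k8s/prometheus prefix:
echo "s3://${BUCKET_NAME}/k8s/prometheus/"
```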
Step 2: Add the K8s Worker Node Role Policy
If you have more than one cluster, repeat steps 2 and 3 for all of your clusters.
Attach the following policy to the K8s worker node role. This enables the CronJob to write the K8s data from Prometheus into your S3 bucket.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<BUCKET_NAME>",
"Condition": {
"StringEquals": {
"s3:delimiter": "/"
},
"StringLike": {
"s3:prefix": "k8s/prometheus*"
}
}
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::<BUCKET_NAME>/k8s/prometheus/*"
}
]
}
Run a version of kube-state-metrics (most likely 2.0.0+). To fetch the data from the kube-state-metrics daemon set, include the following flag so that all pod and node labels are exported to Prometheus (or change the pattern to the one you require):
--metric-labels-allowlist=pods=[*],nodes=[*]
Do this by adding an arg to the kube-state-metrics container, for example:
...
spec:
  containers:
    - args:
        - --port=8080
        - --metric-labels-allowlist=pods=[*],nodes=[*]
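The node role policy from this step can also be attached with the AWS CLI. In the sketch below, the role name, bucket name, and policy name are placeholders, and the `aws iam put-role-policy` call is gated behind an environment variable so the script has no side effects unless you opt in:

```shell
# Placeholders — use your worker node role and bucket.
ROLE_NAME="my-k8s-worker-node-role"
BUCKET_NAME="my-finout-bucket"

# Write the node policy from Step 2 to a local file.
cat > /tmp/finout-node-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::${BUCKET_NAME}",
      "Condition": {
        "StringEquals": { "s3:delimiter": "/" },
        "StringLike": { "s3:prefix": "k8s/prometheus*" }
      }
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::${BUCKET_NAME}/k8s/prometheus/*"
    }
  ]
}
EOF

# Requires AWS credentials; set FINOUT_APPLY=1 (a guard used only in this
# sketch) to actually attach the inline policy to the role.
if [ -n "${FINOUT_APPLY:-}" ] && command -v aws >/dev/null 2>&1; then
  aws iam put-role-policy \
    --role-name "$ROLE_NAME" \
    --policy-name finout-k8s-export \
    --policy-document file:///tmp/finout-node-policy.json
fi
```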
Step 3: Create and Configure the CronJob
Copy the CronJob configuration below to a file (for example, cronjob.yaml). The example CronJob schedules a Job every 30 minutes. The Job queries Prometheus with a 5-second delay between queries so as not to overload your Prometheus stack.
Run the command in a namespace of your choice, preferably the one where the Prometheus stack is deployed:
kubectl create -f <filename>
To trigger the Job immediately (instead of waiting for its first scheduled run), run:
kubectl create job --from=cronjob/finout-prometheus-exporter-job finout-prometheus-exporter-job -n <namespace>
The job automatically fetches Prometheus data from 3 days ago up to the current time.
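To confirm the Job ran, you can inspect it with kubectl. The namespace below is hypothetical, and the commands require access to your cluster, so they are gated behind an environment variable in this sketch:

```shell
# Hypothetical namespace — use the one where you created the CronJob.
NAMESPACE="monitoring"

# Set KUBECTL_CHECK=1 (a guard used only in this sketch) to run the
# verification commands against your cluster.
if [ -n "${KUBECTL_CHECK:-}" ]; then
  kubectl get cronjob finout-prometheus-exporter-job -n "$NAMESPACE"
  kubectl get jobs -n "$NAMESPACE"
  kubectl logs job/finout-prometheus-exporter-job -n "$NAMESPACE"
fi
```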
CronJob Yaml Configuration Example
The K8s internal address for the Prometheus server must be structured as follows: <PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local
apiVersion: batch/v1
kind: CronJob
metadata:
  name: finout-prometheus-exporter-job
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: finout-prometheus-exporter
              image: finout/finout-metrics-exporter:1.11.0
              imagePullPolicy: Always
              env:
                - name: S3_BUCKET
                  value: "<BUCKET_NAME>"
                - name: S3_PREFIX
                  value: "k8s/prometheus"
                - name: CLUSTER_NAME
                  value: "<CLUSTER_NAME>"
                - name: HOSTNAME
                  value: "<PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local"
                - name: PORT
                  value: "9090"
          restartPolicy: OnFailure
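The HOSTNAME value follows the <PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local pattern noted above. For example, with a hypothetical prometheus-server service in a monitoring namespace (check yours with kubectl get svc --all-namespaces):

```shell
# Hypothetical service name and namespace — replace with your own.
PROMETHEUS_SERVICE="prometheus-server"
NAMESPACE="monitoring"
# Assemble the in-cluster DNS name used as the CronJob's HOSTNAME env var:
echo "${PROMETHEUS_SERVICE}.${NAMESPACE}.svc.cluster.local"
```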
After you complete this final step, the CronJob starts exporting data from Prometheus into your S3 bucket. After the integration, the Kubernetes cost data may take up to 24 hours to become available in your Finout account.
If this process fails, the Finout console provides guidance to help you resolve the problem; you can also review the Troubleshooting FAQs below to fix common errors.
Grant Finout Access to an S3 Bucket
If you want to export the Kubernetes data into a different S3 bucket than the one containing the AWS Cost and Usage Report (CUR) files, follow the steps below:
Go to your newly created role.
Click Add permissions and select Create inline policy.
Select JSON, and paste the following policy (replace <BUCKET_NAME> with the bucket you created or with your existing CUR bucket):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"tag:GetTagKeys"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
},
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": "arn:aws:s3:::<BUCKET_NAME>"
}
]
}
Click Next until the review screen appears, and name the policy finout-access-policy_K8S.
Click Create policy to create your policy for the IAM role.
Fill in all the details in the Finout console.
Troubleshooting FAQs
The following troubleshooting FAQs explain common integration issues and how to resolve them.
Finout could not assume role
If Finout cannot assume a role that was created on your AWS console, review the role created for Finout and verify the trust policy. Also verify that the external ID is the one provided by Finout.
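A cross-account trust policy for this kind of role typically looks like the following sketch; the Finout account ID and external ID placeholders are illustrative, and the actual values come from Finout:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<FINOUT_ACCOUNT_ID>:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<EXTERNAL_ID_FROM_FINOUT>" }
      }
    }
  ]
}
```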
Finout could not access the provided bucket
Review the relevant policy on the bucket according to the documentation and verify the region.
S3 bucket doesn't exist
Finout could not locate the provided S3 bucket. Make sure that the bucket provided exists.
Finout could not read from your S3 bucket
Make sure that the relevant read permissions for Finout were created. If the account that writes the Prometheus files into the S3 bucket is not the same account that created the ARN role for Finout, generate a new ARN role for Finout from the former account (that is, the account that writes into the S3 bucket) and send it to Finout, following the instructions below.
If you already created a trust policy for Finout when setting up your AWS connection, use the same trust policy. (For more details, see Grant Finout Access to an S3 Bucket.)
Use the following policy for your ARN role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"tag:GetTagKeys"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
},
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": "arn:aws:s3:::<BUCKET_NAME>"
}
]
}
The CronJob cannot find Prometheus host
Make sure the host and port are correct (use kubectl get service to check), and configure them as the HOSTNAME and PORT environment variables in the CronJob.
The CronJob has no write permissions to the S3 bucket
Apply the correct policy to the worker node role or the pod, as described in Step 2 above.