Creating Telemetry in Finout Using Prometheus Metrics

Overview

Leverage your existing Prometheus setup for telemetry creation. By exporting your custom Prometheus metrics to an S3 bucket and integrating them into Finout, you can enhance your cost allocation and financial analysis capabilities. There are two ways to create telemetry using Prometheus:

1) If you already have Finout’s Prometheus Metrics Exporter deployed, you can extend it to also collect and send custom telemetry data based on Prometheus metrics.

2) If you haven’t deployed Finout’s Prometheus Metrics Exporter, you’ll need to install it and set up the CronJob to collect the required Prometheus usage metrics.

Prerequisites

  • Prometheus: A Prometheus cost center configured in Finout, whether a per-cluster or a Centralized Prometheus Monitoring setup.

  • S3 Bucket: A designated S3 bucket where Prometheus metrics will be exported (a quick reachability check is sketched after this list).

    Note: It is recommended to use the same bucket that you use for CUR ingestion in Finout.

  • Finout Account: An active Finout account with API access.
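Before proceeding, you may want to confirm the bucket is reachable with your current credentials. This is a minimal sketch using the AWS CLI, assuming your credentials are already configured and <BUCKET_NAME> is the bucket designated above:

# Exits silently on success; errors if the bucket is missing or inaccessible
aws s3api head-bucket --bucket <BUCKET_NAME>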

Option 1: Create Telemetry Using an Existing Finout Metrics Exporter

If you already have Finout’s metrics exporter set up for collecting Prometheus usage metrics, you can easily extend its capabilities to handle custom telemetry data.

1. Update Your Existing Exporter’s YAML Configuration

Incorporate the new Prometheus metric queries into your existing CronJob’s YAML configuration file.

Important:

  • Adjust your existing CronJob YAML configuration: add an environment variable with the QUERY_ prefix to identify the metric query as custom telemetry data for Finout.

  • Make sure the metric values represent cumulative or incremental usage, as Finout aggregates the sum of all samples across the day.

  • If the metric is a gauge and a different aggregation is needed (average, count, max), please contact support.

Example Metric Query:

sum(increase(logstash_ingestion_byte_size_total[5m]))

This query example converts the byte counter into 5-minute increments and sums those increments. When Finout rolls the samples up by day, you get the total bytes ingested per tenant for that day.
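The sum(...) above collapses all series into a single total. If the counter carries a label that distinguishes tenants and you want one value per tenant, you could group by that label instead. A sketch, assuming a label named tenant exists on the metric (adjust to your own label names):

sum by (tenant) (increase(logstash_ingestion_byte_size_total[5m]))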

Example Cronjob YAML Update (for per-cluster setup):

This example applies to per-cluster setups. If you're using a Centralized Prometheus Monitoring tool setup, refer to the documentation for your specific monitoring tool for guidance.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: finout-prometheus-exporter-job
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: finout-prometheus-exporter
              image: finout/finout-metrics-exporter:1.34.2
              imagePullPolicy: Always
              env:
                - name: S3_BUCKET
                  value: "<BUCKET_NAME>"
                - name: S3_PREFIX
                  value: "k8s/prometheus"
                - name: CLUSTER_NAME
                  value: "<CLUSTER_NAME>"
                - name: HOSTNAME
                  value: "<PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local"
                - name: PORT
                  value: "9090"
                # The QUERY_ prefix marks this query as custom telemetry data for Finout
                - name: QUERY_logstash_ingestion
                  value: "sum(increase(logstash_ingestion_byte_size_total[5m]))"
          restartPolicy: OnFailure
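Once the configuration is updated, reapply it and confirm the CronJob is scheduled. A minimal sketch, assuming the file is saved as finout-prometheus-exporter.yaml and deployed to the current namespace:

# Apply the updated CronJob and verify it was created
kubectl apply -f finout-prometheus-exporter.yaml
kubectl get cronjob finout-prometheus-exporter-job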

2. Validate Your Kubernetes Integration

Confirm that your Prometheus integration is working correctly.

S3 Validation

To confirm that Prometheus metrics are being exported correctly to your S3 bucket:

  1. Navigate to the S3 path, for example: s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/

You should see a list of metric folders, such as: metric=cpu_requests/

  2. Open a metric folder and verify that .json.gz files were uploaded. Each file should have a timestamp prefix, for example: s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/metric=cpu_requests/1759622400_cpu_requests.json.gz

If these files appear, the CronJob ran successfully, and Prometheus metric files were generated and stored in the correct S3 structure.
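You can also perform this check from the command line. A sketch using the AWS CLI with the example path above:

# List the exported metric files for the given cluster and day
aws s3 ls s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/ --recursive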

Data Availability

Kubernetes cost and usage data will appear across Finout within 48 hours, matching the standard cloud billing data delivery window.

Note: If any issues occur, share your exporter logs with Finout support at [email protected] for further investigation.

3. Notify Finout Support

Once the telemetry data is available in your S3 bucket, notify Finout Support at [email protected] with your S3 bucket details and a sample file. For more information, please see the FAQs and Troubleshooting section.

Option 2: Create Telemetry when the Prometheus Metrics Exporter Isn’t Installed

If you don’t already have the Finout Metrics Exporter installed, deploy the dedicated CronJob to collect and export your Prometheus telemetry. This setup is intended for cases where you want to export Prometheus telemetry without using the full Kubernetes cost enrichment integration—the CronJob’s sole purpose is to handle telemetry.

If you also plan to use this CronJob for Kubernetes cost enrichment, follow the Kubernetes cost enrichment guide (see Overview) and use Option 1 for deployment.

1. Create a YAML Configuration

Create a YAML configuration file to specify the Prometheus metrics you want to export.

Important: Prefix custom metrics with QUERY_ in the YAML configuration to identify them as telemetry data for Finout.

  • Make sure the metric values already represent cumulative or incremental usage, as Finout aggregates the sum of all samples across the day.

  • If the metric is a gauge and a different aggregation is needed (average, count, max), please contact support.

Example Metric Query:

sum(increase(logstash_ingestion_byte_size_total[5m]))

This query converts the byte counter into 5-minute increments and sums those increments. When Finout rolls the samples up by day, you get the total bytes ingested per tenant for that day.
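Before adding a query to the CronJob, you may want to sanity-check it against the Prometheus HTTP API. A sketch, assuming you can reach the Prometheus service (for example, from a pod inside the cluster, or locally after kubectl port-forward), using the same placeholders as the YAML below:

# Run the query against the Prometheus HTTP API and inspect the result
curl -s 'http://<PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local:9090/api/v1/query' \
  --data-urlencode 'query=sum(increase(logstash_ingestion_byte_size_total[5m]))'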

Example YAML Configuration:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: finout-prometheus-exporter-job
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: finout-prometheus-exporter
              image: finout/finout-metrics-exporter:1.34.2
              imagePullPolicy: Always
              env:
                - name: S3_BUCKET
                  value: "<BUCKET_NAME>"
                - name: S3_PREFIX
                  value: "k8s/prometheus"
                - name: CLUSTER_NAME
                  value: "<CLUSTER_NAME>"
                - name: HOSTNAME
                  value: "<PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local"
                - name: PORT
                  value: "9090"
                # Telemetry-only setup, as described above
                - name: CUSTOM_QUERIES_MODE
                  value: "override"
                # The QUERY_ prefix marks this query as custom telemetry data for Finout
                - name: QUERY_logstash_ingestion
                  value: "sum(increase(logstash_ingestion_byte_size_total[5m]))"
          restartPolicy: OnFailure
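Rather than waiting for the next scheduled run, you can trigger the CronJob manually to test it. A sketch using standard kubectl commands (the job name finout-exporter-test-run is arbitrary):

# Create a one-off Job from the CronJob and check its logs
kubectl create job --from=cronjob/finout-prometheus-exporter-job finout-exporter-test-run
kubectl logs job/finout-exporter-test-run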

2. Schedule the CronJob

Set the CronJob’s schedule field so the Prometheus export runs at a regular interval (the example above runs every 30 minutes; once an hour is also common). Each run executes the queries and exports the resulting data to your S3 bucket.
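For example, to run once an hour instead of every 30 minutes, change the schedule field in the CronJob spec (standard cron syntax):

schedule: "0 * * * *"  # at the top of every hour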

3. Validate Your Kubernetes Integration

Confirm that your Prometheus integration is working correctly.

S3 Validation

To confirm that Prometheus metrics are being exported correctly to your S3 bucket:

  1. Navigate to the S3 path, for example: s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/

You should see a list of metric folders, such as: metric=cpu_requests/

  2. Open a metric folder and verify that .json.gz files were uploaded. Each file should have a timestamp prefix, for example: s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/metric=cpu_requests/1759622400_cpu_requests.json.gz

If these files appear, the CronJob ran successfully, and Prometheus metric files were generated and stored in the correct S3 structure.
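To inspect the contents of an exported file, you can stream it from S3 and decompress it locally. A sketch using the example file above:

# Download a metric file to stdout, decompress it, and preview the first lines
aws s3 cp s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/metric=cpu_requests/1759622400_cpu_requests.json.gz - | gunzip | head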

Data Availability

Kubernetes cost and usage data will appear across Finout within 48 hours, matching the standard cloud billing data delivery window.

Note: If any issues occur, share your exporter logs with Finout support at [email protected] for further investigation.

For more information, please see the FAQs and Troubleshooting section.

4.Notify Finout Support

Once the telemetry data is available in your S3 bucket, notify Finout Support at [email protected] with your S3 bucket details and a sample file.
