# Creating Telemetry in Finout Using Prometheus Metrics

## Overview

Leverage your existing Prometheus setup to create telemetry. By exporting your custom Prometheus metrics to an S3 bucket and integrating them into Finout, you can enhance your cost allocation and financial analysis capabilities.

**There are two ways to create telemetry using Prometheus:**

1. [If you already have Finout’s Prometheus Metrics Exporter deployed](#option-1-create-telemetry-using-an-existing-finout-metrics-exporter), you can extend it to collect and send custom telemetry data based on Prometheus metrics.
2. [If you haven’t deployed Finout’s Prometheus Metrics Exporter](#option-2-create-telemetry-when-the-prometheus-metrics-exporter-isnt-installed), you’ll need to install it and set up the cron job to collect the required Prometheus usage metrics.

#### Prerequisites

* **Prometheus**: A Prometheus cost center configured in Finout, whether it’s a [per-cluster](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/prometheus-per-cluster-integration) or a [Centralized Prometheus Monitoring](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/centralized-prometheus-monitoring-tool-integrations) setup.
* **S3 Bucket**: A designated S3 bucket where Prometheus metrics will be exported.

  <div data-gb-custom-block data-tag="hint" data-style="info" class="hint hint-info"><p><strong>Note</strong>: It is recommended to use the same bucket used for CUR ingestion in Finout.</p></div>
* **Finout Account**: An active Finout account with API access.

## **Option 1: Create Telemetry Using an Existing Finout Metrics Exporter**

If you already have Finout’s metrics exporter set up for collecting Prometheus usage metrics, you can easily extend its capabilities to handle custom telemetry data.

**1. Update Your Existing Exporter’s YAML Configuration**

Incorporate the Prometheus metrics into your existing cronjob’s YAML configuration file.

{% hint style="info" %}
**Important**:

* Adjust your existing cronjob YAML configuration: add an environment variable with the `QUERY_` prefix to identify the metric query as custom telemetry data for Finout.
* Make sure the metric values represent cumulative or incremental usage, as Finout aggregates the sum of all samples across the day.
* If the metric is a gauge and a different aggregation is needed (average, count, max), please contact support.
{% endhint %}
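To illustrate the naming convention, the sketch below shows the general pattern of reading `QUERY_`-prefixed environment variables and mapping each to its PromQL expression. This is purely illustrative; the exporter's actual internal logic is not documented here, and the function name is an assumption.

```python
import os

def collect_custom_queries(environ=os.environ):
    """Illustrative sketch: map QUERY_-prefixed env vars to PromQL queries.

    The key is the metric name (the part after the prefix); the value is
    the PromQL expression stored in the variable.
    """
    prefix = "QUERY_"
    return {
        name[len(prefix):]: query
        for name, query in environ.items()
        if name.startswith(prefix)
    }

# Example: mirrors the env section of the CronJob configuration below.
env = {
    "S3_BUCKET": "my-bucket",
    "QUERY_logstash_ingestion": "sum(increase(logstash_ingestion_byte_size_total[5m]))",
}
print(collect_custom_queries(env))
# {'logstash_ingestion': 'sum(increase(logstash_ingestion_byte_size_total[5m]))'}
```

The metric name Finout sees is derived from the suffix after `QUERY_`, so choose descriptive suffixes such as `logstash_ingestion`.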

**Example Metric Query:**

`sum(increase(logstash_ingestion_byte_size_total[5m]))`

This query example converts the byte counter into 5-minute increments and sums those increments.\
When Finout rolls the samples up by day, you get the total bytes ingested per tenant for that day.
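The daily roll-up can be sketched with a small arithmetic example. The counter readings below are made-up values; the point is that summing per-interval increments recovers the counter's net growth over the day.

```python
# Hypothetical cumulative counter readings taken every 5 minutes (bytes ingested).
counter_samples = [0, 1_200, 2_900, 4_000, 7_500]

# increase(...[5m]) approximates the per-interval delta of the counter.
increments = [b - a for a, b in zip(counter_samples, counter_samples[1:])]
print(increments)    # [1200, 1700, 1100, 3500]

# Finout sums all samples across the day, recovering the total bytes ingested.
daily_total = sum(increments)
print(daily_total)   # 7500, the counter's net growth over the window
```

This is why the metric must be cumulative or incremental: summing samples of a plain gauge (for example, current queue depth) would not produce a meaningful daily total.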

**Example Cronjob YAML Update (for per-cluster setup):**

This example applies to per-cluster setups. If you're using a Centralized Prometheus Monitoring tool setup, refer to the documentation for your specific monitoring tool for [guidance](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/centralized-prometheus-monitoring-tool-integrations).

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: finout-prometheus-exporter-job
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: finout-prometheus-exporter
              image: finout/finout-metrics-exporter:2.0.3
              imagePullPolicy: Always
              env:
                - name: S3_BUCKET
                  value: "<BUCKET_NAME>"
                - name: S3_PREFIX
                  value: "k8s/prometheus"
                - name: CLUSTER_NAME
                  value: "<CLUSTER_NAME>"
                - name: HOSTNAME
                  value: "<PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local"
                - name: PORT
                  value: "9090"
                - name: QUERY_logstash_ingestion
                  value: "sum(increase(logstash_ingestion_byte_size_total[5m]))"
          restartPolicy: OnFailure
```

**2. Validate Your Kubernetes Integration**

Confirm that your Prometheus integration is working correctly:

**S3 Validation**

To confirm that Prometheus metrics are being exported correctly to your S3 bucket:

1. **Navigate to the S3 path**, for example:\
   `s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/`

You should see a list of metric folders, such as: `metric=cpu_requests/`

2. **Open a metric folder** and verify that `.json.gz` files were uploaded. Each file should have a timestamp prefix, for example:\
   `s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/metric=cpu_requests/1759622400_cpu_requests.json.gz`

If these files appear, the CronJob ran successfully, and Prometheus metric files were generated and stored in the correct S3 structure.
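If you prefer to script this check, the snippet below parses the expected key layout from the example above. The regular expression is an illustrative sketch of the path structure shown in this guide, not an official specification; adjust the bucket and prefix to your own setup.

```python
import re
from datetime import datetime, timezone

# Example S3 key, matching the validation example above.
key = ("k8s/prometheus/prod-cluster/end=20251101/day=5/"
       "metric=cpu_requests/1759622400_cpu_requests.json.gz")

# Sketch of the layout: <prefix>/<cluster>/end=<YYYYMMDD>/day=<N>/metric=<name>/<unix_ts>_<name>.json.gz
pattern = re.compile(
    r"(?P<prefix>.+)/(?P<cluster>[^/]+)/end=(?P<end>\d{8})/day=(?P<day>\d+)/"
    r"metric=(?P<metric>[^/]+)/(?P<ts>\d+)_(?P=metric)\.json\.gz"
)
m = pattern.fullmatch(key)
assert m, "key does not match the expected exporter layout"

print(m["metric"])    # cpu_requests
# The filename's timestamp prefix is a Unix epoch; convert it to check recency.
print(datetime.fromtimestamp(int(m["ts"]), tz=timezone.utc))
```

Pairing this with a bucket listing lets you confirm both that files exist and that their timestamps fall within the expected export window.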

**Data Availability**

Kubernetes cost and usage data will appear across Finout within 48 hours, matching the standard cloud billing data delivery window.

{% hint style="info" %}
**Note**: If any issues occur, share your exporter logs with Finout support at <support@finout.io> for further investigation.
{% endhint %}

**3. Notify Finout Support**\
Once the telemetry data is available in your S3 bucket, notify Finout Support at <support@finout.io> with your S3 bucket details and a sample file.\
\
For more information, please see the [FAQs](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/prometheus-faqs) and [Troubleshooting](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/prometheus-troubleshooting) section.

## **Option 2: Create Telemetry when the Prometheus Metrics Exporter Isn’t Installed**

If you don’t already have the Finout Metrics Exporter installed, deploy the dedicated CronJob to collect and export your Prometheus telemetry. This setup is intended for cases where you want to export Prometheus telemetry without using the full Kubernetes cost enrichment integration—the CronJob’s sole purpose is to handle telemetry.

If you also plan to use this CronJob for Kubernetes cost enrichment, follow the Kubernetes cost enrichment guide (see [Overview](https://docs.finout.io/kubernetes-integrations/kubernetes)) and use [Option 1](#option-1-create-telemetry-using-an-existing-finout-metrics-exporter) for deployment.

**1. Create a YAML Configuration**

Create a YAML configuration file to specify the Prometheus metrics you want to export.

{% hint style="info" %}
**Important**:

* Prefix custom metric environment variables with `QUERY_` in the YAML configuration to identify them as telemetry data for Finout.
* Make sure the metric values already represent cumulative or incremental usage, as Finout aggregates the sum of all samples across the day.
* If the metric is a gauge and a different aggregation is needed (average, count, max), please contact support.
{% endhint %}

**Example Metric Query:**

`sum(increase(logstash_ingestion_byte_size_total[5m]))`

This query converts the byte counter into 5-minute increments and sums those increments.\
When Finout rolls the samples up by day, you get the total bytes ingested per tenant for that day.

**Example YAML Configuration:**

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: finout-prometheus-exporter-job
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: finout-prometheus-exporter
              image: finout/finout-metrics-exporter:2.0.3
              imagePullPolicy: Always
              env:
                - name: S3_BUCKET
                  value: "<BUCKET_NAME>"
                - name: S3_PREFIX
                  value: "k8s/prometheus"
                - name: CLUSTER_NAME
                  value: "<CLUSTER_NAME>"
                - name: HOSTNAME
                  value: "<PROMETHEUS_SERVICE>.<NAMESPACE>.svc.cluster.local"
                - name: PORT
                  value: "9090"
                - name: CUSTOM_QUERIES_MODE
                  value: "override"
                - name: QUERY_logstash_ingestion
                  value: "sum(increase(logstash_ingestion_byte_size_total[5m]))"
          restartPolicy: OnFailure
```

**2. Schedule the CronJob**

Set up the CronJob to run the Prometheus export at a regular interval (the example above runs every 30 minutes). The CronJob executes the queries and exports the resulting data to your S3 bucket.

**3. Validate Your Kubernetes Integration**

Confirm that your Prometheus integration is working correctly:

**S3 Validation**

To confirm that Prometheus metrics are being exported correctly to your S3 bucket:

1. **Navigate to the S3 path**, for example:\
   `s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/`

You should see a list of metric folders, such as: `metric=cpu_requests/`

2. **Open a metric folder** and verify that `.json.gz` files were uploaded. Each file should have a timestamp prefix, for example:\
   `s3://cur-bucket/k8s/prometheus/prod-cluster/end=20251101/day=5/metric=cpu_requests/1759622400_cpu_requests.json.gz`

If these files appear, the CronJob ran successfully, and Prometheus metric files were generated and stored in the correct S3 structure.

**Data Availability**

Kubernetes cost and usage data will appear across Finout within 48 hours, matching the standard cloud billing data delivery window.

{% hint style="info" %}
**Note**: If any issues occur, share your exporter logs with Finout support at <support@finout.io> for further investigation.
{% endhint %}

For more information, please see the [FAQs](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/prometheus-faqs) and [Troubleshooting](https://docs.finout.io/kubernetes-integrations/kubernetes/prometheus/prometheus-troubleshooting) section.

**4. Notify Finout Support**

Once the telemetry data is available in your S3 bucket, notify Finout Support at <support@finout.io> with your S3 bucket details and a sample file.

