The cost of a Kubernetes (K8s) pod varies depending on factors such as the size of the pod, the resources allocated to it, the duration of its usage, and the cloud provider or infrastructure used to run it. To accurately calculate the cost of a K8s pod, it is essential to consider the resources the pod consumes, including CPU, memory, storage, and network bandwidth. The cost of these resources may differ depending on the cloud provider or infrastructure used.
Step 1: Query the Metrics
Finout provides multiple ways to integrate your K8s clusters with MegaBill, including via Datadog or via a cronjob that queries your Prometheus. Metrics such as CPU usage, memory usage, CPU request, memory request, and network usage are gathered from the source at frequent intervals.
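For illustration only, a cronjob-style script could pull such metrics from the Prometheus HTTP API along the following lines. This is a minimal sketch, not Finout's actual integration: the Prometheus URL is a placeholder, and the queries use the standard cAdvisor/kube-state-metrics metric names, which may differ in your environment.

```python
# Minimal sketch of a cronjob that pulls pod metrics from the Prometheus HTTP API.
# PROM_URL is a placeholder; the queries use standard cAdvisor/kube-state-metrics names.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # placeholder in-cluster address

QUERIES = {
    "cpu_usage":      'rate(container_cpu_usage_seconds_total{container!=""}[5m])',
    "memory_usage":   'container_memory_working_set_bytes{container!=""}',
    "cpu_request":    'kube_pod_container_resource_requests{resource="cpu"}',
    "memory_request": 'kube_pod_container_resource_requests{resource="memory"}',
    "network_rx":     'rate(container_network_receive_bytes_total[5m])',
    "network_tx":     'rate(container_network_transmit_bytes_total[5m])',
}

def scrape_metrics():
    """Run each instant query and return the raw result series keyed by metric name."""
    results = {}
    for name, query in QUERIES.items():
        resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=30)
        resp.raise_for_status()
        results[name] = resp.json()["data"]["result"]
    return results

if __name__ == "__main__":
    metrics = scrape_metrics()
    print({name: len(series) for name, series in metrics.items()})
```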
Step 2: Parse the Usage Metrics Per Pod
Once the data is received, it is processed to determine how much of the allocated resources the pod has used per hour. The usage metrics are processed as follows:
Memory and CPU: Finout identifies the maximum value of either the usage or the request during the hour. For example, if the maximum request during the hour is 3 CPU units and the maximum usage is 1 CPU unit, the cost is based on 3 CPU units; if the maximum usage during the hour is 4 CPU units, the cost is based on 4 CPU units. This is calculated for both CPU and memory, for each hour and each pod (see the sketch after this list).
Network usage: Finout calculates the total number of bytes per pod (both received and transmitted) per hour.
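As a simplified sketch of this step, assuming one hour of samples for a single pod has already been collected (the sample format and field names below are illustrative, not Finout's internal schema):

```python
# Simplified sketch: derive hourly billable CPU/memory units and network bytes per pod.
# The input format is an assumption; real samples come from the metrics source in Step 1.

def hourly_pod_allocation(samples):
    """samples: list of dicts with cpu_usage, cpu_request, mem_usage_gb,
    mem_request_gb, net_rx_bytes, net_tx_bytes for one pod over one hour."""
    billable_cpu = max(max(s["cpu_usage"], s["cpu_request"]) for s in samples)
    billable_mem_gb = max(max(s["mem_usage_gb"], s["mem_request_gb"]) for s in samples)
    network_bytes = sum(s["net_rx_bytes"] + s["net_tx_bytes"] for s in samples)
    return {"cpu": billable_cpu, "memory_gb": billable_mem_gb, "network_bytes": network_bytes}

# Example: the request peaks at 3 CPU while usage peaks at 1, so the pod is billed on 3 CPU units.
samples = [
    {"cpu_usage": 1.0, "cpu_request": 3.0, "mem_usage_gb": 2.0, "mem_request_gb": 4.0,
     "net_rx_bytes": 1_000_000, "net_tx_bytes": 500_000},
    {"cpu_usage": 0.5, "cpu_request": 3.0, "mem_usage_gb": 1.5, "mem_request_gb": 4.0,
     "net_rx_bytes": 800_000, "net_tx_bytes": 200_000},
]
print(hourly_pod_allocation(samples))
# {'cpu': 3.0, 'memory_gb': 4.0, 'network_bytes': 2500000}
```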
Step 3: Get the Hourly Rate Per Cloud VM (Node)
One of the unique features of Finout is that your cost data is fetched from all major cloud vendors. Finout can consume, for example, the AWS Cost and Usage Report (CUR) and determine the exact cost of a single node, accounting for special pricing and discounts such as Savings Plans, Reserved Instances (RIs), Spot Instances, and, if applicable, your EDP. This allows Finout to calculate the hourly node cost accurately.
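Conceptually, the effective hourly node rate is the node's discounted cost divided by the hours it ran. The sketch below illustrates only that idea and glosses over the real CUR processing (Savings Plans, RIs, Spot pricing, and EDP adjustments); the input format is an assumption.

```python
# Illustrative only: effective hourly node rate = discounted node cost / hours the node ran.
# Real CUR processing also accounts for Savings Plans, RIs, Spot pricing, and EDP discounts.

def effective_hourly_node_rate(node_line_items):
    """node_line_items: list of (cost_after_discounts, usage_hours) tuples for one node."""
    total_cost = sum(cost for cost, _ in node_line_items)
    total_hours = sum(hours for _, hours in node_line_items)
    return total_cost / total_hours if total_hours else 0.0

# Example: a node billed $72.00 over 720 hours comes out to $0.10/hour.
print(effective_hourly_node_rate([(72.00, 720)]))  # 0.1
```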
Step 4: Calculate the Pod Costs
To calculate the cost of a pod, Finout uses the hourly CPU, memory, and network allocations (step 2), along with the hourly node cost (step 3), to determine the pod's CPU and memory cost, respectively.
CPU & memory
The hourly node cost is reported as a total and covers both CPU and memory resources; however, the cloud provider does not provide a CPU/memory breakdown. To apportion the cost between CPU and memory correctly, Finout applies a configurable CPU/memory ratio per resource, cluster, or namespace.
The following formulas are used to calculate the cost of CPU and memory within the node according to this ratio:
Old Ratio:
The old 50:50 CPU-to-memory ratio used by existing accounts is being enhanced to provide better accuracy in cost distribution. This balanced split was suitable for average workloads but could result in inaccurate financial reporting when actual resource configurations differ from it.
The CPU/Memory pricing of the old ratio is calculated using the following formulas:
1. Hourly Pod Price - The cost of the pod per hour, i.e., the total price without distinguishing between memory and CPU. It is derived directly from the bill and represents the node price, or the cost of the underlying infrastructure.
2. CPU Calculation - The usage data is used to determine how much of the node's CPU was allocated to the pod. See the full explanation of parsing the metrics per pod in Step 2.
3. Hourly Pod CPU Price - Calculates the CPU portion of the total node cost, using the allocation ratio and considering the actual time the node was active and in use.
4. Memory Calculation - The usage data is used to determine how much of the node's memory was allocated to the pod. See the full explanation of parsing the metrics per pod in Step 2.
5. Hourly Pod Memory Price - Calculates the memory portion of the total node cost, using the allocation ratio and considering the actual time the node was active and in use (a simplified sketch of this split follows the list).
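For example, under the old 50:50 ratio the hourly node price could be split and apportioned to a pod roughly as follows. This is a simplified sketch that omits the active-time adjustment and does not reproduce Finout's exact formulas; all numbers are illustrative.

```python
# Simplified sketch of the old 50:50 split; active-time adjustments are omitted.
CPU_RATIO, MEMORY_RATIO = 0.5, 0.5  # old ratio

def old_ratio_pod_price(node_hourly_price, node_cpu, node_mem_gb, pod_cpu, pod_mem_gb):
    """Apportion the hourly node price to one pod using its CPU/memory allocation share."""
    cpu_pool = node_hourly_price * CPU_RATIO       # node cost attributed to CPU
    mem_pool = node_hourly_price * MEMORY_RATIO    # node cost attributed to memory
    pod_cpu_price = cpu_pool * (pod_cpu / node_cpu)
    pod_mem_price = mem_pool * (pod_mem_gb / node_mem_gb)
    return pod_cpu_price, pod_mem_price

# Example: a $0.20/hour node with 4 CPU / 16 GB; the pod is allocated 1 CPU and 4 GB.
print(old_ratio_pod_price(0.20, 4, 16, 1, 4))  # (0.025, 0.025)
```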
Refined Ratio:
Finout offers an 88% CPU and 12% memory ratio for more precise cost allocation, reflecting real-world usage where CPU is usually more expensive than memory.
The CPU/Memory pricing of the refined ratio is calculated using the following formulas:
These formulas calculate the price of a single CPU or memory unit within the node. To determine the total CPU or memory cost for the entire node, multiply the result by the number of cores or GB available in the node.
Formula Explanation:
Instance Price Per Hour = the hourly price of the node. See the Hourly Pod Price explanation under the Old Ratio.
CPU Count = the number of CPU cores within the node
CPU Ratio = 88% by default
RAM (GB) = the number of GB of memory within the node
Memory Ratio = 12% by default
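Putting the variables above together, a hedged reconstruction of the refined-ratio per-unit prices looks like this. It is an illustration based on the explanation above, not Finout's authoritative formula, and the numbers are examples only.

```python
# Hedged reconstruction of the refined-ratio per-unit prices from the variable
# definitions above; treat this as an illustration, not Finout's exact implementation.
CPU_RATIO, MEMORY_RATIO = 0.88, 0.12  # refined ratio defaults

def per_unit_prices(instance_price_per_hour, cpu_count, ram_gb):
    """Price of a single CPU core and a single GB of memory within the node."""
    cpu_unit_price = instance_price_per_hour * CPU_RATIO / cpu_count
    mem_unit_price = instance_price_per_hour * MEMORY_RATIO / ram_gb
    return cpu_unit_price, mem_unit_price

# Example: a $0.20/hour node with 4 cores and 16 GB of RAM.
cpu_price, mem_price = per_unit_prices(0.20, 4, 16)
print(cpu_price, mem_price)            # ~0.044 per core, ~0.0015 per GB
print(cpu_price * 4 + mem_price * 16)  # multiplying back by node capacity returns ~0.20
```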
Note:
- The data is sampled at a one-minute resolution every 30 minutes, with the 30-minute interval being configurable per account.
- This ratio can be changed per account. Contact Finout support for more assistance.
- The refined ratio is the default calculation for all new accounts.
- Accounts existing before November 2024 will have the old ratio, but they can transition to the refined ratio by contacting [email protected].
Network cost
For each node, Finout calculates the hourly data transfer cost based on the information supplied by the cloud provider. To calculate the network cost for each pod, Finout uses the per-pod network received and network transmitted metrics and divides the node's network cost among its pods in proportion to these metrics, as shown below.
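A minimal sketch of this proportional split, assuming the node's hourly network cost and the per-pod byte counts are already available (values and pod names are illustrative):

```python
# Simplified sketch: split the node's hourly network cost across pods
# proportionally to each pod's received + transmitted bytes.

def pod_network_costs(node_network_cost, pod_bytes):
    """pod_bytes: mapping of pod name -> total bytes (rx + tx) for the hour."""
    total_bytes = sum(pod_bytes.values())
    if total_bytes == 0:
        return {pod: 0.0 for pod in pod_bytes}
    return {pod: node_network_cost * b / total_bytes for pod, b in pod_bytes.items()}

# Example: $0.03 of node data transfer; pod-a moved 3x the bytes of pod-b,
# so it receives roughly $0.0225 and pod-b roughly $0.0075.
print(pod_network_costs(0.03, {"pod-a": 7_500_000, "pod-b": 2_500_000}))
```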
Step 5: Calculate the Node-Level Waste (Unallocated)
The unallocated cost at the node level is also calculated. It is the difference between the hourly node cost and the sum of the hourly costs of the pods running on that node, and it helps identify unallocated costs associated with idle or underutilized nodes.
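In other words, for each hour the unallocated cost is simply the node cost minus the sum of the pod costs on that node, as in this sketch (values are illustrative):

```python
# Sketch: node-level unallocated (waste) cost for one hour.

def node_unallocated_cost(hourly_node_cost, hourly_pod_costs):
    """hourly_pod_costs: per-pod hourly costs for the pods running on this node."""
    return max(hourly_node_cost - sum(hourly_pod_costs), 0.0)

# Example: a $0.25/hour node whose pods account for $0.1875 leaves $0.0625 unallocated.
print(node_unallocated_cost(0.25, [0.125, 0.0625]))  # 0.0625
```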
With Finout's comprehensive cost calculation and reporting capabilities, you can gain better visibility into the cost of running K8s pods and optimize your cloud costs effectively. Our CostGuard dashboards provide valuable insights to help you make informed decisions and drive cost optimization for your K8s workloads.
Still need help? Please feel free to reach out to our team at [email protected].