
Kubernetes Costs - How to Save on Your Container Infrastructure

source link: https://devm.io/kubernetes/kubernetes-finops-container-infrastructure

FinOps for Kubernetes

05. Jan 2023


Kubernetes was launched by Google in 2014 as a container orchestration platform. Since then it has evolved to help developers automate the deployment and management of containerised applications, making it easier to build microservices and adopt cloud-native technologies.

This move to the cloud, containers and Kubernetes has made it easier to scale applications when they need to grow. However, there is a corollary: budgeting. Cloud operates on a ‘pay for what you use’ model, so every resource you consume ends up on the bill. For developers, estimating how much an application will cost to run is an exercise in judgement. Some estimates will be accurate and everyone will be happy; others will miss the mark, leading to higher bills and questions from the finance team. According to the Cloud Native Computing Foundation’s FinOps For Kubernetes report, only 38 percent of developers could predict their cloud spend to within 10 percent, and with 68 percent of Kubernetes users reporting that their costs had gone up, this is a growing problem.

FinOps is an approach that helps developers understand and manage costs, using the same skills that they already use in DevOps and applying them to finance and costs. FinOps aims to get engineering, finance, technology and business teams working together so that any spend delivers the maximum amount of business value.

Getting to the root of the problem

This overall approach is a good way to frame the wider issue around cloud and cost management. However, it is much harder to translate that ideal into actual decisions about where to save money without hard data to work with. For developers, this can be problematic.

When you first set up your containers, you decide what is included in the base image: the elements you want present every time a new container is created. Alternatively, you can choose a container image from a public repository and use it whenever you need that particular service. When you run the image, a container is created and reserves a certain amount of compute and memory, and at this point you are charged.
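
To make that concrete, here is a minimal sketch of where that compute and memory reservation is declared in Kubernetes. The workload name, image and values are illustrative, not recommendations:

```yaml
# Minimal pod spec: the scheduler reserves the "requests" values for this
# container, and that reservation drives the cloud bill whether or not the
# workload ever uses that much.
apiVersion: v1
kind: Pod
metadata:
  name: billing-api                                # hypothetical workload
spec:
  containers:
    - name: app
      image: registry.example.com/billing-api:1.4  # hypothetical image
      resources:
        requests:
          cpu: "500m"      # half a core reserved at schedule time
          memory: "512Mi"
        limits:
          cpu: "1"         # hard ceiling the container may burst to
          memory: "1Gi"
```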

There are a few potential problems that might crop up here. The first is that you over-provision your base container images. Imagine that you allocate a generous amount of CPU and memory to your base image, based on what you think workloads will need. This might be ideal for some workloads, but others need far less to function. Each time you create a new container from that image, you are charged for the full reservation, even if the workload only ever uses a fraction of it. With tens, hundreds or thousands of new containers created over time, the differential between what you originally estimated and what you actually need soon adds up to a large cloud service bill.

Similarly, if you under-estimate the amount of CPU and memory that your containers need, you will end up creating more containers within each cluster to fulfil your workload requirements, and each of those adds to the overall bill.
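
As a sketch of how this second failure mode compounds, a horizontal autoscaler will quietly add replicas when per-container CPU is set too low, and every replica is a new billable reservation. The names and thresholds below are illustrative:

```yaml
# If per-pod CPU requests are too small for the load, this autoscaler
# compensates by adding replicas, each one a new billable reservation.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: billing-api            # hypothetical workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: billing-api
  minReplicas: 2
  maxReplicas: 20              # under-sized pods push you towards this cap
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```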

To prevent these kinds of problems, you need insight into how your containers actually perform with your workloads and applications. Based on this, you can take action to right-size your containers and images to save on costs.

Taking practical steps around containers and cost management

To get this data, you have to analyse what your containers are actually doing and how well they use the resources they have been given. This provides historical data that can act as a baseline, from which you can decide where your estimates are good and where you can make efficiency savings. Typically, developers can get some insight from their cloud providers, but this will not tell you specifically how your Kubernetes implementation and containers are performing.
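
One way to gather that baseline inside the cluster itself, assuming the Vertical Pod Autoscaler add-on and metrics-server are installed, is to run the VPA in recommendation-only mode, so it observes actual usage and suggests requests without changing anything:

```yaml
# Recommendation-only VPA: records actual usage over time and surfaces
# suggested requests (visible via `kubectl describe vpa billing-api`),
# but never evicts or resizes pods itself.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: billing-api          # hypothetical workload
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: billing-api
  updatePolicy:
    updateMode: "Off"        # gather data and recommend only
```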

Instead, you will have to enrich that data to get insight into how those workloads are really performing. Using this Kubernetes context data, you can get a line-by-line overview of costs by cluster and workload. This should create a unified view of utilisation and performance, as well as how much you are spending to achieve those goals.
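
A common prerequisite for that per-workload view is consistent labelling, so usage and spend can be grouped by team or project across clusters. The label keys below are one possible convention, not a Kubernetes standard, and the names are hypothetical:

```yaml
# Hypothetical labelling convention: cost tooling can aggregate spend
# by these keys across namespaces and clusters.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api
  namespace: payments        # hypothetical team namespace
  labels:
    app: billing-api
    team: payments           # who owns the spend
    cost-centre: cc-1042     # hypothetical finance code
spec:
  replicas: 2
  selector:
    matchLabels:
      app: billing-api
  template:
    metadata:
      labels:
        app: billing-api
        team: payments
        cost-centre: cc-1042
    spec:
      containers:
        - name: app
          image: registry.example.com/billing-api:1.4
          resources:
            requests:
              cpu: "250m"    # illustrative values
              memory: "256Mi"
```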

This information can then be used in different ways to help with your costs. The first is to spot potential problems in context. Using data on containers, namespaces and clusters, you can look out for any trends in usage over time, and how this compares to your initial estimates. If you are scaling up considerably faster than your predicted costs allowed for, then you can look into why.

The second step is to optimise your approach. You can achieve this by right-sizing your container images so they have the right level of compute and memory resources, and by updating your configuration to match your historical trend data. An Infrastructure as Code approach makes those changes easier to apply and track in your repository.
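
As a sketch of what that looks like in a repository, a namespace-wide default kept under version control means any container created without explicit requests inherits right-sized values. The namespace and figures below are illustrative, derived in this scenario from observed usage:

```yaml
# Namespace defaults kept in the repo: containers created without explicit
# requests/limits inherit these right-sized values.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-sizing
  namespace: payments        # hypothetical team namespace
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: "200m"          # illustrative, based on historical usage
        memory: "256Mi"
      default:               # default limits applied alongside the requests
        cpu: "500m"
        memory: "512Mi"
```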

The third step is to look at assigning your costs to specific projects using a showback or chargeback model. With a showback approach, you simply provide a more accurate overview of costs to the team involved so they can understand their spending. With a chargeback model, you provide a more granular overview of the expenses incurred and charge those costs to specific projects. Chargeback models are more advanced, but they are more likely to lead to people taking action, as real money is involved. According to the CNCF, around 13 percent of companies use showback, while 14 percent use chargeback models to assign costs.
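
One simple building block for either model is a namespace per team with a quota, so consumption has a clear owner and a cap. The namespace and figures below are illustrative:

```yaml
# Per-team namespace quota: caps the total reservations a team can make
# and gives finance a clean boundary to attribute costs to.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: payments        # hypothetical team namespace
spec:
  hard:
    requests.cpu: "20"       # total CPU the team may reserve
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
```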

Developers don’t normally sign up to become cloud cost accountants. As you are already responsible for building and running the applications and services that companies rely on, you may not want to look into financial management too. However, having this data can make your existing projects more efficient and remove some of the costs associated with delivering that service. With tougher economic conditions around the world to consider, cutting costs is something that everyone should care about.

Using your data, you can accurately demonstrate what your application infrastructure costs to deliver a service. More importantly, you should be able to see where you can make savings by changing the amount of resources you allocate to container images. Based on our research, developers can save an average of 40 percent on their spend, and we have seen savings of up to 80 percent in some cases.

Harry Perks

Harry has worked at Sysdig for over six years, helping organisations mature their journey to cloud native. He has witnessed the evolution from bare metal to VMs, and finally to Kubernetes establishing itself as the de facto standard for container orchestration. He is part of the product team building Sysdig’s troubleshooting and cost offering, helping customers increase their confidence in operating and managing Kubernetes. Previously, Harry ran, and later sold, a cloud hosting provider, where he worked hands-on with systems administration. He studied information security and lives in the UK.

