
Best practices of Kubernetes cost optimization on AWS

Kubernetes is the open-source container orchestration software that has taken the world of containerized applications by storm in recent years. Holding an overwhelming share of its market, Kubernetes has de facto established itself as the industry standard for container orchestration.


Kubernetes and similar container-based architectures have fundamentally changed how IT teams test and deploy software: companies, regardless of their size and number of engineers, can now deploy numerous container instances per day almost effortlessly.

There is, however, a flip side: new challenges and issues arise from the necessity of implementing and maintaining a fundamentally different infrastructure ecosystem. Some of the most sensitive pitfalls are those related to performance, which we covered in one of our previous articles.

Still, Kubernetes owes its popularity first of all to the fact that its advantages greatly outweigh the difficulties and disadvantages. One undoubted advantage is that it is cloud-agnostic, meaning it suits the vast majority of companies regardless of which cloud they already use. In this article, we'll walk you through how to optimize Kubernetes costs on AWS, one of the most popular cloud platforms, and share some best practices with you.

Kubernetes on AWS

Amazon Web Services offers solutions for numerous use cases, and EKS, which stands for Elastic Kubernetes Service, is one of them. Amazon EKS is a managed container service for running and scaling Kubernetes applications both in the cloud and on-premises, designed and certified as fully compatible with upstream Kubernetes.

With EKS, AWS manages the master nodes for you: from creating them to installing all the necessary software (container runtime, Kubernetes master processes, etc.) to scaling automatically and taking backups when needed. This makes Kubernetes on AWS a great fit for many teams, especially small ones, letting them focus on deploying their applications instead of mundane maintenance tasks.

Additionally, Amazon Web Services ensures high availability across multiple Availability Zones, so Kubernetes clusters benefit from low latency. AWS also comprises a variety of services, including Simple Storage Service (S3) and Relational Database Service (RDS), that make Kubernetes suitable for both stateless and stateful workloads. It is therefore not surprising that AWS accounts for a major share of all Kubernetes workloads.


How to save on Kubernetes cloud costs on AWS

The flexibility and ease of scaling AWS cloud services, coupled with the power of Kubernetes, make it tempting to overspend, so you won't get far without proper cloud cost management. With that in mind, let's discuss what can be done to keep your Kubernetes spend on AWS under control.

1. Terminate pods that are not needed at specific times or are no longer needed at all

Some of your environments, such as development, testing, and staging, along with certain tools used by your developers, are only needed during business hours. It therefore makes sense to temporarily reduce the number of pods those environments and applications use. Kubernetes Downscaler is commonly used for this purpose: its settings allow scheduling systems to scale in or out at defined times, and options such as downscaling on weekends or forcing extraordinary uptime provide additional flexibility.
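The scheduling idea can be sketched in a few lines of Python. The function below mimics the spirit of a downscaler uptime window such as "Mon-Fri 08:00-18:00" (the annotation format and helper are illustrative, not the tool's actual API; minutes are ignored for brevity):

```python
from datetime import datetime

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def desired_replicas(uptime: str, now: datetime, normal_replicas: int) -> int:
    """Return the replica count a downscaler would set at `now`.

    `uptime` looks like "Mon-Fri 08:00-18:00"; outside that window the
    workload is scaled to zero.
    """
    day_range, time_range = uptime.split()
    start_day, end_day = (DAYS.index(d) for d in day_range.split("-"))
    start_h, end_h = (int(t.split(":")[0]) for t in time_range.split("-"))
    in_days = start_day <= now.weekday() <= end_day
    in_hours = start_h <= now.hour < end_h
    return normal_replicas if (in_days and in_hours) else 0

# A staging deployment that only needs to run during business hours:
print(desired_replicas("Mon-Fri 08:00-18:00", datetime(2024, 5, 7, 10), 3))  # Tuesday 10:00 -> 3
print(desired_replicas("Mon-Fri 08:00-18:00", datetime(2024, 5, 7, 22), 3))  # Tuesday 22:00 -> 0
```

In practice the real tool watches Deployments and StatefulSets and applies exactly this kind of decision on a timer.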

As mentioned, Kubernetes and cloud services encourage agility and high deployment speed. But working under rapidly changing conditions means that environments previously deployed for tests, previews, and the like often remain unclaimed, with no one tracking or shutting them down. Kubernetes Janitor is often used to clean up clusters automatically: you can set a time-to-live either for all temporary deployments or for individual resources, and after the specified period they are automatically deleted. In addition, Kubernetes Janitor lets you remove unused Amazon EBS (Elastic Block Store) volumes, which are easy to overlook and can inflate Kubernetes costs by hundreds of dollars per month.
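A minimal sketch of the time-to-live check described above, assuming a TTL expressed in hours (the resource dicts and field names here are made up for illustration, not the tool's actual data model):

```python
from datetime import datetime, timedelta

def expired(resource: dict, now: datetime) -> bool:
    """True if the resource's age has exceeded its TTL (e.g. "24h")."""
    ttl_hours = int(resource["ttl"].rstrip("h"))
    age = now - resource["created"]
    return age > timedelta(hours=ttl_hours)

resources = [
    {"name": "preview-env-42", "ttl": "24h", "created": datetime(2024, 5, 1, 9, 0)},
    {"name": "staging-db",     "ttl": "72h", "created": datetime(2024, 5, 2, 9, 0)},
]
now = datetime(2024, 5, 2, 12, 0)
to_delete = [r["name"] for r in resources if expired(r, now)]
print(to_delete)  # ['preview-env-42']
```

The preview environment is 27 hours old with a 24-hour TTL, so it is flagged for deletion, while the younger resource survives.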

2. Use auto-scaling

Amazon itself defines auto-scaling as a cost optimization pillar; to use this feature, you run the Cluster Autoscaler tool. It performs two main functions: first, it searches the cluster for pods that do not have enough resources and, having found them, provisions additional nodes; second, it detects underutilized nodes and reschedules their pods onto other nodes.
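The two decisions can be illustrated with a toy model (the node data, threshold, and action names are invented for the sketch; the real Cluster Autoscaler works on pod scheduling simulations, not raw CPU numbers):

```python
def autoscale(nodes, pending_pod_cpu, scale_in_threshold=0.5):
    """nodes: list of dicts with 'capacity' and 'used' CPU in cores."""
    actions = []
    # Scale out: no node has enough free CPU for the pending pod.
    if all(n["capacity"] - n["used"] < pending_pod_cpu for n in nodes):
        actions.append("add-node")
    # Scale in: a node is underutilized, so its pods can be rescheduled
    # elsewhere and the node removed.
    for n in nodes:
        if n["used"] / n["capacity"] < scale_in_threshold:
            actions.append(f"drain-{n['name']}")
    return actions

nodes = [
    {"name": "node-a", "capacity": 4.0, "used": 3.8},
    {"name": "node-b", "capacity": 4.0, "used": 1.0},
]
print(autoscale(nodes, pending_pod_cpu=3.5))  # ['add-node', 'drain-node-b']
```

Here a 3.5-core pod fits nowhere, so a node is added, and node-b at 25% utilization is a candidate for draining.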

3. Control resource requests

Kubernetes allocates CPU and memory through so-called resource requests. These requests reserve resources on worker nodes, but there is often a difference between the requested and actually used resources: an excess reserve, also called slack. The higher the slack, the more resources, and consequently money, are wasted. The Kubernetes Resource Report tool lets you see excess resources and find the specific places where resource requests can be lowered.
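Slack is simply the gap between what a pod requests and what it actually uses. A minimal report in that spirit (pod data is made up for illustration):

```python
def slack_report(pods):
    """Return (name, cpu slack in cores, slack as % of request) per pod."""
    report = []
    for pod in pods:
        slack = pod["cpu_requested"] - pod["cpu_used"]
        pct = 100 * slack / pod["cpu_requested"]
        report.append((pod["name"], round(slack, 2), round(pct)))
    return report

pods = [
    {"name": "api",    "cpu_requested": 2.0, "cpu_used": 0.4},
    {"name": "worker", "cpu_requested": 1.0, "cpu_used": 0.9},
]
for name, slack, pct in slack_report(pods):
    print(f"{name}: {slack} cores slack ({pct}% of request)")
```

The "api" pod wastes 80% of its reservation and is an obvious candidate for a lower request; the "worker" pod is already well-sized.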

4. Use spot instances for Kubernetes workloads

Many agree that spot instances are the best solution for Kubernetes production nodes. Spot instances are much cheaper than on-demand and reserved instances, and you can also reserve such instances for a fixed period of time. As a result, you can get a larger node for less. What's more, with the right workload management tools you can even use spot instances for mission-critical applications, which is not recommended under other circumstances.
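The scale of the saving is easy to estimate. The sketch below compares on-demand and spot pricing for a node group; the hourly prices are illustrative placeholders, not current AWS rates:

```python
def monthly_savings(nodes, hours, on_demand_price, spot_price):
    """Monthly saving from running `nodes` instances on spot vs on-demand."""
    on_demand = nodes * hours * on_demand_price
    spot = nodes * hours * spot_price
    return on_demand - spot

# e.g. 5 nodes running 720 h/month, with spot costing roughly a third
# of the on-demand rate (hypothetical prices in USD/hour)
saved = monthly_savings(5, 720, on_demand_price=0.192, spot_price=0.06)
print(f"${saved:.2f}/month")
```

Even for a small five-node group, the difference runs into hundreds of dollars per month, which is why spot capacity is usually the first lever to pull.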

5. Use AWS cost allocation tags

AWS cost allocation tags are metadata you can assign to each of your AWS resources so you can track your AWS costs in detail. Tags help you manage, identify, organize, search for, and filter the resources you use in a tailored way: you can create tags to categorize resources by purpose, owner, environment, or other criteria. With a sound tagging strategy, you can find out what your main sources of spending are and whether any of them could be painlessly eliminated. We covered which AWS cost allocation tags exist and how to use them correctly in one of our previous articles.

Conclusion

To get the most out of the combination of Kubernetes and Amazon Web Services, fully leveraging its flexibility and scalability, you need to control and minimize costs by all means. In this article, we have covered the main ways to reduce the bill and the native tools that help you do so. However, if every dollar counts and you want maximum savings, these semi-automatic means will not be enough. This is where Hystax OptScale has you covered: our suite offers a completely different approach to Kubernetes cost management and optimization, which can reduce your bill drastically while keeping the same performance level.

Find helpful tips on how to optimize IT costs in a Kubernetes infrastructure, or register for OptScale and get dozens of optimization recommendations.
