
Kubernetes Event Driven Autoscaling

KEDA in the real world Production
  • October 27, 2022
  • Vishwas Narayan

Introduction

Some of the top organizations in the world use KEDA to run production workloads at gigantic scale in the cloud, scaling on practically any measure imaginable from almost any metric provider.

The example given here uses Azure Kubernetes Service, but you can apply the same approach to hardened Kubernetes clusters as well.

In this article, I’ll explain what Kubernetes administrators can do using KEDA (Kubernetes Event-driven Autoscaling) and how to get started.

You’ll also learn how Kubernetes autoscaling works, and much more!

Let’s Begin!

KEDA uses the Kubernetes Metrics Server, which is not deployed by default on some Kubernetes distributions (such as EKS on AWS), to obtain basic CPU and memory metrics. The resources section of the relevant Kubernetes Pods must also include limits (at a minimum).

Prerequisite

  • A Kubernetes cluster, version 1.5 or above, is required.
  • Administrator access to the cluster. On Azure, grant this through RBAC with Azure Active Directory; on AWS, through an IAM role.
  • The Kubernetes Metrics Server installed. The setup procedure differs depending on your Kubernetes provider.
  • A resources section with limits set in your Kubernetes Pod spec. See Pods and Containers Resource Management. If the resources section is empty (resources: or similar), you will hit the missing request for “cpu/memory” error.
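As a sketch of that last prerequisite, a Deployment whose container sets both requests and limits might look like this (the deployment name, image, and values are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # hypothetical deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry/my-app:latest   # hypothetical image
        resources:
          requests:
            cpu: 100m                 # requests are required for Utilization-based scaling
            memory: 128Mi
          limits:
            cpu: 500m                 # limits must be present, at a minimum
            memory: 256Mi
```

With requests and limits in place, the Metrics Server can report usage relative to what the pod asked for.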
Azure Kubernetes Service and its Capabilities

What is KEDA?

Kubernetes Event-driven Autoscaling (KEDA) is an autoscaling mechanism similar to the built-in Kubernetes Horizontal Pod Autoscaler (HPA). In fact, KEDA still uses the HPA under the hood to do its magic.

You can access the official KEDA website here. KEDA offers excellent documentation, so installation is simple. KEDA is an open-source Cloud Native project backed by Microsoft, and Azure AKS (Azure’s Kubernetes service) offers complete support for it. Large organizations like Microsoft, Zapier, Alibaba Cloud, and many more use it in production at huge scale.

Why Is KEDA a Game-Changer?

For reference, here is a real-world application of KEDA in production:

KEDA in Real-world Production Scenario (Sketch by Vishwas Narayan)

KEDA allows Kubernetes to scale pod replicas from zero to any number, based on metrics like queue depth of a message queue, requests per second, scheduled cron jobs, custom metrics from your own application logging, and pretty much any other metric you can think of. The built-in HPA in Kubernetes is unable to accomplish this with ease. The scaling service providers that this Kubernetes event-driven autoscaler (KEDA) supports are listed below.
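As one illustration of the event-driven triggers mentioned above, KEDA’s cron scaler can scale a deployment up on a schedule. A hedged sketch of the triggers section (the timezone, schedule, and replica count are illustrative) could look like:

```yaml
triggers:
- type: cron
  metadata:
    timezone: Asia/Kolkata        # IANA timezone name
    start: 0 6 * * *              # scale up at 06:00
    end: 0 20 * * *               # scale back down at 20:00
    desiredReplicas: "10"         # replicas to hold between start and end
```

Other scalers (message queues, Prometheus queries, and so on) follow the same triggers pattern with their own metadata fields.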

How Does KEDA Operate?

A glimpse of how Kubernetes Autoscaling works!

KEDA monitors metrics from an external metric provider system, such as Azure Monitor, and scales according to the metric’s value. It communicates directly with the system that provides the metrics, runs as a single-pod Kubernetes operator, and monitors continuously.

KEDA in real world Production

How to setup KEDA?

The easiest way to install KEDA (Kubernetes Event-driven Autoscaling) is with its Helm chart:

> helm repo add kedacore https://kedacore.github.io/charts

> helm repo update

> kubectl create namespace keda

> helm install keda kedacore/keda --namespace keda
kubectl commands for creating namespaces

Run these commands in the Azure CLI or Azure PowerShell (or any shell with Helm and kubectl configured).

Setting up Scaling

Once this Kubernetes event-driven autoscaler, i.e. KEDA, has been deployed and is running in the cluster, you must create a manifest file specifying what to scale on, when to scale, and how to scale. Below, I’ll offer guidance on configuring scaling based on common metrics like CPU and memory.

You can get a list of all supported scaling sources and kinds in the Documentation here.

Azure Kubernetes cluster and the deployments in kube-system namespace
Deployments in the kube-system namespace

This is the list of the resources in the particular namespace

You can also try deploying a workload first, such as the one provided with the Azure Kubernetes Service quick start.

Deployment of the Azure vote Application

After deploying you will get an External IP:

kubectl commands to get the services deployed

Similar to deployments, scaling is described in a manifest YAML file, with ScaledObject as the kind.
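A minimal ScaledObject skeleton, assuming a deployment named azure-vote-front like the quick-start workload (the object name and replica bounds are hypothetical; the triggers section is filled in per metric):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-vote-front-scaler   # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    name: azure-vote-front        # the deployment to scale
  minReplicaCount: 1              # lower bound on replicas
  maxReplicaCount: 10             # upper bound on replicas
  triggers: []                    # one or more triggers go here
```

Applying this with kubectl apply -f creates the scaled object, which KEDA then reconciles against the target deployment.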

You can list the existing scaled objects:

> kubectl get scaledobject

And a scaled object can be deleted with this command:

> kubectl delete scaledobject <name of scaled object>

You can also use commands like these to list the namespaces in the Kubernetes cluster:

> kubectl get ns
or
> kubectl get namespaces
List of all the namespaces created

Let’s talk about scaling infrastructure (basic triggers)

Memory Scaling

A rule can be configured to scale based on the amount of memory used by the pod’s container.

  • You can find the documentation on memory scaling here.
  • You need to perform all the Helm installation steps shown above.
  • Two kinds of values can serve as the basis for scaling:
    • Utilization: the target value is the average of the resource metric across all relevant pods, expressed as a percentage of the requested value for the pods.
    • AverageValue: the target value is the average of the metric across all relevant pods (a quantity).

Memory Utilization has been considered for the particular manifest example below:

YAML file for the Memory Utilization Triggers

This example creates a rule that scales pods according to memory usage.

The scaleTargetRef field refers to the deployment that is to be scaled.
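Since the screenshot may be hard to read, here is a hedged sketch of such a manifest (the object and deployment names are hypothetical; the trigger syntax follows the KEDA v2 memory scaler):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: memory-scaledobject       # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment           # hypothetical deployment to scale
  triggers:
  - type: memory
    metricType: Utilization       # or AverageValue
    metadata:
      value: "70"                 # target 70% of the requested memory
```

Because Utilization is a percentage of the requested value, the target deployment’s pods must declare memory requests for this trigger to work.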

scaled objects with the memory triggers in KEDA

CPU Scaling

The CPU usage of the container inside the pod can also be used as a basis for scaling.

  • Two kinds of values can serve as the basis for scaling:
    • Utilization: the target value is the average of the resource metric across all relevant pods, expressed as a percentage of the requested value for the pods.
    • AverageValue: the target value is the average of the metric across all relevant pods (a quantity).

CPU Utilization has been considered for the particular manifest example below:

YAML file for the CPU Utilization Triggers

This example creates a rule that scales pods according to the level of CPU usage.

The scaleTargetRef field refers to the deployment that is to be scaled.
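A hedged sketch of the CPU-based manifest, mirroring the memory example (names and values are illustrative; the trigger syntax follows the KEDA v2 cpu scaler):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-scaledobject          # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment           # hypothetical deployment to scale
  triggers:
  - type: cpu
    metricType: Utilization       # or AverageValue
    metadata:
      value: "60"                 # target 60% of the requested CPU
```

As with memory, the target pods must declare CPU requests so the Utilization percentage has a baseline.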

kubectl output for the scaled objects with CPU triggers

For more reference, you can see this YAML manifest, where I have explained the specifications under “spec”.

This will give you an idea of the manifests and of the resources that are considered.

Setting the limit of the CPU and Memory Usage

Trigger Information

The standard CPU trigger, which scales based on CPU metrics, is described here.

triggers in the metadata and the other information in the YAML files
Triggers in YAML Description

List of parameters

  • type – The chosen kind of metric. Available options: Utilization or AverageValue.
  • value – The value at which scaling actions should be initiated:
    • With Utilization, the target value is the average of the resource metric across all relevant pods, expressed as a percentage of the requested value for the pods.
    • With AverageValue, the target value is the average of the metric across all relevant pods (a quantity).

Example with the comments as explanation

YAML files for the scaled objects in KEDA
YAML Manifest file for KEDA
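In the same spirit as the screenshot above, here is a commented sketch of a full manifest; all names and values are illustrative, and pollingInterval and cooldownPeriod are optional KEDA fields:

```yaml
apiVersion: keda.sh/v1alpha1        # KEDA API group and version
kind: ScaledObject                  # the KEDA scaling resource
metadata:
  name: demo-scaler                 # hypothetical name
  namespace: default                # namespace of the target deployment
spec:
  scaleTargetRef:
    name: demo-deployment           # hypothetical deployment to scale
  pollingInterval: 30               # seconds between metric checks
  cooldownPeriod: 300               # seconds to wait before scaling back down
  minReplicaCount: 1                # lower bound on replicas
  maxReplicaCount: 20               # upper bound on replicas
  triggers:
  - type: cpu                       # the trigger kind
    metricType: Utilization         # percentage of the requested CPU
    metadata:
      value: "60"                   # scale out when the average crosses 60%
```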

Thus, it’s easy to get utilization-based scaling ready for a deployment, but it’s much harder to get function-based triggers right in the world of microservices.

There are still many more challenges to be addressed.

Conclusion

So, this is how Kubernetes autoscaling works. In this article, I discussed KEDA, how it can make it simple for you to create the most scalable apps for Kubernetes, and how to get started. There are several Kubernetes event-driven autoscalers available, so you may choose any to scale according to your precise needs.

Feel free to comment on this page if you have any queries.

BuildPiper is a popular microservices delivery and DevOps platform for developers. Consult our tech experts to discuss your critical business use cases and major security challenges. Schedule a demo today!

 


 
