CNCF : KEDA (Kubernetes Event-driven Autoscaling)
Scaling Kubernetes workloads has traditionally relied on CPU and memory metrics. But what if your app needs to scale based on event-driven triggers like the number of messages in a queue or database activity? 🤔
That’s where KEDA (Kubernetes Event-driven Autoscaling) comes in!
🔥 Why KEDA is a Game-Changer:
✅ Event-based Scaling: Scale your workloads based on events from 50+ sources (e.g., Kafka, RabbitMQ, AWS SQS, and more).
✅ Efficient Resource Usage: Scale to zero when idle and scale dynamically when events occur.
✅ Simple Integration: Works with Kubernetes HPA to seamlessly handle scaling without the need for custom logic.
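As a taste of what event-based scaling looks like, here is a minimal ScaledObject sketch using KEDA's aws-sqs-queue scaler. The deployment name, queue URL, and region are placeholders, and it assumes the KEDA operator has IAM access to read the queue attributes:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-consumer-scaler
spec:
  scaleTargetRef:
    name: sqs-consumer          # hypothetical deployment that processes the queue
  minReplicaCount: 0            # scale to zero when the queue is empty
  maxReplicaCount: 5
  triggers:
  - type: aws-sqs-queue
    metadata:
      queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue   # placeholder queue
      queueLength: "5"          # target messages per replica
      awsRegion: us-east-1
      identityOwner: operator   # assumes the KEDA operator holds the SQS permissions
The rest of this post walks through the same idea with Prometheus metrics as the trigger.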
Step 1: Prerequisites
1. Amazon EKS Cluster: Ensure your EKS cluster is set up and kubectl is configured to interact with it.
2. Helm: Install Helm CLI for deploying KEDA and Prometheus.
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
3. kubectl: Ensure kubectl is installed and configured.
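Before moving on, you can quickly confirm the prerequisites; the region and cluster name below are placeholders for your own EKS setup:
aws eks update-kubeconfig --region <your-region> --name <your-cluster-name>
kubectl get nodes
helm version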
Step 2: Install KEDA in the EKS Cluster
1. Add the KEDA Helm repository:
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
2. Deploy KEDA using the Helm chart into a namespace called keda:
helm install keda kedacore/keda --namespace keda --create-namespace
3. Verify the deployment by running the get pods command:
kubectl get pods -n keda
You should see KEDA components such as keda-operator and keda-operator-metrics-apiserver in the Running state.
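You can also confirm that the KEDA custom resource definitions were installed, since the ScaledObject created later depends on them:
kubectl get crd | grep keda.sh
This should list CRDs such as scaledobjects.keda.sh, scaledjobs.keda.sh, and triggerauthentications.keda.sh.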
Step 3: Set Up Prometheus for Metrics Collection
1. Install Prometheus using the official Prometheus community Helm chart:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
2. Deploy the Prometheus Helm chart using the command below:
helm install prometheus prometheus-community/prometheus --namespace prometheus --create-namespace
3. Verify that Prometheus is running using the kubectl get command:
kubectl get pods -n prometheus
Access the Prometheus UI:
To access the Prometheus dashboard, port-forward the Prometheus service:
kubectl port-forward svc/prometheus-server -n prometheus 9090:80
Check out the Prometheus metrics by opening the Prometheus UI at http://localhost:9090.
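Also note the in-cluster service name, because KEDA will query Prometheus through it in the next step:
kubectl get svc -n prometheus
The prometheus-server service listens on port 80 inside the cluster, which is why the ScaledObject below uses http://prometheus-server.prometheus.svc.cluster.local as the server address.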
Step 4: Configure KEDA for Prometheus Metrics
Set Up a ScaledObject: Create a ScaledObject that uses Prometheus metrics for scaling.
Example: scaling based on the number of HTTP requests (custom metric http_requests_total).
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prometheus-scaler
  namespace: default
spec:
  scaleTargetRef:
    name: your-deployment-name
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus-server.prometheus.svc.cluster.local
      metricName: http_requests_total
      threshold: "100"
      query: sum(rate(http_requests_total[2m]))
Save the above manifest as scaledobject.yaml and double-check the YAML indentation before applying it to the cluster. With this trigger, KEDA targets roughly one replica per 100 requests/second (the query result divided by the threshold), within the minReplicaCount and maxReplicaCount bounds.
Apply the ScaledObject by running the kubectl apply command:
kubectl apply -f scaledobject.yaml
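Behind the scenes, KEDA creates and manages an HPA for the ScaledObject. You can confirm both objects exist:
kubectl get scaledobject prometheus-scaler
kubectl get hpa
The HPA created by KEDA is named keda-hpa-prometheus-scaler, and its metrics target reflects the Prometheus trigger.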
Step 5: Visualize Metrics and Validate Scaling
1. Access Prometheus Metrics:
Use the Prometheus UI to run queries such as:
sum(rate(http_requests_total[2m]))
This will show the current rate of HTTP requests per second.
2. Observe Scaling Behavior:
Use kubectl to monitor the replica count of your deployment
kubectl get hpa
kubectl get pods
Check how the number of pods changes based on the http_requests_total metric.
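To see the scaling in action, you can generate some HTTP traffic against your application. The sketch below assumes your deployment is exposed through a Service named your-app-service in the default namespace and that it exports the http_requests_total metric; adjust the names to match your setup:
kubectl run load-generator --image=busybox --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://your-app-service.default.svc.cluster.local; done"
Once the request rate crosses the 100 requests/second threshold, new pods should appear; delete the load-generator pod and the deployment will scale back down.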
Step 6: Optional - Install Grafana for Better Visualization
1. Install Grafana using the Helm chart (add the Grafana Helm repository first):
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install grafana grafana/grafana --namespace grafana --create-namespace
2. Access Grafana by port-forwarding the service:
kubectl port-forward svc/grafana -n grafana 3000:80
Open http://localhost:3000 in your browser and log in as admin. The Grafana Helm chart generates the admin password and stores it in a Secret, which you can retrieve with:
kubectl get secret grafana -n grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo
3. Add Prometheus as a Data Source and visualize the metrics in Grafana.
Go to Configuration > Data Sources > Add data source.
Choose Prometheus and enter the server URL
http://prometheus-server.prometheus.svc.cluster.local
Use Grafana to create dashboards for monitoring metrics like http_requests_total and visualize the scaling behavior in real-time.
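For example, two panel queries you could start with (assuming kube-state-metrics is scraped, which the Prometheus community chart typically bundles by default) are:
sum(rate(http_requests_total[2m]))
kube_deployment_status_replicas{deployment="your-deployment-name"}
The first shows the incoming request rate driving the scaler, and the second shows the replica count responding to it.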
By following these steps, you’ll have KEDA scaling your workloads based on Prometheus metrics while visualizing and validating the scaling behavior in both Prometheus and Grafana. Let me know if you need more help!
!!!! Happy Learning with Techiev !!!!
Subscribe to our YouTube channel: https://www.youtube.com/@techieview729