Logo of Keda.

Recently, during the Microsoft //build 2019 conference, KEDA (Kubernetes-based Event Driven Autoscaling) was announced as a component bringing event-driven containers and functions to Kubernetes.

KEDA allows for fine-grained autoscaling (including to/from zero) of event-driven Kubernetes workloads. KEDA acts as a Kubernetes Metrics Server and lets users define autoscaling rules via a dedicated Kubernetes custom resource definition (CRD).
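To make that CRD concrete, here is a rough sketch of the ScaledObject resource that drives the autoscaling for an Azure Queue trigger. The names, queue, and replica counts are illustrative (not taken from the sample), and the apiVersion reflects the early KEDA releases:

```shell
# Write an illustrative ScaledObject manifest to a local file:
cat > scaledobject.yaml <<'EOF'
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: hello-keda
  namespace: hello-keda
  labels:
    deploymentName: hello-keda
spec:
  scaleTargetRef:
    deploymentName: hello-keda   # the k8s Deployment to scale
  pollingInterval: 30            # seconds between checks of the queue
  minReplicaCount: 0             # scale down to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: js-queue-items        # illustrative queue name
        connection: AzureWebJobsStorage  # name of the setting holding the connection string
EOF
# You would then apply it with: kubectl apply -f scaledobject.yaml
```

In practice you don't write this by hand for the sample: `func kubernetes deploy` generates an equivalent resource for you.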

Architecture diagram showing the different components of Keda in a Kubernetes cluster.

You can find six samples to get started:

I gave the JavaScript Azure Functions + Azure Queue sample a try. It showed me how easily I could host an Azure Function triggered when an item is added to an Azure Queue, and leverage Kubernetes resources to scale according to the load needed to process all the items in the queue. I won’t explain here how I did that, since it’s very straightforward if you follow the instructions of that sample. But below are a few variants I tried:

```shell
# Only install KEDA (not Osiris) for the purpose of this Azure Queue (not HTTP) sample:
func kubernetes install \
    --keda \
    --namespace keda
# See all the k8s resources deployed by the previous command:
kubectl get all,customresourcedefinition \
    -n keda

# Deploy my hello-keda sample in a specific k8s namespace:
func kubernetes deploy \
    --name hello-keda \
    --registry <your-docker-id> \
    --namespace hello-keda
# See all the k8s resources deployed by the previous command:
kubectl get all,ScaledObject,Secret \
    -n hello-keda

# Watch the pods, deployments and HPAs moving while adding items to the Azure Queue
# (you could even open 3 panes with tmux: https://en.wikipedia.org/wiki/Tmux):
kubectl get pod -w
kubectl get deploy -w
kubectl get hpa -w

# Build, tag and deploy a specific version, decoupling the build from the release:
docker build . -t <your-docker-id>/hello-keda:1
docker push <your-docker-id>/hello-keda:1
func kubernetes deploy \
    --name hello-keda \
    --image-name <your-docker-id>/hello-keda:1
```

I also learned a few things about the AzureWebJobsStorage setting/secret:

  • It’s the connection string (including the account key) used to access the Azure Queue, and we don’t want it stored in the Docker image nor in the Git repo, right? That’s where .dockerignore and .gitignore play an important role. The .dockerignore file excludes the local.settings.json file from the Docker image at build time, which is exactly what we want, perfect!
  • This AzureWebJobsStorage setting is stored as a k8s Secret and then referenced by the k8s Deployment
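Kubernetes Secrets store their values base64-encoded, not encrypted, which is worth keeping in mind. A hypothetical connection string (not a real key) round-trips like this:

```shell
# Illustrative connection string (not a real account key):
CONN='DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=abc123=='
# This is what ends up in the Secret's data field:
ENCODED=$(printf '%s' "$CONN" | base64 | tr -d '\n')
# And decoding it gives the original value back:
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"

# On a real cluster you could inspect it the same way (the secret name
# below is an assumption, yours may differ):
#   kubectl get secret hello-keda -n hello-keda \
#     -o jsonpath='{.data.AzureWebJobsStorage}' | base64 -d
```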

Some gotchas:

  • KEDA is a single, modular component that tries to do one thing well: provide event-driven scale. It has no dependencies and works with any Kubernetes cluster.
  • KEDA looks directly at the event sources (for example, messages in an Azure Queue) and scales pods up based on the outstanding “events” (for example, the length of an Azure Queue).
  • KEDA doesn’t use the Azure Functions runtime. It is an independent component that can scale up any Kubernetes deployment based on events. If the Kubernetes deployment happens to target a Functions container, then that will be scaled out.
  • There is some tooling in Azure Functions Core Tools (func kubernetes) to easily deploy a Functions container scaled through KEDA, but, again, the components are independent.
  • Osiris and Virtual Kubelet, alongside HPA and KEDA, look really promising, bringing the concept of serverless containers to Kubernetes.

More resources:

Hope you enjoyed this blog article and this new (and for now experimental) open source project, which brings more capabilities and more workloads to Kubernetes.

Cheers! ;)