
Suppose I have a RabbitMQ instance and a set of pods that pick messages from RabbitMQ and process them. How do I make Kubernetes increase the number of pods as the queue size increases?

(I'm mentioning RabbitMQ, but that's just an example. Pick your favorite message queue software or load balancer if you wish.)

Likk

5 Answers


The top-level solution to this is quite straightforward:

Set up a separate container that is connected to your queue, and uses the Kubernetes API to scale the deployments.

Some solutions to this problem already exist. They don't look actively maintained or production-ready, but they might help:
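The approach itself is small enough to sketch. Below is a minimal, illustrative Python version, not taken from any of the projects above: the deployment name, thresholds, and the `get_queue_length` callable (e.g. one that polls RabbitMQ's management API) are all assumptions, and it shells out to `kubectl`, which the container would need along with RBAC permission to scale the deployment.

```python
import math
import subprocess
import time

def desired_replicas(queue_len, msgs_per_worker=10, min_workers=1, max_workers=20):
    """Scale proportionally to queue depth, clamped to [min_workers, max_workers]."""
    wanted = math.ceil(queue_len / msgs_per_worker)
    return max(min_workers, min(max_workers, wanted))

def scale(deployment, namespace, replicas):
    # Requires kubectl in the image and RBAC permission to patch
    # the deployment's scale subresource.
    subprocess.run(
        ["kubectl", "scale", f"deployment/{deployment}",
         "--namespace", namespace, f"--replicas={replicas}"],
        check=True,
    )

def run(get_queue_length, deployment="worker", namespace="default", interval=30):
    # get_queue_length is a callable you supply, e.g. one that polls
    # RabbitMQ's management API for the queue's message count.
    while True:
        scale(deployment, namespace, desired_replicas(get_queue_length()))
        time.sleep(interval)
```

The clamping keeps the deployment from scaling to zero (unless you want that) or stampeding past your capacity when the queue spikes.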

Kenneth Lynne
  • I had same problem, read this post, I was able to use onfido, on AKS, worked great. (Didn't work out of the box though, but wasn't hard to debug, RBAC was missing patch statement, and it didn't like loading RMQ username and password from secret for some reason.) – neoakris Dec 25 '18 at 00:15

You can use KEDA.

KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event-driven scale for any container running in Kubernetes.

It supports RabbitMQ out of the box. You can follow a tutorial which explains how to set up simple autoscaling based on RabbitMQ queue size.
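With KEDA installed, the setup boils down to a `ScaledObject` pointing at your consumer deployment. A sketch of what that could look like, with illustrative names and values (the deployment, queue, and `RABBITMQ_URL` environment variable are assumptions for your setup):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker              # your consumer Deployment
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq
      metadata:
        queueName: worker-queue
        mode: QueueLength     # scale on the number of messages in the queue
        value: "10"           # target messages per replica
        hostFromEnv: RABBITMQ_URL   # amqp:// connection string
```

KEDA then drives an HPA for you, growing the deployment as the queue backs up and shrinking it (down to `minReplicaCount`) as it drains.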

reflex0810

You could use this tool: https://github.com/XciD/k8s-rmq-autoscaler

It creates a pod in your cluster that watches your deployments and scales them according to their configuration.

You then just need to add a few annotations to your deployments, and the autoscaler will pick them up:

kubectl annotate deployment/your-deployment -n namespace \
    k8s-rmq-autoscaler/enable=true \
    k8s-rmq-autoscaler/max-workers=20 \
    k8s-rmq-autoscaler/min-workers=4 \
    k8s-rmq-autoscaler/queue=worker-queue \
    k8s-rmq-autoscaler/vhost=vhost
XciD

You can write a very simple controller that watches the queue size for your specific application and then changes the number of desired replicas of your replication controller / replica set / deployment.

The built-in horizontal pod autoscaling is soon gaining support for custom metrics, but until then this is pretty simple to program/script yourself.
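For reference, custom and external metrics support has since landed in the HPA (`autoscaling/v2`). Assuming you run a metrics adapter (e.g. prometheus-adapter) that exposes the queue depth as an external metric, the metric name below being an assumption about your adapter's configuration, a sketch could look like:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: rabbitmq_queue_messages_ready   # exposed by your metrics adapter
        target:
          type: AverageValue
          averageValue: "10"    # target messages per replica
```

With `AverageValue`, the HPA divides the queue depth by the current replica count, so it converges on roughly one replica per 10 ready messages.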

Robert Bailey

You can use Worker Pod Autoscaler (WPA):

https://github.com/practo/k8s-worker-pod-autoscaler

RabbitMQ support is still pending, but it can be added.

Alok Kumar Singh