
Kubernetes Workload Autoscaler #1735

Closed
tomkerkhove opened this issue Apr 9, 2021 · 10 comments

Comments

@tomkerkhove
Member

tomkerkhove commented Apr 9, 2021

Proposal

When workloads on Kubernetes depend on each other, they typically need to scale accordingly.

This helps to avoid flooding downstream services because of scaling out other components.

For example - Workload A processes a queue and interacts with workload B. Workload A can autoscale based on queue depth and workload B on CPU. But in some scenarios, you want to add 1 instance of workload B for every 4 instances of workload A.

Scaler Source

Number of instances of another workload that depends on the current workload

Scaling Mechanics

Add 1 instance of your workload for every n instances of another workload.

Authentication Source

Kubernetes service account
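The "1 instance for every n instances" mechanic above reduces to a ceiling division over the observed replica count. A minimal Go sketch of that calculation, under the assumption that the scaler exposes the other workload's instance count as a metric (the function name `desiredReplicas` is illustrative, not part of KEDA):

```go
package main

import "fmt"

// desiredReplicas returns how many replicas of the dependent workload
// are needed when we want 1 instance per `ratio` instances of the
// other workload: ceil(observed / ratio), computed in integers.
func desiredReplicas(observed, ratio int) int {
	if ratio <= 0 {
		return 0 // guard against a misconfigured ratio
	}
	return (observed + ratio - 1) / ratio
}

func main() {
	// Example from the proposal: 1 instance of workload B
	// for every 4 instances of workload A.
	fmt.Println(desiredReplicas(4, 4)) // 1
	fmt.Println(desiredReplicas(9, 4)) // 3
}
```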

@JorTurFer
Member

Any news about this? We have a use case where this option could help. We have a "RabbitMQ" hub which scales depending on the queues, but that hub makes calls to other services. We currently scale those services based on CPU; this scaler could improve our scaling rules by taking into consideration both the CPU and the number of instances in the hub.
If you still think this is useful, I can try to implement it during the summer :)
WDYT?
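The combination described here works because the HPA evaluates each metric independently and applies the highest resulting replica count, so a CPU trigger and a workload-count trigger can coexist on one ScaledObject. An illustrative sketch of that "take the max" rule (not KEDA or HPA source code):

```go
package main

import "fmt"

// desiredFromMetrics mimics how the HPA combines multiple metrics:
// each metric independently proposes a replica count, and the
// highest proposal wins.
func desiredFromMetrics(perMetric []int) int {
	max := 0
	for _, d := range perMetric {
		if d > max {
			max = d
		}
	}
	return max
}

func main() {
	// e.g. the CPU metric suggests 3 replicas while the
	// hub-instance-count metric suggests 5: scale to 5.
	fmt.Println(desiredFromMetrics([]int{3, 5})) // 5
}
```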

@tomkerkhove
Member Author

My use case is similar indeed, and we are open to contributions, so that would be great. Thank you @JorTurFer!

@JorTurFer
Member

Hi @tomkerkhove
I will try to do it; it's a good task for improving my Golang :)

@JorTurFer
Member

JorTurFer commented Aug 2, 2021

I'm starting on this and have a question.
Which would be better: using match selectors and counting the pods directly, or using the controller (Deployment or StatefulSet) to get its child pods?
I don't have a strong opinion, but I guess selectors are more powerful than the controller.
WDYT?
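The selector approach boils down to listing pods whose labels satisfy an equality-based selector and counting the matches. A self-contained Go sketch of that matching logic, standing in for what the scaler would do via a `labelSelector` on a pod list call (the function names are hypothetical, not KEDA APIs):

```go
package main

import "fmt"

// matchesSelector reports whether a pod's labels satisfy every
// key/value pair in the selector (the equality-based subset of
// Kubernetes label selectors).
func matchesSelector(podLabels, selector map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

// countMatching counts pods whose labels match the selector. In the
// real scaler this would be a pods List call filtered by the selector,
// authenticated via the Kubernetes service account.
func countMatching(pods []map[string]string, selector map[string]string) int {
	n := 0
	for _, labels := range pods {
		if matchesSelector(labels, selector) {
			n++
		}
	}
	return n
}

func main() {
	pods := []map[string]string{
		{"app": "workload-a", "tier": "worker"},
		{"app": "workload-a", "tier": "worker"},
		{"app": "workload-b"},
	}
	// Selectors are controller-agnostic: these pods could belong to
	// one Deployment, several, or a StatefulSet.
	fmt.Println(countMatching(pods, map[string]string{"app": "workload-a"})) // 2
}
```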

@tomkerkhove
Member Author

I would personally rely on the controller. What about you @zroubalik ?

@zroubalik
Member

Selectors, on the other hand, could be more generic - we can match multiple Deployments 🤷‍♂️

@JorTurFer
Member

Selectors, on the other hand, could be more generic - we can match multiple Deployments 🤷‍♂️

We can match multiple Deployments, or workloads in general, not only Deployments; that is why I'm hesitating...

@zroubalik
Member

Selectors, on the other hand, could be more generic - we can match multiple Deployments 🤷‍♂️

We can match multiple Deployments, or workloads in general, not only Deployments; that is why I'm hesitating...

Yeah, that's what I thought.

@JorTurFer
Member

Okay,
if there isn't a strong opinion about it, I think I will take the match-selector approach. It offers exactly the same capabilities as using the controller, plus additional ones if you need them.
How do you feel about this approach @tomkerkhove?

@tomkerkhove
Copy link
Member Author

Sounds good to me!
