
Make it possible to re-use active runners for a few workflow runs #4

Open
machulav opened this issue Dec 23, 2020 · 5 comments
Labels: enhancement (New feature or request)

machulav (Owner) commented Dec 23, 2020

Notes

  • in this case, it makes sense to terminate the runner using an idle timeout (e.g. terminate the runner if it hasn't run any job in the last 10 minutes)
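A minimal sketch of that idle-timeout logic (all names here are hypothetical, not part of the action): a small daemon on the instance would call `record_activity()` whenever the runner picks up a job and poll `should_terminate()` periodically, terminating the instance when it returns true.

```python
import time


class IdleWatchdog:
    """Decide when an idle runner should terminate itself.

    Sketch only: in real use, `record_activity()` would be hooked to the
    GitHub runner picking up a job, and a loop would poll
    `should_terminate()` and then shut the instance down.
    """

    def __init__(self, idle_timeout_seconds=600, clock=time.monotonic):
        self.idle_timeout = idle_timeout_seconds
        self.clock = clock  # injectable clock makes the logic testable
        self.last_activity = clock()

    def record_activity(self):
        # Called whenever the runner starts (or finishes) a job.
        self.last_activity = self.clock()

    def should_terminate(self):
        # True once the runner has been idle for the full timeout.
        return self.clock() - self.last_activity >= self.idle_timeout
```

With the default of 600 seconds, this matches the "no job during the last 10 minutes" example above.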
@machulav added the enhancement label Dec 23, 2020
vroad commented Dec 31, 2020

How could this be implemented? Something like Cluster Autoscaler?

https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html

To make this work the Cluster Autoscaler way, you need to set up an Auto Scaling group and a serverless app that terminates idle nodes.

Or, you could create a CloudWatch alarm that scales out the ASG when an SQS queue has pending messages, and scales in when the queue stays empty for a while.

A standard (non-FIFO) queue can deliver the same message twice, so a FIFO queue would work better.
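A rough sketch of what the scale-out alarm could look like. The queue name and scaling-policy ARN are placeholders; in practice you would pass the returned dict to boto3's `cloudwatch.put_metric_alarm(**params)` (a mirrored alarm with `LessThanOrEqualToThreshold` would drive the scale-in side).

```python
def scale_out_alarm_params(queue_name, scale_out_policy_arn):
    """Build kwargs for CloudWatch put_metric_alarm: fire when the runner
    job queue has any visible (pending) messages.

    Sketch only — queue_name and scale_out_policy_arn are placeholders.
    """
    return {
        "AlarmName": f"{queue_name}-pending-jobs",
        "Namespace": "AWS/SQS",
        "MetricName": "ApproximateNumberOfMessagesVisible",
        "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
        "Statistic": "Maximum",
        "Period": 60,               # evaluate the queue depth every minute
        "EvaluationPeriods": 1,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",  # i.e. > 0 messages
        "AlarmActions": [scale_out_policy_arn],
    }
```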

vroad commented Dec 31, 2020

Would calling this action in stop mode no longer be required if we used those methods?

If we create a Lambda function that periodically watches the runner, the stop action becomes unnecessary.
An SQS message's retention period can be as short as 60 seconds; we could rely on that to empty the queue, but setting too short a value might terminate the instance too early. Or we could consume the message manually and use retention as a fallback, in case the stop step fails?
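The "consume manually, retention as fallback" idea could look roughly like this (sketch only; in real use `sqs_client` would be a boto3 SQS client, and the queue's `MessageRetentionPeriod` would be set near its 60-second minimum so unacknowledged messages expire on their own):

```python
def ack_job_message(sqs_client, queue_url, receipt_handle, stop_succeeded):
    """Delete the SQS message only when the runner's stop step succeeded.

    If the stop step failed, the message is left on the queue so the
    queue's MessageRetentionPeriod expires it as a fallback.
    """
    if stop_succeeded:
        sqs_client.delete_message(
            QueueUrl=queue_url, ReceiptHandle=receipt_handle
        )
        return "deleted"
    return "left-for-retention"
```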

machulav (Owner, Author) commented Jan 7, 2021

@vroad thank you for your ideas!

I had a somewhat different solution in mind:

  • At the beginning of your workflow, you still run the ec2-github-runner action. But in addition to the current parameters, you specify the number of EC2 instances you need to handle the workflow and the label that will be assigned to each runner. Then you use that label in the following jobs of the workflow to run them on the newly created runners.
  • At the beginning of the execution, the action checks how many runners are already active with the specified label and creates the rest if required.
  • When the action starts a new EC2 instance, some special code can run on it that monitors the active processes. If the EC2 instance is idle longer than some specified time, it can terminate itself and deregister the self-hosted runner on GitHub. However, this is the least clear part of this solution and should be verified properly.
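The "check how many runners are already active" step could be sketched like this, assuming the runner list comes from GitHub's `GET /repos/{owner}/{repo}/actions/runners` API (the function name and shapes below are illustrative, not part of the action):

```python
def instances_to_create(desired_count, runners, label):
    """Return how many new EC2 instances are still needed.

    `runners` is the list from GitHub's
    GET /repos/{owner}/{repo}/actions/runners API; a runner counts as
    active when it is online and already carries `label`.
    """
    active = sum(
        1
        for runner in runners
        if runner.get("status") == "online"
        and any(lbl["name"] == label for lbl in runner.get("labels", []))
    )
    # Never return a negative count when there are already enough runners.
    return max(desired_count - active, 0)
```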

In such a way, you should be able to gain the following benefits:

  • re-use runners for a few workflow runs;
  • as a consequence, save time at the beginning of some workflow runs, since the runners will already be created;
  • the idle runners will be removed automatically;
  • for each workflow, you can specify different configurations and use different runners.

Does it make sense?

vroad commented Jan 8, 2021

  • When the action starts a new EC2 instance, some special code can run on it that monitors the active processes. If the EC2 instance is idle longer than some specified time, it can terminate itself and deregister the self-hosted runner on GitHub. However, this is the least clear part of this solution and should be verified properly.

To reliably stop idle instances, the monitoring program should run outside of the instance.
Otherwise, if the instance becomes unresponsive for some reason, it won't terminate.

If the instance is in an ASG, unhealthy instances get terminated, and new instances come up as long as the desired capacity is greater than 0.

AWS doesn't always mark unresponsive instances as unhealthy, though.
To stop such instances you'll need a custom health check. To save cost we can't keep an ALB running, perhaps? So the only option left is a custom Lambda-based health check: instances that do not report their status correctly should be terminated.
This could be done without an ASG, but then no replacement instances would come up.
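The core of such a Lambda-based health check might be a heartbeat comparison like the one below (sketch only: instances would report a heartbeat somewhere like DynamoDB or CloudWatch, and the Lambda would pass the stale IDs to `ec2.terminate_instances(InstanceIds=...)`; all names here are illustrative):

```python
def stale_instances(heartbeats, now, max_age_seconds=300):
    """Return instance IDs whose last status report is too old.

    `heartbeats` maps instance ID -> last report time (epoch seconds).
    Anything silent for longer than max_age_seconds is considered
    unresponsive and a candidate for termination.
    """
    return [
        instance_id
        for instance_id, last_seen in heartbeats.items()
        if now - last_seen > max_age_seconds
    ]
```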

@machulav self-assigned this Mar 16, 2021
jpalomaki (Contributor) commented Jul 18, 2021

Just a random thought: would it be possible to use a mix of scheduled and workflow-run event-triggered GitHub workflows to manage the pool of self-hosted runners (using ec2-github-runner action to start/stop them)?
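A configuration sketch of that idea (input names follow ec2-github-runner's README as I recall them, and all IDs and secret names below are placeholders — double-check against the current docs):

```yaml
# Illustrative only: a workflow that tops up the runner pool on a schedule
# and after each run of the main "CI" workflow.
name: runner-pool
on:
  schedule:
    - cron: '*/15 * * * *'        # periodically top up the pool
  workflow_run:
    workflows: ["CI"]             # react when the main workflow finishes
    types: [completed]
jobs:
  start-runner:
    runs-on: ubuntu-latest
    steps:
      - name: Start an EC2 runner
        uses: machulav/ec2-github-runner@v2
        with:
          mode: start
          github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
          ec2-image-id: ami-0123456789abcdef0
          ec2-instance-type: t3.medium
          subnet-id: subnet-0123456789abcdef0
          security-group-id: sg-0123456789abcdef0
```

A matching stop-side job (mode: stop) would still be needed somewhere, e.g. driven by the idle-timeout discussed above.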
