Make activity check method configurable #43
@lpeabody yeah, I've been running into this myself recently. Unfortunately, there is no good way of knowing that something is happening inside `cli`, since no logs will be output there unless there is a failure or an access log entry. It's not documented, but you can mark a sandbox as permanent. We could add another label to disable/change the global hibernation policy, but I'd rather want to figure out a way to know whether something important is going on inside `cli`.
Being hibernated is less of an issue for us, as our infrastructure permits preserving sandbox environments between pipeline runs (we use Bitbucket Pipelines, and yes docksal/ci-agent#39 is still very much on my radar!). The reason for preserving is that, at the end of the day, it saves us build minutes. First use case (without preserving):
Second use case (with preserving):
So, permanent will at least guarantee I'm never deleting that sandbox from the file system, keeping the benefits from the second use case and speeding up the build fairly significantly; essentially, I'd only ever have to make sure the stack is running.

The only downside I see is that, well, the sandbox is permanent, and someone is going to have to remember to physically clean it up and wipe it from the server when the project is sunset. I'd rather not depend on people to consciously remember to do this when projects wrap up, because I know most people won't remember to do it ;)

If the goal is to allow vhost-proxy to reasonably identify sandboxes that should be cleaned up over time, then this is a use case that will need to be solved for. Happy to contribute where I can in this space, as this is a feature I badly want to see working; I fear becoming a pack rat otherwise :)
Could you add a log entry in `cli` whenever a command is executed there?
Looks like an interesting approach. I wonder if this is something that can be experimented with?
It is possible to forward stdout and stderr output from a command executed inside a container to that container's log stream.
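The forwarding idea above can be sketched as a tiny wrapper. This is a minimal sketch with hypothetical names (`LOG_SINK`, `log_activity_exec`); inside a real container the sink would typically be `/proc/1/fd/1`, the stdout of PID 1, which is the stream `docker logs` reads from.

```shell
# Minimal sketch; all names here are hypothetical, not part of Docksal.
# Inside a container, LOG_SINK would be /proc/1/fd/1 so that anything
# written to it shows up in `docker logs` for that container.
LOG_SINK="${LOG_SINK:-/proc/1/fd/1}"

log_activity_exec() {
  # Emit an activity marker into the container's log stream,
  # then run the wrapped command unchanged.
  echo "[activity] exec: $*" >> "$LOG_SINK"
  "$@"
}
```

A watcher like vhost-proxy would then see a fresh log line every time a command is run, without the command's own output being duplicated into the logs.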
So, I don't think we'd want to stream the output of every command into the logs. This can either be done in fin, OR we could add a shim script inside the images. I'm leaning towards the 2nd option, as that would move the complexity from fin to individual images.
It will be enough to just put an entry in the logs.
This would still require refactoring in `service-vhost-proxy/bin/proxyctl` (lines 141 to 145 at commit 2b442cc).
We may need to introduce another label identifying which containers within a project should be considered when determining project status.
A couple of scenarios I've come up with (all in regards to designating a specific service to be monitored by vhost-proxy):

Scenario 1: Make the activity-monitored service configurable (any service)

I think ideally we would add a dedicated label for this. In vhost-proxy, refactor the filter method to use that label. The dilemma here is that Docker Compose does not make it easy to attach labels to arbitrary service names.

Scenario 2: Designate either the cli or the web service
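For Scenario 1, the per-project configuration could look something like the fragment below. The label name `io.docksal.activity-monitor` is purely hypothetical — an illustration of the "add a label such as" idea, not an existing Docksal label.

```yaml
# docker-compose.yml fragment (hypothetical label name, for illustration only)
services:
  cli:
    labels:
      # Tell vhost-proxy to watch this service's logs for project activity.
      - io.docksal.activity-monitor=true
```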
@lpeabody I feel like there should be a single primary container in the stack that is used to determine the stack activity status. ATM, that is whichever container has the following two tags:
(See `service-vhost-proxy/bin/proxyctl`, lines 141 to 145 at commit 2b442cc.)
In the past, this used to be done by container name. You can assign these two labels to any container in your stack and it will be picked up for monitoring. We could replace the two-tag combination with a single tag. Now, I'm not sure what's going to happen if you have more than a single "primary" container in your stack defined using either of those approaches. This needs to be tested.
To be clear, I'm not advocating for more than one container to represent whether or not the project is active. My thinking was that it makes sense to have a single container determine project status (aligned with your thinking above), to indicate that container with a meaningful label name, and to filter on that label.

I see the cleanup logic kind of going like this: check the designated container's logs to decide whether the project is active, and clean up the whole project once it has been inactive past the dangling threshold.

TL;DR: definitely think a single, meaningfully named label is the way to go.
ok, that makes sense. I would even remove the existing two-tag combination in favor of a single label. This way, there is exactly one explicit way to designate the container that determines project activity.
Currently the calculation for active and dangling projects is based on the logs generated by containers with the virtual host label.
Our CI setup accommodates two use cases:

1. The sandbox is not preserved between pipeline runs.
2. The sandbox is preserved between pipeline runs.

In the case of 2, the web host never generates any logs. Nonetheless, we are using the cli container constantly, but after the dangling period has elapsed, the project's containers are removed, even though we're actively using the cli container for the project.
It would be nice to be able to configure cleanup policy on a per-sandbox basis, and choose between cli service and web service activity as the method for determining project inactivity.
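The policy described here ultimately boils down to a timestamp comparison against the monitored container's last log entry. A rough sketch, with hypothetical names and a made-up default timeout (the real thresholds live in proxyctl):

```shell
# Hypothetical sketch of the active/dangling decision; the names and
# default timeout are illustrative, not proxyctl's actual values.
INACTIVITY_TIMEOUT="${INACTIVITY_TIMEOUT:-3600}"  # seconds

project_status() {
  last_log_ts="$1"  # unix timestamp of the monitored container's last log line
  now_ts="$2"       # current unix timestamp
  if [ $(( now_ts - last_log_ts )) -le "$INACTIVITY_TIMEOUT" ]; then
    echo "active"
  else
    echo "dangling"
  fi
}
```

Making the check method configurable would then mean letting each project choose which container's last log timestamp feeds this comparison.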