Proposal: docker-compose events #1510
Comments
Hm, fancy! If accompanied by some clear examples, this would be a great feature. Given that Compose doesn't have a service/daemon, how would this work? (Just wondering.) Also: will subscribers listen to events for all projects, or receive only events for a specific project?
I think it should be just the specific project; otherwise it's not really more useful than the existing `docker events`.
I think it would work similar to `docker events`.
Ah, yes. Makes sense. Count me interested :)
+1
+1
Yeah, actually, I prefer this concept to the plain "allow running scripts before/after" nonsense.
Example usage: on exit of my web server (in a Docker container), run a script to close port 80 on the firewall.
I started to look into this, but I think to do it properly, the Docker remote API needs to support filtering by labels.
@dnephin With filtering, you mean filtering events based on labels?
@thaJeztah Exactly. https://docs.docker.com/reference/commandline/events/ only supports filtering by image ID, container ID, or event type.
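For reference, the filtering that `docker events` did support at the time looked like this (a sketch; the container name is hypothetical):

```sh
# Supported filters: container, image, and event type...
docker events --filter 'container=myproject_web_1' --filter 'event=start'

# ...but not the labels Compose attaches to its containers (e.g.
# com.docker.compose.project), which is what Compose would need here.
```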
@dnephin Feature requests are welcome 👍 Sounds like a nice feature for contributors to work on as well (clear goal).
Really in need of this feature
👍
@alikor @manuelkiessling I've coded something similar in the JHipster project. The scripts run in a second container:
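The original snippet was lost in extraction; a minimal sketch of the pattern described, in the v1 Compose format of the time, with hypothetical image and script names:

```yaml
# A one-shot "runner" service executes the setup scripts against the
# database service, then exits; names and paths are illustrative only.
db:
  image: postgres:9.4

db-scripts:
  image: myorg/db-scripts      # hypothetical image bundling the scripts
  links:
    - db
  command: ./run-migrations.sh
```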
That's similar to what tools like FlywayDB/Liquibase offer for a SQL database.
I have to put in my +1 and a donation of 2c... I find that with many containers in docker-compose on Ubuntu, the Linux connection tracking can get in the way, so an on-start or on-pre-start hook to execute `conntrack -F` is a must for me. For now, to ensure ops get this right, I have to provide a start script (sketched below) and ask them to avoid running docker-compose directly.
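A minimal sketch of the kind of wrapper start script described above, assuming `conntrack` is installed on the host:

```sh
#!/bin/sh
# Start script ops run instead of invoking docker-compose directly:
# flush the connection-tracking table, then bring the project up.
set -e
conntrack -F
exec docker-compose up -d "$@"
```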
The biggest value of Compose is that it is self-contained; if I need to run other tools to set up a deployment, I might as well use a tool that covers everything.
I agree that a "post-run" mechanism for running provisioning steps would be amazing and would solve a great deal of deployment issues. While it's nice to say "just build it into your Dockerfile," what if I didn't write the Dockerfile? What if I'm using a provided container with a set entrypoint and I don't want to have to edit or wrap the upstream Dockerfile? The ability to arbitrarily fire off commands post-entrypoint seems like a basic piece of functionality to me.

I must admit, I get a little tired of seeing threads like this on GitHub where a whole host of users are telling the developers how useful a basic feature would be, only to be met with "do it this way instead." We know our use cases, we understand our needs, and we're pleading with you to provide a simple and highly sought-after piece of functionality to simplify our deployments. Why is there so much resistance to this? I get that just because the feature can be considered simple, the implementation of it may not be simple, but come on, man. This is something that a great deal of docker-compose users obviously have a real need for.
This has already been solved and closed years ago. There isn't resistance; there's always a more clever, better way to do what you're thinking of doing. Also, please don't tell the maintainers of a project or repo that you're sick of seeing requests for a simple solution: if it was so simple, you should be able to do it yourself. To extend that note, not every suggestion or feature even fits any project; it may be a singular need that can and should be solved other ways, or that breaks configuration or conformity, or any other myriad reasons that aren't even specifically technical in nature. You could also just write a bash script around it.

Remember, containers are not your VM...
@relicmelex You need to go through #1809 to get a better idea of what is going on.
@relicmelex I understand all of that, and I get that a feature that seems simple may, in fact, be very complicated to implement, and may not fit a project, but I commonly see developers arguing against something that dozens and dozens of users are requesting, for nebulous reasons. I apologize if I came off as demanding; it is not my intention to make demands of busy developers, though I did intend to express my frustrations about a trend I see among some of the tools I consume.

What is the solution? Because I'm still looking for it. If you could point me in the direction of a best-practices way to do this, it would be greatly appreciated; maybe it's someplace obvious, but I haven't come across it yet. I have a whole bunch of stuff to build, and guess what: I expect the tools I consume to be able to handle some of these features without my taking the time to implement them myself, since they often have whole development teams committed to them while I'm here on my own, doing the best I can with what little time I have. If using spawn and expect is doing something wrong, what is the right way to run an arbitrary command on a container after it's running? I'm absolutely amenable to using whatever the correct solution is, if it already exists; it may be that my frustration is simply a lack of google-fu (google searches led me to issue #1809, which in turn led me here), or that I'm not reading some section of important documentation somewhere. I'd definitely appreciate any help you can provide, since you seem to be aware of the solution.

As I gain a better understanding of these tools, I'm thinking I just need to wrap the source docker container in a Dockerfile that includes the final provisioning steps at build time; does that sound correct? If so, I may have been silly to get so frustrated in the first place.
@TalosThoren Can you try and lay out what you're trying to accomplish as an end objective, and then the steps you're currently taking? Because usually you can just write a script to execute as a step in the container. Maybe as part of the independent Dockerfile(s), or a bash script to run after build... maybe mount the volume on start-up and have it run a script as the CMD option? Let's explore.

@omeid I've been through all of that, and I stick by what I said... Notice it's been over two years since my original post here, as this issue came up for me in a different annoying way. Instead of breaking pattern, I started using docker-compose in a more structured way and linked some containers to achieve what I was trying to do. It (whatever it is) can be done without that feature, I'm sure of it.
Side note, @systemmonkey42: you may want to use env vars in docker-compose; if containers are linked, each container's hostname is its name from the docker-compose file. Maybe that will solve your cross-container issues?
@relicmelex Every feature that Compose has can be done without it. The argument that you can hack your way around any missing feature is pointless. I still think that #1809 was closed unreasonably; @dnephin really wants to promote his tool, dopey or whatever it is. And on the original issue, I will just reiterate my question; feel free to answer it, @relicmelex.
@omeid You're correct, Compose can be done without, and Docker can be done without; even computers can be done without... I think you missed my point. I never suggested any hack of any kind; I'm suggesting you use the correct tool for the job. Instead of antagonizing and talking about the problem, try to find a solution. This is just pointless banter now.
@relicmelex Thanks for following up. My use case, in this instance, is to simply create a table in a CrateDB database upon initial deployment, for use with the crate_adapter for persisting Prometheus metrics. The cratedb service needs to be running already, and I'm pretty sure the nature of CrateDB means I only need to do it on the first container to stand up in the cluster. The intention is to write a script that checks if a table exists, after allowing some time for the container to join the cluster using its built-in service discovery, and creates the table if it does not exist. I may be able to check whether the container has been elected a master as the sentinel, or use that as an additional sentinel for table creation, but I haven't got that far yet; I'm mainly doing manual lab work to ensure I understand the deployment steps presently. I will have to write a dockerfile for the crate_adapter, as they don't presently supply a docker image, but that will be simple. I actually wonder if it would be appropriate to install the table-creation script in that image as well.

I've run into many situations where running an init script of some kind after deployment of a container would be desirable, as well. I think I agree with @omeid that this clearly falls within the scope of container deployment and orchestration, but I also see your point that there are probably best-practices ways to implement this kind of thing without incorporating a "run-after" or some such capacity in docker-compose. I think I see both sides of this argument, and I know which one I lean towards, though I may begin to feel differently once I've learned more about implementing this kind of build.
@TalosThoren Thanks for being so polite, you make me want to help you. I imagine you also want to check to see if that table is already available, so you don't accidentally destroy data or just have a failed step? Then create the table, then maybe even seed some data? (Say it's a credentials table and you need a 'system' type credential so you can always log into a platform.) I'm doing this right now with dynamodb-local and Elasticsearch, then hooking services to them in a docker-compose environment, so I'm certain it can be done. My approach is to create multiple docker containers and point to those in my docker-compose instead of just the default docker container. It takes a little more work, but it really allows you to customize your environment and its ability to communicate across containers; a sketch follows.
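The code block from this comment was lost in extraction; a minimal sketch of the pattern in the v1 Compose format of the time, with hypothetical paths and service names:

```yaml
# Each service builds from its own customized Dockerfile instead of a
# stock image; the seeding happens as build steps in those Dockerfiles.
dynamodb:
  build: ./docker/dynamodb        # custom Dockerfile that seeds tables
elasticsearch:
  build: ./docker/elasticsearch   # custom Dockerfile with index setup
app:
  build: ./docker/app
  links:
    - dynamodb
    - elasticsearch
```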
In the custom Dockerfiles I use the normal Dockerfile directives to run the commands that build the environment I'm after, inserting the database build/seed as one of the steps. If this gets unruly, I turn it into a bash script, or many bash scripts with dedicated purposes, so that the build can use caching when you make smaller changes.
Thanks again @relicmelex, I'll have to think through that, and I may come back with some questions, but that approach gives me a lot to think about. I really appreciate you sharing your expertise with this.
@relicmelex I wanted to follow up and let you, and anyone else who stumbles their way here from a Google search, know my results. Using your method, it proved trivial to create a short-lived container that simply runs a bash script to perform the necessary bootstrapping operations. I wrote a script that awaits availability of the containerized service (which happens to be a database) that I need to run bootstrapping operations against, then queries the database for the table in question. It logs what it finds, creates what's missing, and exits gracefully; a sketch of the idea is below. Thanks again for assisting in a long-closed issue; it took some outside perspective to get a better grasp on how I should be thinking about containerized code execution.
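No script was included in the comment; a minimal sketch of such a bootstrap script, assuming a CrateDB-style HTTP SQL endpoint (the host, port, and table definition are hypothetical):

```sh
#!/bin/sh
# Runs in a short-lived container: wait for the database service to come
# up, create the table only if it's missing, log the outcome, exit cleanly.
set -e

until curl -fs http://cratedb:4200/ > /dev/null; do
  echo "waiting for cratedb..."
  sleep 2
done

# CrateDB accepts SQL over HTTP at /_sql; IF NOT EXISTS keeps it idempotent.
curl -fs -H 'Content-Type: application/json' \
  -d '{"stmt": "CREATE TABLE IF NOT EXISTS metrics (ts TIMESTAMP, value DOUBLE)"}' \
  http://cratedb:4200/_sql

echo "bootstrap complete"
```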
@TalosThoren You could come up with a hundred kinds of hacks to implement this feature, but a hack is still a hack: you have to explain it to people who use your project, instead of expecting it as part of understanding Docker Compose. That is the major difference. When I use docker-compose, I expect my colleagues and collaborators to know or learn Compose, and docker-compose is well documented; this means I don't have to document my hack on every project, nor use some promoware like @dnephin's dopy or whatever it is, which may or may not be documented properly and could be gone any moment without much of a community to keep track of it. You could argue against every single feature, up to and including the entirety of docker-compose itself, with that same logic.
@omeid Just because you don't understand it doesn't make it a hack... end of conversation.
@relicmelex That is a childish reply. I have deployed a very similar hack multiple times, and that is the exact reason why I need the feature proposed here.
@omeid, hey man, I'm on your side. I think this needs to be a feature in the docker-compose files, but @relicmelex gave me a solution that I think is quite robust and that will serve me well into the future as I implement work I need done today. I can't wait around for the development team to decide they want to implement something that I'm happy about; I've got stuff to build.

I'm not convinced this closed thread is the right place to get the development team's attention regarding this feature request, so I don't think it's very productive to continue to argue for it here, even though I agree that post-service-launch provisioning should probably be a thing docker-compose supports. I'm less convinced it's critical to prioritize it than I was at the beginning of this conversation, but I still think it's a long-overdue feature that has been summarily dismissed for poorly argued reasons. I absolutely agree with your sentiment that "use a bash script" is a bit of a cop-out argument. The fact of the matter is that should we see support for post-service-launch provisioning find its way into docker-compose, we'll be supplying bash scripts as the provisioners anyway. It could be said that we're simply asking for a more built-in way to deliver and execute those bash scripts. I definitely consider what I ended up implementing a workaround for missing functionality, but it works well, and it's a solid standard for the time being.
+1
+1
What about taking advantage of an alias? Still hackish, but it solves the issue now. Add an alias like the one below, place the script in your path somewhere, and make it executable (`chmod 755 docker-compose-hooked`):
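The alias and script themselves were stripped from this comment; a plausible minimal reconstruction, using the ssh-key copy mentioned in the follow-up (paths are illustrative):

```sh
# In your shell profile: route docker-compose through the wrapper
alias docker-compose='docker-compose-hooked'
```

```sh
#!/bin/sh
# docker-compose-hooked: run pre-steps, then hand off to the real binary.
cp ~/.ssh/id_rsa ./build/id_rsa          # the "copy your ssh key" step
exec /usr/local/bin/docker-compose "$@"  # adjust path to your install
```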
You can then do a normal `docker-compose build` and it will copy your SSH key first.
In my particular case, something like this would work. But I must point out that, at least for me, the whole idea of having a hook inside the Compose file is precisely to avoid another step that every developer in my team would need to take. Let's assume I create this alias and my problem is solved; then a developer doesn't follow along, and I'm back to square one. If I could add a hook inside docker-compose.override.yml and commit it to my Git repository, that would pretty much solve the issue, and I'd never have to second-guess whether my team complied with a step-by-step "set up your development environment" guide... Anyhow, that is my motivation for adding a plea for this feature. I also need to run stuff on the host machine before/after docker-compose runs.
From @TalosThoren #1510 (comment) above:
I found this good enough for my use case; just leaving it here as that comment didn't include an immediate example. I needed to set up an initial solr directory with a specific config schema for an older solr image that needed a mount, so this is what I ended up doing:

```yaml
version: "3"
services:
  setup:
    image: alpine:latest
    volumes:
      - ./:/mnt/setup
    command: >
      ash -c "mkdir -p /mnt/setup/.local/solr/data &&
      cp -R /mnt/setup/sites/all/modules/search_api_solr/solr-conf/3.x /mnt/setup/.local/solr/conf"
  solr:
    image: geerlingguy/solr:3.6.2
    depends_on:
      - setup
    ports:
      - "8900:8983"
    restart: always
    volumes:
      - ./.local/solr:/opt/solr/example/solr:cached
    command: >
      bash -c "cd /opt/solr/example &&
      java -jar start.jar"
```
@hanoii Uuuhhh!!! I love that. Thank you :)
There have been a few requests for supporting some form of "hooks" system within Compose (#74, #1341).

A feature which runs commands on the host would add a lot of complexity to Compose and the Compose configuration. Another option is to support these features by providing a way for external tools to run the command, triggered by an event.

`docker events` provides an event stream of all Docker events, but would still require filtering. `docker-compose events` could provide a similar interface by filtering the events returned by `/events` and returning a stream of only the events related to the active Compose project. The event stream could also include a new field `"service": <service_name>`, in addition to the fields already provided by `/events`.
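Not part of the original proposal text: a sketch of what such a stream might look like, combining the fields the `/events` endpoint returned at the time with the proposed `service` field (all values are hypothetical):

```sh
$ docker-compose events
# hypothetical output, one JSON object per line:
{"status": "start", "id": "4386fb97867d...", "from": "example/web:latest", "time": 1433121843, "service": "web"}
{"status": "die", "id": "0ddba11c3a1e...", "from": "example/worker:latest", "time": 1433121901, "service": "worker"}
```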