Containers created from gdcc/dataverse:unstable image do not work as expected #9769
For now I'm poking at this and leaving some notes here: https://dataverse.zulipchat.com/#narrow/stream/375812-containers/topic/.239769.20unstable.20images @GPortas I believe that during the container meeting I said it's working for me. I confused myself. Without thinking, I had built the image locally. Here's the digest:
It's failing with this...
...so I think a Payara 6 branch got into the unstable image (as we talked about in the meeting). The problematic push you linked to came from fedb103, which is a Payara 6 branch. As I said in Zulip, I'm having trouble linking up the sha256 digests, though. The path from a push to a broken image on my desktop is not clear to me.
Anyway, regardless of my inability to trace this thing, is the following a problem?
fedb103 is not a pull request. It's just a push to a branch. But it was problematic? And kicked off a run that pushed to Docker Hub? Do we need to tighten up this logic, perhaps? The line is here:
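The workflow line itself isn't quoted above, but as a sketch, a guard of the following shape would reproduce the behavior described: the workflow fires on pushes to any branch, and the deploy job only checks that the event is not a pull request, so a push to a feature branch like fedb103's still reaches Docker Hub. (The trigger filters, job name, and step below are assumptions for illustration, not the actual Dataverse workflow.)

```yaml
# Hypothetical workflow sketch showing why a branch push could publish an image.
on:
  push:            # fires for pushes to ANY branch, not just develop/master
  pull_request:

jobs:
  deploy:
    # Problem: this only excludes PR events, so pushes from
    # arbitrary branches still run the Docker Hub push step.
    if: ${{ github.event_name != 'pull_request' }}
    runs-on: ubuntu-latest
    steps:
      - name: Push unstable image
        run: docker push gdcc/dataverse:unstable
```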
Nice to see we are coming to the same conclusion here. Indeed, this line is the problem. It's kind of easy to fix (simply detect whether we are on develop or master). I would not alter the GHCR login thing, to avoid double pushes: one from a PR, one from a push. Does that make sense?
By ensuring pushes to Docker Hub are only executed for non-PR events based on the develop or master branch, we avoid breaking the images on the Hub: simply skip the deploy job when the push targets any other branch.
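As a sketch of that fix, the deploy job's condition could additionally check the pushed-to ref, so only develop and master publish to Docker Hub. (The job name and exact expression are assumptions; the real workflow may structure this differently.)

```yaml
# Hypothetical fix: skip the deploy job unless the non-PR event
# is a push to develop or master.
jobs:
  deploy:
    if: ${{ github.event_name != 'pull_request' && (github.ref == 'refs/heads/develop' || github.ref == 'refs/heads/master') }}
    runs-on: ubuntu-latest
    steps:
      - name: Push unstable image
        run: docker push gdcc/dataverse:unstable
```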
What steps does it take to reproduce the issue?
Run the containerized Dataverse setup via Docker Compose using a freshly pulled unstable Dataverse image. You will notice that the configbaker bootstrapping container times out before it finishes configuring the Dataverse application.
Note after 08/10/23 containerization meeting:
It seems that an image was wrongly pushed to Docker Hub instead of the GitHub Container Registry. The problematic push appears to be: https://github.com/IQSS/dataverse/actions/runs/5813835717/job/15762398940
When does this issue occur?
On localhost envs and GitHub actions using the containers through the usual Docker Compose setup.
What happens?
Timeout during Dataverse container bootstrapping.
To whom does it occur (all users, curators, superusers)?
Developers and automation processes (GitHub actions).
Screenshots:
Timeout using a localhost Docker Compose setup:
Timeout on GitHub actions:
Run: https://github.com/pdurbin/dataverse-api-test-runner/actions/runs/5820120434/job/15779838142