Archive home directory using multi-stage build #781
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@ Coverage Diff @@
##             main     #781   +/-   ##
=======================================
  Coverage   68.21%   68.21%
=======================================
  Files          45       45
  Lines        4147     4147
=======================================
  Hits         2829     2829
  Misses       1318     1318
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
@unkcpz @superstar54 This is hopefully the final iteration of the docker build. 😅 For easier review, I've split chunks of this PR into 3 extra pull requests that should be reviewed and merged first: #782, #783, #784. The startup takes slightly less than 10s; I think more speedup is possible.

The size is currently 5.8Gb, but I don't understand where the increase comes from, even though I tried very hard to get rid of it. Let's merge the three PRs first, I'll continue to investigate.
Thanks @danielhollas! The implementation looks super clear; I have one minor request.

@superstar54, you showed interest in learning more Docker stuff. I'd say this is a nice PR to read if you have time.
```dockerfile
RUN --mount=from=uv,source=/uv,target=/bin/uv \
    uv pip install --strict --system --cache-dir=${UV_CACHE_DIR} .
```
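For context, the `uv` in `--mount=from=uv` refers to a named build stage. A minimal sketch of how such a stage could be declared; the image path is an assumption based on uv's official Docker images, not necessarily what this PR uses:

```dockerfile
# Hypothetical: pull in the uv binary as a named stage, so later stages can
# mount it during RUN without it ever landing in a layer of the final image.
FROM ghcr.io/astral-sh/uv:latest AS uv
```

Mounting the binary this way keeps uv itself out of the final image while still making it available at build time.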
This also happened in stage 4, but I understand it is unavoidable.
Yes. It is unfortunate that we need to install all the dependencies just to install the QE codes and pseudopotentials. But uv is so fast, and I am reusing its cache, so in terms of speed it doesn't matter much.
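One common way to get this kind of cache reuse is a BuildKit cache mount; a sketch under the assumption that `UV_CACHE_DIR` is set as in the snippet above:

```dockerfile
# Sketch: keep uv's cache in a BuildKit cache mount, so repeated builds and
# later stages hit the cache instead of re-downloading wheels.
RUN --mount=from=uv,source=/uv,target=/bin/uv \
    --mount=type=cache,target=${UV_CACHE_DIR} \
    uv pip install --strict --system --cache-dir=${UV_CACHE_DIR} .
```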
Dockerfile (outdated)
```dockerfile
# STAGE 3:
# - Prepare AiiDA profile and localhost computer
# - Install QE codes and pseudopotentials
# - Archive home folder
```
Wouldn't it be better to merge stage 2 and stage 3? Since stage 2 says "to run aiidalab_qe CLI commands", it seems clearer to run them directly in the same stage. I believe the final size will be the same.
You are right. I did this mainly as a logical separation, but it is not needed and might be confusing. I'll merge them.
Actually, it is beneficial to leave this as a separate stage, because the uv cache can then be used immediately in the final stage, without waiting for the rest of the home_stage build (which is the longest part of the build). I've rearranged things a bit for better cache utilization. Now, when you modify the Dockerfile and rebuild, it only takes 10s!
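To reproduce that kind of measurement locally (the command and tag are illustrative, not from the PR):

```sh
# Touch the Dockerfile, then time an incremental rebuild; with warm BuildKit
# caches most steps should be cache hits.
time docker build -t aiidalab/qe:test .
```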
Hmm, after rearranging things a little bit, the image size dropped from 5.8Gb to 5.1Gb (compared to 4.1Gb on main), although I have no idea why. 540Mb comes from

@unkcpz perhaps you can deploy again to the demo server for testing?
If you click on a specific tag on Docker Hub, you'll see how much size each layer generated: https://hub.docker.com/r/aiidalab/qe/tags

I also found this fancy tool to inspect image layer sizes in detail: https://github.com/wagoodman/dive
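For reference, two ways to inspect layer sizes locally (the tag below is illustrative):

```sh
# Built-in: show how much size each layer adds
docker history aiidalab/qe:edge

# dive: interactively browse each layer's contents and wasted space
dive aiidalab/qe:edge
```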
Forgot to mention: the image was redeployed to Azure and works as I expected. I think we can merge this, and from next week I'll be working on the hyperqueue integration.
This reverts commit 21a1917.
Hi @danielhollas, I guess you missed one comment above?
Hi. I am aware of it, although I should have been more explicit. Does publishing to Dockerhub bring any benefits? Given what you told me about this image being most important for the demo server deployment, I think publishing only to ghcr.io is fine. Publishing to Dockerhub would complicate the GitHub Actions workflow, so unless there is a clear benefit I'd advise against it.
One thing that is a bit annoying is that I cannot see which tags are available in the ghcr.io registry, since we have a lot of images tagged pr-xx or by digest directly.
Yeah, the ghcr.io interface is not great. But with the significantly simpler workflow, you don't really need to search for tags, do you? If you look at the workflow, we don't push by digest or commit SHA anymore: only pr-xxx on PRs, edge on main, and the version when a new tag is pushed.
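To make the scheme concrete, pulls would look roughly like this (the ghcr.io path and the version tag are assumptions based on this thread, not verified against the workflow):

```sh
docker pull ghcr.io/aiidalab/qe:edge      # latest commit on main
docker pull ghcr.io/aiidalab/qe:pr-781    # build for a specific PR
docker pull ghcr.io/aiidalab/qe:v1.0.0    # hypothetical release tag
```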
Makes sense. I think we want a highly maintainable repo that involves as few outside tools as possible to fit that goal. Would you then mind adding a paragraph to the README explaining which tags can be used and which branch each comes from? Sort of like the "Supported tags" section of
Supersedes #778, hopefully the last iteration!
The main goal here is to reduce the complexity of the status quo and of #740.
The strategy of archiving the home directory and extracting it at startup allows for a lot of simplification in the Dockerfile, since everything can be prepared directly in the home folder without intermediary steps. This lets us get rid of the current startup scripts (70_, 71_); see the sketch below.
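A minimal sketch of the idea; the stage names, paths, and setup steps are illustrative, not the PR's actual Dockerfile:

```dockerfile
FROM base AS home_stage
# ... prepare AiiDA profile, install QE codes and pseudopotentials
#     directly in the home folder ...
RUN tar -cf /opt/home.tar -C /home/jovyan .

FROM base AS final
# Only the archive is carried over; a startup script extracts it on first run.
COPY --from=home_stage /opt/home.tar /opt/home.tar
```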
All startup scripts from full-stack are preserved and reused, which minimizes duplication, resolves the SSH key issue, and should be more maintainable. The only new startup script is 00_untar_home.sh, which is basically the same here as in #740.
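For readers who don't want to dig through #740, a minimal sketch of what such a script does; the marker file and paths are assumptions:

```sh
#!/bin/bash
# Sketch of 00_untar_home.sh: on first start, populate the home volume from
# the archive baked into the image, then leave a marker so we only do it once.
set -eu
if [[ ! -e "${HOME}/.home_extracted" ]]; then
    tar -xf /opt/home.tar -C "${HOME}"
    touch "${HOME}/.home_extracted"
fi
```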
I've done some quick benchmarking: starting the container takes around 12s on my machine. The image takes around 5.8Gb. We could trade around 300Mb of image size for an extra 3s of startup time if we compressed the home.tar archive. (My timings seem roughly consistent with those observed in #740.)
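The compression trade-off would amount to roughly this (illustrative commands):

```sh
# At build time: gzip the archive (~300Mb smaller image) ...
tar -czf /opt/home.tar.gz -C /home/jovyan .
# ... at startup: pay ~3s extra to decompress.
tar -xzf /opt/home.tar.gz -C "${HOME}"
```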
Reducing the image size will be done in a subsequent PR.