Clean up the existing images in the registry #79
I opened this issue to think about a better solution going forward. If @sylus and @zachomedia like the idea, we could keep core platform images in the core registry, and user images in a semi-disposable registry.
First, can we audit our primary ACR and remove any remaining developmental containers? We can discuss which ones we need to drop before actually doing it ^_^
Yep, absolutely will :-) I'll sort them into a list and send it off to you and Zach prior to removal. Building on my earlier comment, do you know if it's possible to set permissions so that users can only push to a prefix of the container registry? For the future, I was thinking people other than daaas admins could push to
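For the prefix-permission question: ACR's repository-scoped tokens (scope maps, Premium SKU) might be one way to do this. A sketch with placeholder names (`myregistry`, `users/alice`); by default it only prints the commands instead of running them:

```shell
# Sketch: repository-scoped push access in ACR (Premium SKU required).
# "myregistry" and "users/alice" are placeholders, not our real names.
run() {  # print the command unless RUN=1 and the az CLI is available
  if [ "${RUN:-0}" = 1 ] && command -v az >/dev/null 2>&1; then "$@"; else echo "+ $*"; fi
}

# A scope map granting read/write on one repository path only.
run az acr scope-map create --name alice-push --registry myregistry \
  --repository users/alice content/write content/read

# A token bound to that scope map; its credentials can `docker login`,
# but can only push and pull under users/alice.
run az acr token create --name alice-token --registry myregistry \
  --scope-map alice-push
```

Whether scope maps can match a whole prefix rather than an exact repository name would need checking against the current ACR docs.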
It turns out I don't have deletion permissions anyway, so I'll leave this info with you @sylus (CC @zachomedia). My code is attached.
There is also an az purge command, which I figure is how we'll delete the images, but I don't have permission to use it. I figure we can just pipe the
FYI: we only have 40 GB of space left 😨
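For reference, the purge command runs as an on-demand ACR task via `az acr run`; something like the following might be the shape of it. The registry name and filter regex are placeholders, `--dry-run` only reports what would be deleted, and the sketch prints the command unless `RUN=1`:

```shell
# Sketch: ACR purge task, dry-run only. "myregistry" and the filter
# regex are placeholders.
run() {  # print the command unless RUN=1 and the az CLI is available
  if [ "${RUN:-0}" = 1 ] && command -v az >/dev/null 2>&1; then "$@"; else echo "+ $*"; fi
}

# Report (without deleting) manifests untagged or older than 30 days.
run az acr run --registry myregistry \
  --cmd "acr purge --filter '.*:.*' --ago 30d --untagged --dry-run" /dev/null
```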
I removed most of the ones from the to-remove list. I think I might eventually want to move the foundational images to their own ACR, but at least some things are cleaned up now. We also need to clean older images from the CI builds.
Yeah, separating them makes sense to me. Did you remove the ones in the third JSON file too? I saw some crazy stuff, like 100+ versions of Orchard, as well as all our CI builds.
Deleted, including dangling manifests:
To be continued.
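For the record, the dangling-manifest cleanup can be scripted along these lines (a sketch with placeholder registry/repo names, using the `tags[0]==null` JMESPath filter for untagged manifests; it prints commands unless `RUN=1`):

```shell
# Sketch: delete untagged ("dangling") manifests from one repository.
# "myregistry" and "myrepo" are placeholders.
run() {  # print the command unless RUN=1 and the az CLI is available
  if [ "${RUN:-0}" = 1 ] && command -v az >/dev/null 2>&1; then "$@"; else echo "+ $*"; fi
}

# Digests of manifests that no tag points at.
run az acr repository show-manifests --name myregistry --repository myrepo \
  --query "[?tags[0]==null].digest" -o tsv |
while read -r digest; do
  if [ "${RUN:-0}" = 1 ]; then
    az acr repository delete --name myregistry --image "myrepo@$digest" --yes
  fi
done
```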
Er, some of those minimal-notebook-gpu ones removed were recent (August 6th is not June 8th!), but there are no minimal-notebook-gpu images in use right now. How should we handle images that are more than 5 versions behind and older than a month, but still in use? Particularly when there are many later, but still outdated, versions that are not in use?
There are 24 running instances of
While deleting the image wouldn't in and of itself delete a container, I don't know whether there's any point (e.g. pod rescheduling) at which the image is expected to still exist, or whether it's cached somewhere.
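A quick way to enumerate which images are actually referenced by running pods (plain kubectl, nothing cluster-specific assumed), so the purge list can be cross-checked against what's in use:

```shell
# List every image referenced by a pod, with a usage count, so the
# deletion list can be checked against what's actually running.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods --all-namespaces -o jsonpath="{..image}" |
    tr -s '[:space:]' '\n' | sort | uniq -c | sort -rn
else
  echo "kubectl not available; command shown for reference only"
fi
```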
Deleted:
Skipping daaas-constrstarts-geo for now; there's something weird going on with sequential SHAs. TBC
Per today's standup, I'll keep images that are in use but delete later unused and outdated versions. I'm placing a delete lock on the images in use, and will purge images older than the 5th most recent version or a month, whichever is later.
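The retention rule above can be sketched as a small filter (illustrative, not the actual cleanup script): sort tags newest-first, spare the five most recent, and of the remainder flag only those past the one-month cutoff.

```shell
# Sketch of the retention rule. Input: one "tag<TAB>ISO-8601-date" line
# per version (format is illustrative). Output: purge candidates.
KEEP=5
CUTOFF="2020-07-10"   # ~one month before the cleanup; ISO dates sort as strings
TAB="$(printf '\t')"

purge_candidates() {
  sort -t "$TAB" -k2,2r |            # newest first
  tail -n +"$((KEEP + 1))" |         # spare the KEEP most recent versions
  awk -F "\t" -v cutoff="$CUTOFF" '$2 < cutoff { print $1 }'
}
```

Tags whose images are still running could then be dropped from the candidate list (e.g. `grep -vxF -f in-use.txt`), and the in-use ones protected with `az acr repository update ... --delete-enabled false`, which is the delete lock mentioned above.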
A good 100 GB was recovered just from purging
Edit:
We're down to 203 GB 😀 This takes care of clean-up based on Blair's scripts. There are still quite a lot of images that aren't tracked because they don't have more than five versions to begin with. I could go through those, but that would mean deleting entire container repos, so I'm not sure what the best way to sort them out would be.
Great stuff @frazs! I think what you've done so far is sufficient; we can leave the edge cases for another time. We'll need to look into how we automate this, and answer some of your questions around dealing with old images that are still in use. |
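On automating this: one option is a scheduled ACR purge task. A sketch only; the registry name, filter, and cron schedule are placeholders, the real filter would need a `--dry-run` pass first, and the block prints the command unless `RUN=1`:

```shell
# Sketch: nightly scheduled purge of untagged manifests and images older
# than 30 days. "myregistry" and the cron schedule are placeholders.
run() {  # print the command unless RUN=1 and the az CLI is available
  if [ "${RUN:-0}" = 1 ] && command -v az >/dev/null 2>&1; then "$@"; else echo "+ $*"; fi
}

run az acr task create --registry myregistry --name nightly-purge \
  --context /dev/null --schedule "0 2 * * *" \
  --cmd "acr purge --filter '.*:.*' --ago 30d --untagged"
```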
CC @sylus
I noticed that we're about halfway through the storage in the container registry. A good chunk of that is probably my fault haha, from experimental images that were very large.
It might be worthwhile to do a cleanup before too long. Some of those images probably operate as root, too.