Digest from `push --digestfile` can not be used to look up the local image #3866
A better workaround for now is to use skopeo. This way I am not pulling the whole image back, so there is less network load, but it is still a bit less secure than using the digests my builder computed. Not 100% sure if this is a bug or me misunderstanding some critical buildah concept. Could be some sugar service is helping me in ways I don't understand :)
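For reference, a minimal sketch of that skopeo workaround, assuming a pushed image at the placeholder reference `registry.example.com/myimage:latest` (newer skopeo versions support `--format`; otherwise parse the JSON output):

```sh
# Ask the registry for the manifest digest without pulling the image back.
skopeo inspect --format '{{.Digest}}' docker://registry.example.com/myimage:latest
```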
@btrepp I think the digest generation strategy of buildah is quite different from Docker's: the digest on your local machine is the uncompressed digest, while the one on the remote changes because the image is compressed on push, and there is no prior way of knowing it. There is a similar issue discussed here: #2034 (comment). I'll tag @mtrmac @nalind here since they can comment better on this and on the design part.
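To illustrate the uncompressed-versus-compressed point with a hypothetical layer tarball (not buildah's actual on-disk layout), hashing the same content before and after gzip yields two unrelated digests:

```sh
sha256sum layer.tar          # the kind of digest local storage knows about
gzip -k layer.tar
sha256sum layer.tar.gz       # the kind of digest the registry ends up with
```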
Thanks for the reply. I can appreciate there may be some intricacies in the different systems. What is surprising is that docker is consistent here, e.g. the locally computed digest is the same as what gets sent over when using docker build. Strangely, with the above, buildah pull is smart enough to know remote and local are the same and doesn't pull it, but I am not sure why.

If there is a difference, e.g. compression on the registry but not locally, would it be difficult to compress locally too (perhaps behind an extra flag)? If the docker CLI itself is 'stable' on the digest, it must be compressing locally as well. This would be great from a CI perspective: we can build images using buildah (which is fantastic, and faster than docker), the CI server can then report what digest it made, and anything it updates can read the same digest from the registry. Thus we know it's the exact same container we intended to get. My workaround above technically has some race conditions, as it's getting the 'latest' digest rather than the one just built.

I switched to buildah as it was stable on digests between multiple runs (when supplying the timestamp flag). This is great! The missing link is being stable/consistent on push.
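The timestamp flag referred to above, roughly (the value 0 is an arbitrary epoch-seconds choice, and the image name is a placeholder):

```sh
# Pin all timestamps in the build so repeated builds of identical inputs
# produce the same (local, uncompressed) image digest.
buildah bud --timestamp 0 -t myimage .
```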
https://stackoverflow.com/questions/67062271/are-docker-image-digests-really-secure This may be the divergence: perhaps docker doesn't have a digest until a push happens, while buildah is at least computing something. The question is whether it would be possible to get that digest back in buildah push.
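For what it's worth, buildah push can already write the digest it computes during a push via `--digestfile` (paths and names here are placeholders); per this issue, though, that digest identifies the pushed, compressed image and cannot be resolved against local storage:

```sh
buildah push --digestfile /tmp/digest.txt myimage registry.example.com/myimage:latest
cat /tmp/digest.txt   # the manifest digest as stored in the registry
```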
This might work right now, but it’s also fundamentally incorrect. The digest depends on the exact compressed representation (which is only created on push, hence unavailable before pushing) — but the compression implementation could well create a different representation on different runs, changing the digest. The only way to determine the digest is to create the compressed version and see what the digest was. Usually, that’s a push, although some other workflow might be possible (e.g. push to a localhost-running registry, or to …). In particular, see …
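A sketch of the localhost-registry workflow mentioned above (the container name, port, and image names are placeholders):

```sh
# Run a throwaway local registry, push to it, and let the push determine the digest.
podman run -d -p 5000:5000 --name tmpreg docker.io/library/registry:2
buildah push --tls-verify=false --digestfile /tmp/digest.txt myimage localhost:5000/myimage:latest
cat /tmp/digest.txt
```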
I’m afraid I have no idea what this means; the “expected/actual results” sections don’t contain any specific data.
@mtrmac I think @btrepp meant that the digest given by `--digestfile` cannot be used to look up the image locally. @btrepp Is this what you meant by "is not very helpful in the instance"?
I don’t know the details, but I vaguely remember someone saying that recent versions of Docker do store the compressed representations locally (in some cases?). So that’s possible.
It deduplicates using the config blob digest, if any.
It’s not something that just exists right now; locally-stored images only exist as extracted filesystems, and there isn’t a natural place to store a compressed representation. You might be able to do that using …
As long as the images are being compressed, there is technically no guarantee that the compression will produce consistent results over time. (It’s much more likely that the uncompressed representation, including the config blob digest mentioned above when talking about deduplication, is going to be consistent, although I can’t say whether it is guaranteed either, without more research.)
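One way to materialize a compressed representation locally today is to copy an image out of containers-storage into an OCI layout (a sketch; `localhost/myimage` and the layout path are placeholders):

```sh
# skopeo compresses the layers while writing the layout, so the manifest in
# /tmp/mylayout references compressed-blob digests rather than local, uncompressed ones.
skopeo copy containers-storage:localhost/myimage:latest oci:/tmp/mylayout:latest
```

Note that, per the caveat above, nothing guarantees this compression run produces the same bytes (and hence the same digest) as a later push would.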
Oh, that might be true.
@vrothberg ^^^ it might make sense for libimage to record the digest locally after a successful push.
Sounds good to me. @flouthoc, are you interested in opening a PR in c/common? |
@vrothberg I'll take it, thanks.
@flouthoc Status? |
From what I gather, if buildah is used to push the images, it should be possible for the digest to be the same in the registry. |
Nothing about the API of the compression implementations promises them to be consistent from one execution to another. It’s nice if they have that property, but it’s just not a part of the contract right now, and so it’s not something we can build a reliable publishing pipeline around. (We could constrain ourselves to a specific compression implementation that does make that promise, but that would put us at the mercy of that implementation being maintained, and prevent us from adopting any possible future compression formats that don’t have that property. Efficient compression implementations are very non-obvious projects requiring specific expertise, and we did replace a compression implementation at least once in the past, so maintaining flexibility, at least to the extent of being able to use the standard library’s implementations, is quite valuable.)

Besides, the compression is frequently the most expensive part of the push, so it’s generally desirable to engineer a workflow that doesn’t require compression to happen twice. In a Buildah system, that means compressing during push (or, frequently, not compressing because the registry is found to already contain a compressed version), and only determining the digest based on the outcome of the push.

Sure, #3866 (comment) would make that workflow easier, and it’s something we should do.
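The single-compression workflow described above, sketched as a CI step (the registry, image, and deployment names are hypothetical):

```sh
# Compress exactly once, during the push, and take the digest from the push outcome.
buildah push --digestfile /tmp/digest.txt myapp registry.example.com/myapp:latest
DIGEST=$(cat /tmp/digest.txt)

# Pin downstream consumers to the exact artifact that was pushed.
kubectl set image deployment/myapp myapp="registry.example.com/myapp@${DIGEST}"
```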
@flouthoc, ping |
…push

Upon successful push, libimage will now commit the remote digest to local storage as a tag, for example `repo:remote-digest`. This is needed because when a push operation is performed, libimage does not acknowledge the digest successfully created on the remote repo; after this change, users will be able to perform image operations on the just-committed digest, as `repo:remote-digest` will point to the same image that was just pushed to the remote repo.

Closes: containers/buildah#3866

[NO NEW TESTS NEEDED]: Not sure if we have any tests for remote registry push in c/common

Signed-off-by: Aditya R <arajan@redhat.com>
Edit: No, we need a new API in c/storage; discussion is ongoing there.
This needs an API in c/storage.
I think I just stumbled onto this problem when doing a push. I am wondering why the digest is not part of a well-defined specification for images, because I thought it was. We do not have this problem with other tools.
@mtrmac PTAL |
This already links to various PRs and issues that are prerequisite to making this work. |
Buildah's images seem to use a digest strategy that doesn't align with Docker's. This makes it difficult to get a digest that can be used in other image references.

If you try to get the digest of a locally built buildah image, it changes when pushed; the only way to get the 'registry' digest seems to be to push, delete your image, and then pull it back. This seems fairly wasteful.

On my CI servers, I would like to get the image digest from the buildah build, so that it can be used to update further container configurations, ensuring they point to the exact image just built.

It does appear that registries may change this digest, which is fine, but how can I make buildah inspect report the digest with the full name, so that it is valid in other tools? Currently the digest is completely unique to your own machine, so you have to 'leave' your machine to get the actual digest that will work in other buildah/podman/docker/kubernetes instances.
Steps to reproduce the issue:

1. `buildah images --digests`
2. `docker images --digests` (these are completely different)
3. `buildah images --digests` -> unchanged.

The workaround at the moment is `buildah rmi registryxyz/mycontainer`, and then pulling it again, which seems a bit awkward; see the round-trip spelled out below. Also the `--digestfile` argument prints out the 'unusable' digest, so it is not very helpful in this instance.
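The push/delete/pull round-trip, spelled out (the registry and image names are placeholders):

```sh
buildah push registryxyz/mycontainer:latest
buildah rmi registryxyz/mycontainer:latest
buildah pull registryxyz/mycontainer:latest
buildah images --digests registryxyz/mycontainer
```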
Describe the results you received:

Buildah uses its own image digests.

Describe the results you expected:

The digest in buildah's local store and on the remote registry should be the same. This expectation comes from the fact that this is the case when building with docker.
Output of `rpm -q buildah` or `apt list buildah`:

Output of `buildah version`:

Output of `podman version` if reporting a `podman build` issue:

Output of `cat /etc/*release`:

Output of `uname -a`:

Output of `cat /etc/containers/storage.conf`: