Update Root and Targets #410
Conversation
I compared the keys in this ceremony to those in the prior ceremony with diff to ensure they have not changed as we're not adding new keyholders:
% diff -ur ceremony/2022-09-27/keys ceremony/2022-07-12/keys
% echo $?
0
We're also not adding new targets to ceremony/2022-09-27/repository/targets in this PR:
% diff -ur ceremony/2022-09-27/repository/targets ceremony/2022-07-12/repository/targets
% echo $?
0
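A check like these diffs could also run in CI. Here's a minimal Go sketch of the same comparison, hashing both trees; the paths and helper names are illustrative, not part of the root-signing tooling:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// hashTree returns a map of relative file path -> SHA-256 of contents.
func hashTree(root string) (map[string][32]byte, error) {
	sums := make(map[string][32]byte)
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		sums[rel] = sha256.Sum256(data)
		return nil
	})
	return sums, err
}

func main() {
	prev, err := hashTree("ceremony/2022-07-12/keys")
	if err != nil {
		panic(err)
	}
	cur, err := hashTree("ceremony/2022-09-27/keys")
	if err != nil {
		panic(err)
	}
	if len(prev) != len(cur) {
		fmt.Println("key sets differ in size")
		os.Exit(1)
	}
	for rel, sum := range prev {
		// A renamed or missing file shows up as a hash mismatch here.
		if cur[rel] != sum {
			fmt.Println("changed or missing:", rel)
			os.Exit(1)
		}
	}
	fmt.Println("keys unchanged")
}
```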
The diff of the JSON files in ceremony/2022-09-27/repository is interesting. There are some canonicalisation differences that we were expecting. Less expected, to me at least, is that ceremony/2022-09-27/revocation.json has changed to increase expires and add the length of revocation.list. I'm surprised to see this change in the repository/ directory and not the staged/ directory; is that right? We should also increment the version in revocation.json to account for the changes.
The staged root.json enables consistent snapshots, but the staged files are not version prefixed. Is that expected? Are they version prefixed once they move from staged/ to repository/? Apologies for not having a better grasp of the mechanics of root-signing/go-tuf yet.
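For reference, with consistent snapshots the TUF spec has clients fetch committed metadata as VERSION.ROLE.json. A toy Go illustration of the naming only; this is not go-tuf's actual commit code:

```go
package main

import "fmt"

// consistentName shows how consistent-snapshot metadata filenames look per
// the TUF spec once committed, e.g. "5.root.json". Illustration only.
func consistentName(role string, version int64) string {
	return fmt.Sprintf("%d.%s.json", version, role)
}

func main() {
	fmt.Println(consistentName("root", 5))    // 5.root.json
	fmt.Println(consistentName("targets", 5)) // 5.targets.json
}
```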
There's a rekor.json staged that doesn't list any targets. I don't think that role and its metadata should be introduced in v5? The delegations field of targets.json does not list it.
Finally, that null in the roles field caught my attention. null is valid JSON, but it isn't clear from the TUF spec how implementations should handle an optional field (delegations.roles) being null. Not something to block this PR, but worth keeping an eye on as it rolls out and clients start to use it.
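To illustrate the client-side concern, a defensive parse might look like this; the struct shapes are simplified stand-ins, not go-tuf's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified shapes for illustration; not go-tuf's actual types.
type DelegatedRole struct {
	Name string `json:"name"`
}

type Delegations struct {
	Roles []DelegatedRole `json:"roles"`
}

type Targets struct {
	Delegations *Delegations `json:"delegations,omitempty"`
}

func main() {
	// "roles": null is valid JSON and unmarshals to a nil slice in Go,
	// so ranging over it is safe -- but other languages and clients may
	// treat null and [] differently, which is the concern above.
	raw := []byte(`{"delegations": {"roles": null}}`)
	var t Targets
	if err := json.Unmarshal(raw, &t); err != nil {
		panic(err)
	}
	if t.Delegations == nil || t.Delegations.Roles == nil {
		fmt.Println("no delegated roles")
		return
	}
	for _, r := range t.Delegations.Roles {
		fmt.Println("delegated role:", r.Name)
	}
}
```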
% ./verify repository --repository ceremony/2022-09-27 --staged
STAGED METADATA
Outputting metadata verification at ceremony/2022-09-27...
Verifying root.json...
Contains 0/3 valid signatures from the current staged metadata
Contains 0/3 valid signatures from the previous root
root version 5, expires 2023/03/27
Verifying targets.json...
Contains 0/3 valid signatures from the current staged metadata
targets version 5, expires 2023/03/27
Verifying rekor.json...
Success! Signatures valid and threshold achieved
rekor version 4, expires 2023/03/27
Impressive review, @joshuagl!
That was discussed here: https://github.com/sigstore/root-signing/pull/407/files#r981715741
Thanks! That's the detail I was missing – this is an old role, not a new one. It's empty because it's no longer delegating, but we have to keep listing the role in snapshot metadata to prevent rollback attacks (per section 5.5.5 of the spec).
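For anyone following along, the 5.5.5 check amounts to something like this sketch (types simplified, not go-tuf's implementation): every metadata file listed in the trusted snapshot must stay listed, with a non-decreasing version.

```go
package main

import "fmt"

// SnapshotMeta maps metadata filenames to versions, as in snapshot.json's
// "meta" field. Simplified for illustration.
type SnapshotMeta map[string]int64

// checkRollback sketches the spec 5.5.5 check: every metadata file listed
// in the trusted snapshot must still be listed in the incoming one, with a
// version that has not decreased.
func checkRollback(trusted, incoming SnapshotMeta) error {
	for name, oldVer := range trusted {
		newVer, ok := incoming[name]
		if !ok {
			return fmt.Errorf("%s removed from snapshot (possible rollback)", name)
		}
		if newVer < oldVer {
			return fmt.Errorf("%s version decreased: %d -> %d", name, oldVer, newVer)
		}
	}
	return nil
}

func main() {
	trusted := SnapshotMeta{"targets.json": 4, "rekor.json": 4}
	next := SnapshotMeta{"targets.json": 5, "rekor.json": 4}
	fmt.Println(checkRollback(trusted, next)) // <nil>: no rollback
}
```

Dropping rekor.json from the snapshot entirely would trip the first branch for existing clients, which is why the now-empty role stays listed.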
Some partial comments:
Correct! Only occurs on committing the metadata files.
So this change occurred between the root/targets signing in July and now: #327, because it broke Rust's tough client and we re-signed the delegation. What's unfortunate is that because it was a JSON marshaling change, it didn't increment the version!! I think if the payload differs, even due to JSON serialization changes, the version should change. WDYT? EDIT: it should have changed; filed theupdateframework/go-tuf#239.
This is a byproduct of "removing" the target in go-tuf: the target didn't exist in the current config, but go-tuf has no way of knowing that this delegation is not used. I will also check on this, and whether it would end up being committed in the end.
Also will check!
Agreed, payload changing should absolutely cause a version increase.
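A CI guard for that invariant could compare the serialized signed payloads across versions. A hedged Go sketch, where the envelope types are simplified stand-ins for TUF metadata, not go-tuf's types:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// signedEnvelope captures just enough of a TUF metadata file for this check.
type signedEnvelope struct {
	Signed json.RawMessage `json:"signed"`
}

// versionOnly extracts the version field from the signed portion.
type versionOnly struct {
	Version int64 `json:"version"`
}

// requireBumpOnChange enforces the invariant discussed above: if the raw
// "signed" bytes differ at all (even only from a JSON marshaling change),
// the version must have been incremented.
func requireBumpOnChange(prev, cur []byte) error {
	var p, c signedEnvelope
	if err := json.Unmarshal(prev, &p); err != nil {
		return err
	}
	if err := json.Unmarshal(cur, &c); err != nil {
		return err
	}
	if bytes.Equal(p.Signed, c.Signed) {
		return nil // identical payloads need no bump
	}
	var pv, cv versionOnly
	if err := json.Unmarshal(p.Signed, &pv); err != nil {
		return err
	}
	if err := json.Unmarshal(c.Signed, &cv); err != nil {
		return err
	}
	if cv.Version <= pv.Version {
		return fmt.Errorf("payload changed but version did not increase (%d -> %d)",
			pv.Version, cv.Version)
	}
	return nil
}

func main() {
	prev := []byte(`{"signed":{"version":4,"expires":"2022-12-18T00:00:00Z"}}`)
	cur := []byte(`{"signed":{"version":4,"expires":"2023-03-27T00:00:00Z"}}`)
	fmt.Println(requireBumpOnChange(prev, cur)) // flags the missing bump
}
```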
Few updates:
Changing
😌 awesome, glad we caught this. We should (after this ceremony) figure out how to catch issues like this with automation. We already have the tuf_client_tests workflow, but that only runs on changes to repository/**, not to ceremony/**. The incomplete metadata in the ceremony makes testing less straightforward, but it should be possible to have automation complete the remaining steps of the ceremony in order to test the generated metadata?
Thanks for the quick fix!
I'll try to keep an eye out for PRs later this evening.
I completely agree, and have really struggled to figure out how to do this. With go-tuf, I can verify just the signatures on staged metadata by creating my own DB outside of a valid TUF client. If I can pick apart a module from python-tuf similarly, that would be great. In order to automate the complete steps, we'd need access to the GCP & HSM signers. The way I see it, one thing we CAN do is use some test signers and run a whole ceremony like this:
It's a little convoluted, but it's the only way we can test workflow operations as a "dry-run" using similar production metadata with test signers.
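To make the test-signer idea concrete, here's a minimal Go sketch using an in-memory ECDSA key in place of the GCP/HSM signers; none of this is the actual ceremony tooling's API:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// A dry-run "test signer": an in-memory ECDSA key standing in for the
// GCP/HSM-backed keys, so a ceremony can be exercised end to end in CI.
func main() {
	signer, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Stand-in for the canonical bytes of a staged metadata "signed" section.
	payload := []byte(`{"_type":"targets","version":5}`)
	digest := sha256.Sum256(payload)

	sig, err := ecdsa.SignASN1(rand.Reader, signer, digest[:])
	if err != nil {
		panic(err)
	}

	// A CI job could attach signatures like this to staged metadata, then
	// run the repository verification step against the result.
	ok := ecdsa.VerifyASN1(&signer.PublicKey, digest[:], sig)
	fmt.Println("test signature verifies:", ok)
}
```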
"sha512" | ||
], | ||
"keyval": { | ||
"public": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEEXsz3SZXFb8jMV42j6pJlyjbjR8K\nN3Bwocexq6LMIb5qsWKOQvLN16NUefLc4HswOoumRsVVaajSpQS6fobkRw==\n-----END PUBLIC KEY-----\n" |
Sorry to rehash this for the millionth time, but: at what version did theupdateframework/go-tuf#357 make it into Cosign? Seems like sigstore/cosign#2232, in v1.12.0 (released 14 days ago, as of the time of this comment).
What happens to anyone on a version below v1.12.0 when this change goes out? Won't they choke?
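For context on why older clients choke: the new keyval.public is a PEM/PKIX-encoded key rather than the raw encoding older parsers expected (the "hex" detail below is an assumption on my part; the exact old-format handling lives in go-tuf). A standalone Go illustration, not cosign's actual code path:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
)

// parseNewFormat handles the new-style keyval.public: a PEM block wrapping
// a PKIX-encoded ECDSA public key.
func parseNewFormat(public string) (*ecdsa.PublicKey, error) {
	block, _ := pem.Decode([]byte(public))
	if block == nil {
		return nil, fmt.Errorf("no PEM block found")
	}
	key, err := x509.ParsePKIXPublicKey(block.Bytes)
	if err != nil {
		return nil, err
	}
	ec, ok := key.(*ecdsa.PublicKey)
	if !ok {
		return nil, fmt.Errorf("not an ECDSA key")
	}
	return ec, nil
}

func main() {
	pemKey := "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEEXsz3SZXFb8jMV42j6pJlyjbjR8K\nN3Bwocexq6LMIb5qsWKOQvLN16NUefLc4HswOoumRsVVaajSpQS6fobkRw==\n-----END PUBLIC KEY-----\n"

	// New-style client: succeeds.
	if _, err := parseNewFormat(pemKey); err != nil {
		panic(err)
	}
	fmt.Println("PEM/PKIX parse: ok")

	// Old-style client expecting a hex string: fails on the same value.
	if _, err := hex.DecodeString(pemKey); err != nil {
		fmt.Println("hex parse fails:", err)
	}
}
```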
My understanding was that we wouldn't produce PEM-encoded keys for a little while longer.
> What happens to anyone on a version below v1.12.0 when this change goes out? Won't they choke?
Yes, but we wanted to do this before GA, and we notified people of updates along with the security fixes.
I'd rather not push it out after GA: without it, we still aren't TUF spec compatible.
I could backport sigstore versions, but we don't have release version branches in sigstore right now, and I'm not sure why people wouldn't update.
I can make an issue in kyverno about backporting their cosign version update explicitly, but are any other clients shipping versioned releases that aren't keeping track of their cosign or sigstore libs?
> I'd rather not push it out after GA: without it, we still aren't TUF spec compatible.
I totally agree.
> I could backport sigstore versions, but we don't have release version branches in sigstore right now
I don't think that fixes it. I'm less concerned about people who are unwilling to update to a new major version (though that sometimes happens for good reasons) and more concerned about people who just...haven't updated it.
> I'm not sure why people wouldn't update
2 weeks is a pretty aggressive timeline IMO. I wonder whether we have any user agent data on this.
For instance, it looks like the Arch package is still at v1.11.1. Also, people pin their dependencies and don't update immediately: for instance, the v1.11.1 GH action, v1.11.0, etc. I personally have gone >2 weeks between apt-get updates
🤷
I'm not trying to put my foot down and stop this change; I just want to make sure it's explicitly called out and acknowledged by the keyholders and maybe the TSC before merge. Right now, the PR description doesn't acknowledge that it's a breaking change.
I'd also like to point out a few alternatives. For instance, we could maintain two parallel TUF repos and keep producing old-format keys for a little while longer while pointing new Cosign builds at the new repo by default. That'd buy us some time to switch.
> I wonder whether we have any user agent data on this.
Yeah! I have thought about this: creating mirrors for each major version so we can detect which versions of sigstore people are using to access TUF. From the TUF root perspective, though, since it's a public GCS bucket we can't get any monitoring on it directly.
> I'd also like to point out a few alternatives. For instance, we could maintain two parallel TUF repos and keep producing old-format keys for a little while longer while pointing new Cosign builds at the new repo by default. That'd buy us some time to switch.
Wouldn't this introduce issues with rollback attacks? I could try "splitting" repositories, but then merging them again would be an issue: clients on the old repository who want to migrate to the new one are going to have issues when they realize the versions of the snapshot did not necessarily increment.
FWIW, I thought I gave the keyholders and TSC notice of this :/
Regarding updates: short of manually scraping and updating, I can post an advisory notice in sigstore as a reference issue, anticipating the problem for people who haven't bumped or who have dependabot?
@shibumi: Wasn't sure of a better channel to reach you on, since I can't post issues here. Any reason why Arch is still distributing 1.11? I see archlinux/svntogit-community@b50eee2 bumped it, and the pkg is marked as out-of-date.
> From the TUF root perspective, since it's a public GCS bucket we can't get any monitoring directly on that though.
Oh, that's annoying. I wonder if we should move it over to a proper CDN... but that can be a "for later" thing.
> Wouldn't this introduce issues with rollback attacks? I could try "splitting" repositories, but then merging them again would be an issue: clients on the old repository who want to migrate to the new one are going to have issues when they realize the versions of the snapshot did not necessarily increment.
It kinda sounds like TAP-14, to be honest. I don't think there are rollback issues here; you should be able to cut over whenever. To be clear, I think it just postpones the problem, but we can keep doing it indefinitely.
Sounds like a big pain, though.
> FWIW, I thought I gave the keyholders and TSC notice of this :/
You totally might have! I'm just in neither category so I didn't see it 🙂 And I didn't see it on the PR either, just trying to err on the side of overcommunication.
@asraa sorry, totally missed your notification, because I just started a new job at Google :)
Santiago updated cosign over 10 days ago: 2022-09-29 15:08 UTC. So this is fixed, I would assume :) I don't know what happened; maybe I just forgot to trigger the release. Arch Linux still has no automated release pipeline :(
Closing for re-initialization
Initializes a new root and targets