Added trusted PyPI publishing #7616
Conversation
Very nice! I've been using trusted publishing in my other projects (although not with environments). I recommend we also set up trusted publishing to TestPyPI, so we know the whole pipeline is working before the big release day. I've already registered Pillow at https://test.pypi.org/project/pillow/ and have set up trusted publishers on that side.

One sticking point: you can't publish a file to [Test]PyPI that has already been published there, unless the version number changes. But we have a hardcoded value in

One way is to use https://github.com/pypa/setuptools_scm/ to generate a version number based on the "distance"/number of commits since the last tag. We'd also need
Co-authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
To make sure I'm following - this is only a problem with test.pypi.org, because we may want to submit the same version more than once, yes? You're not suggesting that anything about the normal version mechanism needs to be changed.
(Force-pushed from 50d2c74 to 2c62533.)
Yes and no :) No, both TestPyPI and PyPI behave the same in this regard. The restriction really is that filenames cannot be re-used. So we can re-use version numbers, but need to make the filenames unique in some other way. For example, we have a

And so yes, we may submit the same version more than once, but need to make it unique in other ways.
Correct. For normal releases, we'd still have the same version numbers. This would give us new version numbers for testing uploads of intra-release builds to TestPyPI. For an example, see https://github.com/hugovk/norwegianblue (it now uses Hatchling+hatch-vcs, but used to use setuptools+setuptools_scm; both work in the same way, and I think hatch-vcs uses setuptools_scm under the hood).

Although there would be one change: right now, after we release 10.2.0, we bump the hardcoded version to 10.3.0.dev0. With setuptools_scm, we'd instead get versions like 10.3.0.dev1, 10.3.0.dev200, etc., because it's based on the number of commits since the last tag. I think that's fine; it's an internal change. This does however bring to mind a potential blocker: we compile the version into the C layer and error if the two differ. With setuptools_scm, the devN is incremented for every commit, which could result in mismatches or needing to rebuild the C layer more often. This could be a blocker, and we may need another solution to give a unique filename for TestPyPI...
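As a rough illustration of the setuptools_scm approach discussed above (a sketch only, not Pillow's actual build configuration), enabling it in pyproject.toml looks roughly like:

```toml
[build-system]
# setuptools_scm must be available at build time
requires = ["setuptools>=64", "setuptools_scm>=8"]
build-backend = "setuptools.build_meta"

[tool.setuptools_scm]
# An empty table is enough to enable it. The exact version produced
# depends on the configured version scheme; with the defaults, commits
# after a tag get a distance-based dev number plus a commit-hash local
# segment, e.g. something like 10.2.1.dev3+g1a2b3c4.
```

With this in place the hardcoded version string would be removed and the version derived from Git tags instead.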
We could relax the condition to allow 10.3.0.dev1 or 10.3.0.dev200, by changing Line 84 in e5ee034 to:

```python
if not hasattr(core, "PILLOW_VERSION") or re.sub(
    "dev[0-9]+", "dev0", __version__
) != re.sub("dev[0-9]+", "dev0", core.PILLOW_VERSION):
```
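As a standalone sketch of the normalization trick above (a hypothetical helper, not Pillow code), any devN suffix is rewritten to dev0 on both sides before comparing, so builds that differ only in their dev number still pass the check:

```python
import re

def versions_match(py_version: str, c_version: str) -> bool:
    """Treat any devN suffix as dev0 before comparing, so the Python
    and C layers may differ only in their dev number."""
    def norm(v: str) -> str:
        return re.sub(r"dev[0-9]+", "dev0", v)
    return norm(py_version) == norm(c_version)

print(versions_match("10.3.0.dev1", "10.3.0.dev200"))  # True
print(versions_match("10.3.0.dev1", "10.4.0.dev1"))    # False
```

Non-dev versions are unaffected, since the substitution only matches a devN segment.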
Yep, that could work.

I've just registered for test.pypi.org with the same username. Would you mind adding me as a maintainer to Pillow?

Invited!

Thanks. I'm wondering if the use of setuptools-scm to generate unique version numbers each time is overkill? Once we have it working, isn't it unlikely to break in the future? Meaning that for whatever infrequent testing we need to do, we could just branch off from main and manually increment the version to dev1?

I'd prefer to automate testing where possible rather than do it manually. Doing it manually means we'd have to foresee that a change might cause problems in the release pipeline. And as said, we can consider that in a separate PR, along with other approaches.
.github/workflows/wheels.yml
Outdated
```yaml
- uses: actions/download-artifact@v4
- name: Merge artifacts
  run: |
    for artifact in macOS_x86_64 \
                    macOS_arm64 \
                    manylinux2014_musllinux \
                    manylinux_2_28 \
                    windows_x86 \
                    windows_x64 \
                    windows_arm64 \
                    sdist; do
      mv $artifact/* .
      rmdir $artifact
    done
```
Let's try something like scikit-build/cmake-python-distributions#438 / scientific-python/cookie#335 to avoid repeating the names:
Suggested change (the per-artifact loop above becomes):

```yaml
- uses: actions/download-artifact@v4
  with:
    path: all
- name: Merge files
  run: |
    mkdir dist
    mv all/*/* dist
```
Ok, I've pushed something similar.
.github/workflows/wheels.yml
Outdated
```diff
 with:
-  name: dist
+  name: ${{ matrix.artifact_name }}
```
Can we re-use existing matrix variables instead of introducing a new `artifact_name`?
The tricky bit might be differentiating between these two:
```yaml
- name: "manylinux2014 and musllinux x86_64"
  os: ubuntu-latest
  archs: x86_64
  artifact_name: "manylinux2014_musllinux"
- name: "manylinux_2_28 x86_64"
  os: ubuntu-latest
  archs: x86_64
  build: "*manylinux*"
  manylinux: "manylinux_2_28"
  artifact_name: "manylinux_2_28"
```
If we do `wheel-${{ matrix.os }}-${{ matrix.archs }}-${{ matrix.manylinux }}`, does that error, or do we get something like this?

- wheel-ubuntu-latest-x86_64
- wheel-ubuntu-latest-x86_64-manylinux_2_28
I've pushed a commit to use `${{ matrix.os }}-${{ matrix.archs }}-${{ matrix.manylinux }}`.
We get
- macos-latest-x86_64-
- macos-latest-arm64-
- ubuntu-latest-x86_64-
- ubuntu-latest-x86_64-manylinux_2_28
Thanks!
Since my last suggestion, and based on lots of feedback about the breaking @v4 changes, GitHub have just added `pattern:` and `merge-multiple:` inputs: actions/upload-artifact#472 (comment)

Shall we try that? We'll need a common prefix/pattern for the sdist and all the wheel artifacts.
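For reference, a minimal sketch of what those new v4 inputs could look like (the `dist-*` prefix here is illustrative, not necessarily what this PR ends up using):

```yaml
- uses: actions/download-artifact@v4
  with:
    # Hypothetical common prefix shared by the sdist and wheel artifacts
    pattern: dist-*
    # Merge every matching artifact into a single directory
    merge-multiple: true
    path: dist
```

This replaces both the per-artifact loop and the `mv all/*/* dist` step with a single download action.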
Thanks for finding that. I've updated the PR.
Moreover, the "gate" was implemented incorrectly: https://github.com/marketplace/actions/alls-green#why.
.github/workflows/wheels.yml
Outdated
```diff
 with:
-  name: fribidi
+  name: fribidi_${{ matrix.arch }}
   path: fribidi\*
```

```yaml
sdist:
```
May I suggest building wheels from sdist? This is what pip does — it doesn't build wheels straight from Git source, but from the tarballs. And this is what pypa/build does if you don't pass any args. Doing so will ensure that sdists are actually usable, as a smoke test.
Here's an example: https://github.com/aio-libs/yarl/actions/runs/7258064074/workflow#L124.
P.S. cibuildwheel accepts a path to the sdist tarball natively (in place of a directory path), so integrating this into the wheel build jobs is as easy as downloading the artifact and feeding the sdist from it into the cibuildwheel action.
Oh, and that repo also features the wheel build jobs moved into a reusable workflow and called with different params.
May I suggest building wheels from sdist?
We're currently using the `.ci` directory in the build process, which isn't included in the sdist:

Pillow/.github/workflows/wheels.yml, lines 65 to 67 in 41e45b5:

```yaml
- name: Build wheels
  run: |
    python3 -m pip install -r .ci/requirements-cibw.txt
```
Sure, we could checkout the repository AND use sdist - but that doesn't sound very neat to me.
I know you said that cibuildwheel can simply accept an sdist path, but we're also currently adding in extra test images, which wouldn't be quite so simple when using a compressed sdist.

Pillow/.github/workflows/wheels.yml, lines 99 to 103 in 41e45b5:

```yaml
- name: Checkout extra test images
  uses: actions/checkout@v4
  with:
    repository: python-pillow/test-images
    path: Tests\test-images
```
Oh, and that repo also features the wheel build jobs moved into a reusable workflow
If wheels.yml gets much longer, I could see that being nice, but at the moment, I don't think it's problematic.
Also, neither of these ideas depend on this PR. If you would like to see them become part of Pillow, you could create a separate PR.
We're currently using the `.ci` directory in the build process, which isn't included in the sdist
And this is a reason to include it. Downstreams might want to run tests using the sdist as the source.
If wheels.yml gets much longer, I could see that being nice, but at the moment, I don't think it's problematic.
For me, it's a structural thing that helps layer components. Not necessarily related to how big the YAML files are (although, that's also important).
Also, neither of these ideas depend on this PR. If you would like to see them become part of Pillow, you could create a separate PR.
Sure. These are mostly drive-by comments I made having seen some familiar faces and tech involved.
The argument being that a "skipped" status is incorrectly accepted, yes? This does not appear to be the case for Mergify, which is the only use of these gates here. A recent PR, #7608, only updated documentation, and so most of the gates were skipped. However, the Mergify report did not accept these skips as a success: https://github.com/python-pillow/Pillow/runs/19444375622

Yeah, you're right. Mergify treats these statuses better. But overall, at the GH platform level, it might still be reasonable to implement them correctly, so that somebody doesn't trip over this implementation quirk in the future.
.github/workflows/wheels.yml
Outdated
```diff
 with:
-  name: dist
+  name: ${{ matrix.os }}-${{ matrix.archs }}-${{ matrix.manylinux }}
```
Suggested change:

```diff
-name: ${{ matrix.os }}-${{ matrix.archs }}-${{ matrix.manylinux }}
+name: ${{ matrix.os }}-${{ matrix.archs }}${{ matrix.manylinux && '-' }}${{ matrix.manylinux }}
```

Would get rid of the extra trailing `-`; not sure if it's worth sacrificing a bit of readability to make the output names cleaner?
Could also be something like
```yaml
name: >-
  ${{
    format(
      '{0}-{1}{2}{3}',
      matrix.os,
      matrix.archs,
      (matrix.manylinux && '-' || ''),
      matrix.manylinux
    )
  }}
```
(not tested)
My preference would be in the middle, with `${{ matrix.os }}-${{ matrix.archs }}${{ matrix.manylinux && format('-{0}', matrix.manylinux) }}`, but I have a feeling that would make no one happy.
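To illustrate the expression's effect, here is a toy Python model of the GitHub Actions `&&`/`format()` semantics (illustrative only, not GitHub's actual evaluator):

```python
def artifact_name(os: str, archs: str, manylinux: str = "") -> str:
    # In GitHub Actions expressions, `matrix.manylinux && format('-{0}', matrix.manylinux)`
    # evaluates to '' when manylinux is empty (falsy), so no trailing dash is emitted.
    suffix = f"-{manylinux}" if manylinux else ""
    return f"{os}-{archs}{suffix}"

print(artifact_name("macos-latest", "x86_64"))                     # macos-latest-x86_64
print(artifact_name("ubuntu-latest", "x86_64", "manylinux_2_28"))  # ubuntu-latest-x86_64-manylinux_2_28
```

Matrix entries without a `manylinux` key thus get clean names with no trailing `-`.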
My preference would be in the middle, with [...] but I have a feeling that would make no one happy.
Actually, that looks like the best option to me also.
Cool, thanks, I've pushed a commit for that.
Co-authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
Co-authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
https://github.com/python-pillow/Pillow/actions/runs/7324548352/job/19948625585

But we have that at the job level. Maybe it's because we have an env configured at TestPyPI. I'll remove that and restart the job.

Hmm, same thing. Removed the first one and restarted...

Nope, same thing. Perhaps it's as @webknjaz said in one thread: you can't publish from a PR.
PRs don't have access to most secrets (otherwise they could be trivially extracted by a malicious PR). From https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#using-secrets-in-a-workflow:
And
If I understand this correctly, the required setting can only be enabled for private repositories (where only members of the organization can create PRs).

Yes, PR privileges are lower, at least when the PRs are coming from forks. Some of them can be addressed through

No, my action doesn't access that token. GHA sets two special env vars for OIDC when the `id-token: write` permission is granted.

I recommend keeping the environment. Just make sure it matches what you set up on the PyPI side. I suggest using
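For context, a trusted-publishing job along the lines being discussed might look like this (a sketch only; the `release` environment name and artifact layout are assumptions, not this PR's exact configuration):

```yaml
publish:
  runs-on: ubuntu-latest
  # The environment name must match the one configured on the
  # (Test)PyPI trusted publisher settings page.
  environment: release
  permissions:
    id-token: write  # required for OIDC-based trusted publishing
  steps:
    - uses: actions/download-artifact@v4
      with:
        path: dist
        merge-multiple: true
    - name: Publish to TestPyPI
      uses: pypa/gh-action-pypi-publish@release/v1
      with:
        repository-url: https://test.pypi.org/legacy/
```

With trusted publishing there is no API token secret to store; the action exchanges the workflow's OIDC token for a short-lived PyPI credential.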
Perhaps we should spell it out in the error message that GH does not allow OIDC in untrusted event contexts, like PRs from forks. cc @woodruffw
That's what I thought, but I got confused by the GHA documentation. |
Just to provide some disambiguation:
So TL;DR: you can publish to PyPI from a

Edit: For some additional context on this GitHub behavior, see sigstore/sigstore-conformance#55 -- it's a different project, but the outcome (can't get OIDC tokens from third-party PRs) is the same.
IMO this is likely to cause some confusion, although I think there's a variant we can chip out per #7616 (comment): if the publish is running in a
Thanks for clearing that up! That's all fine; we don't want to deploy from PRs in normal use, it was for testing. So I think we can merge this pretty much as-is and test from

Another option: @radarhere, if you like, we could try things out from your fork. Feel free to add a new publisher for that at: https://test.pypi.org/manage/project/pillow/settings/publishing/

Our next quarterly release is in just a week, 2nd January, so I'd prefer to just get something merged and working first to minimise risk. This will also be the first release with cibuildwheel, plus replacing
Ok, sure. I added a publisher, and merged this to my main. There was already a partial release for 10.2.0.dev0, so I updated the version to 10.2.0.dev1 and it worked: https://github.com/radarhere/Pillow/deployments / https://test.pypi.org/manage/project/pillow/release/10.2.0.dev1/

Great! Did you test environments? Let's merge this, and can you arrange with a repo admin to set up what is needed in this repo and on PyPI? Or for them to give you and me access to do it.
Co-authored-by: Ondrej Baranovič <nulano@nulano.eu>
I tested it with an environment, yes.
This reverts commit fc1cf9f.
@woodruffw that sounds reasonable!
This specializes the token retrieval error handling, providing an alternative error message when the error cause is something that we know can't possibly work due to GitHub's own restrictions on PRs from forks. PR #203 Closes #202 Ref python-pillow/Pillow#7616 Co-authored-by: Sviatoslav Sydorenko <webknjaz@redhat.com>
Helps #7390