privileged port issue on Raspberry Pi devices in CI #36847
@nodejs/testing @nodejs/docker @nodejs/build
Given that this is blocking landing any code changes, I'd recommend marking this as a flaky test on that platform for now and investigating separately.
Per rvagg:

```
Persistent failure, even after restarts of the whole cluster. #36478 was
merged into this test yesterday but the parent commit still has the
failures. What has changed is the Docker version. They all got an upgrade
to 5:20.10.2~3-0~raspbian-buster and this is all running inside containers.
It's going to be the newest version of Docker running in our CI and I
wonder whether we're going to see similar failures when we upgrade other
hosts or if this is going to be restricted to ARM. Other than that, I'm not
sure what this could be. It seems like a straightforward test that
shouldn't fail, maybe Docker has introduced something new for unprivileged
port binding inside containers?
```

Signed-off-by: James M Snell <jasnell@gmail.com>
PR-URL: #36850
Refs: #36847
Reviewed-By: Michael Dawson <midawson@redhat.com>
Reviewed-By: Mary Marchini <oss@mmarchini.me>
@Trott when you log in to a Pi you're logged in to the actual machine, not a container. The way we run these machines is a bit complicated due to a combination of resource constraints, security, and the need to run multiple OS versions for testing. We have multiple Docker containers running full-time on each of the Pi machines from startup; when we do a CI run, we set it up on the machine, then delegate into a container to take over and run the rest of the script (build and test). But we can duplicate that behaviour by copying how CI runs it, as the same user. So here's how that looks:
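A rough sketch of that delegation step, assuming a placeholder container name and user (the real names used on the Pi hosts aren't reproduced here):

```console
# On the Pi host: step into one of the long-running test containers the
# same way the CI scripts delegate into it.
$ docker exec -it -u ci-user test-container /bin/bash
```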
then
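Inside the container, as an unprivileged user, try to bind a low port; this mirrors the command from the original report (the output line is the normal Python 2 server banner):

```console
# Binding port 80 as a non-root user inside the container: with Docker 20.10
# this now succeeds instead of failing with "Permission denied".
$ python -m SimpleHTTPServer 80
Serving HTTP on 0.0.0.0 port 80 ...
```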
... In a separate session:
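Any quick check that the socket really is bound will do; for example (illustrative):

```console
# From a second shell in the same container, confirm something is
# listening on port 80.
$ python -c "import socket; socket.create_connection(('127.0.0.1', 80)).close(); print('port 80 is open')"
port 80 is open
```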
i.e. it's running and hasn't exited. I can Ctrl-C the original session and it stays running; I have to kill the process explicitly. I think this validates my original guess about the cause? But it also raises questions: is this new Docker behaviour, or is it limited to the ARM or Raspbian version(s)? If it's new behaviour, what are we going to do about it when the new version makes it to our other Docker hosts? I was going to reprovision a couple of our main Docker hosts the other day, the ones that run our containerised tests, because they're getting a bit long in the tooth without a full image upgrade, but I suspect if I did that we might encounter the same thing there. Perhaps this is new Docker behaviour, but there may also be a way to disable it. Needs some research.
Guessing this is it: https://docs.docker.com/engine/release-notes/#20100, under "Security" for the 20.10.0 @ 2020-12-08 release:

> Add default sysctls to allow ping sockets and privileged ports with no capabilities moby/moby#41030
The gist of that moby change: containers now start with the `net.ipv4.ip_unprivileged_port_start` sysctl set to 0, so binding to ports below 1024 inside a container no longer requires root or `CAP_NET_BIND_SERVICE`.
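One way to see this on an affected host (the values shown are what you'd expect from Docker ≥ 20.10 defaults, not captured from the Pi machines):

```console
# Inside a container started by Docker 20.10, every port is unprivileged...
$ docker run --rm debian:buster cat /proc/sys/net/ipv4/ip_unprivileged_port_start
0

# ...while the host itself still treats ports below 1024 as privileged.
$ cat /proc/sys/net/ipv4/ip_unprivileged_port_start
1024
```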
I think if we start the containers with `--sysctl net.ipv4.ip_unprivileged_port_start=1024` it should restore the old behaviour.
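For the container start-up that would look something like this (the `--sysctl` option is standard `docker run`; the image and command are placeholders):

```console
# Treat ports below 1024 as privileged again inside the container,
# so the cluster tests get the EACCES they expect.
$ docker run -d --sysctl net.ipv4.ip_unprivileged_port_start=1024 <image> <command>
```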
Yep, that does it; tried it on a machine which is currently offline by adding that flag when starting the containers.
Will move to a nodejs/build PR to propose this as an option. I'm guessing there probably won't be an objection to this for now, although it does suggest some subtle changes in how we're supposed to view "privileged ports" on Linux going forward.
Since Docker 20.10.0 @ 2020-12-08, port binding has been made unrestricted. This change undoes that by ensuring that <1024 are privileged. Node.js' test suite assumes that binding to a lower port will result in a privilege failure so we need to create an environment suitable for that assumption.

Ref: nodejs/node#36847
This reverts commit a45a404. Solved by marking ports <1024 as privileged on Docker containers.

Ref: #36850
Ref: #36847
Ref: nodejs/build#2521
PR-URL: #36884
Refs: #36850
Refs: #36847
Refs: nodejs/build#2521
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Gireesh Punathil <gpunathi@in.ibm.com>
Reviewed-By: Richard Lau <rlau@redhat.com>
Reviewed-By: Daijiro Wachi <daijiro.wachi@gmail.com>
Reviewed-By: Ash Cripps <acripps@redhat.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: Michael Dawson <midawson@redhat.com>
CI is red until the privileged port issue we're seeing on Raspberry Pi devices is sorted out.
The tests that are failing are `test-cluster-shared-handle-bind-privileged-port` and `test-cluster-bind-privileged-port`.
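They can be run directly with the test runner from a Node.js checkout; a sketch, assuming an already-built tree:

```console
# Run just the two affected tests.
$ python tools/test.py parallel/test-cluster-bind-privileged-port \
    parallel/test-cluster-shared-handle-bind-privileged-port
```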
@rvagg suspects a Docker update; his analysis is quoted in the commit message above.
I logged into one of the machines over SSH and ran `python -m SimpleHTTPServer 80` to see if this was a case of "oh well, ports less than 1024 don't require root anymore" like we saw a while back on Mojave, but nope. Couldn't bind to port 80 (or 42) as an unprivileged user.
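For reference, this is roughly what that host-side check looks like while privileged ports are still enforced (traceback abbreviated):

```console
# On the Pi host itself (outside Docker), an unprivileged user still
# can't bind a port below 1024.
$ python -m SimpleHTTPServer 80
socket.error: [Errno 13] Permission denied
```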