bump CNI to v0.6.0 #446
Conversation
seems fine to me; not sure if there's someone who understands this better, though.
@ixdy +1 Let's ship it!
/assign @mikedanese
@mikedanese @luxas @ixdy - Can we please ship this?
cc @timothysc
Thanks @dixudx, LGTM
However, we can't merge this yet as we need to figure out how to make k8s v1.8 NOT pick up this change...
@luxas Seems we can't restrict installing […]
@luxas So what's the plan here?
That's just a spec file version dep change to >= 0.6.0.
It's flat. :(
There hasn't been a need so far to branch this repo.
Force-pushed from cb13cf8 to 44e1af7.
Thanks @david-mcmahon for the link. @luxas @mikedanese Updated. PTAL. Thanks.
debian/build.go (Outdated)
if sv.GTE(v190) {
	return fmt.Sprintf(">= %s", cniVersion), nil
}
return fmt.Sprintf("< %s", cniVersion), nil
Should we pin to exactly 0.5.1 for k8s <v1.9? We probably don't want this to move the next time we bump cniVersion.
Reasonable. Agree.
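For illustration, a minimal sketch of what the pinned variant could look like in debian/build.go, assuming the v190 threshold and cniVersion constant from the snippets in this review (the import path and the pinnedCNIVersion constant are assumptions, not the PR's actual code):

package main

import (
	"fmt"

	"github.com/blang/semver" // assumed import, matching the sv.GTE usage above
)

var v190 = semver.MustParse("1.9.0")

const (
	cniVersion       = "0.6.0"
	pinnedCNIVersion = "0.5.1" // named constant rather than a floating literal
)

// cniDependency returns the kubernetes-cni package constraint for the
// Kubernetes version being packaged: 1.9+ tracks cniVersion, while older
// releases are pinned exactly so a later cniVersion bump cannot move them.
func cniDependency(sv semver.Version) string {
	if sv.GTE(v190) {
		return fmt.Sprintf(">= %s", cniVersion)
	}
	return fmt.Sprintf("= %s", pinnedCNIVersion)
}

func main() {
	fmt.Println(cniDependency(semver.MustParse("1.8.4"))) // "= 0.5.1"
	fmt.Println(cniDependency(semver.MustParse("1.9.0"))) // ">= 0.6.0"
}

Naming the pinned version as a constant would also address the floating "<= 0.5.1" literal flagged later in this review.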
curl -sSL --fail --retry 5 \
-  "https://dl.k8s.io/network-plugins/cni-{{ .Arch }}-$(CNI_RELEASE).tar.gz" \
+  "https://dl.k8s.io/network-plugins/cni-plugins-{{ .Arch }}-$(CNI_RELEASE).tgz" \
  | tar xz
Should we change CNI_RELEASE to CNI_VERSION?
rpm/kubelet.spec (Outdated)
@@ -24,12 +24,12 @@ Source1: kubelet.service
 Source2: https://dl.k8s.io/v%{KUBE_VERSION}/bin/linux/%{ARCH}/kubectl
 Source3: https://dl.k8s.io/v%{KUBE_VERSION}/bin/linux/%{ARCH}/kubeadm
 Source4: 10-kubeadm.conf
-Source5: https://dl.k8s.io/network-plugins/cni-%{ARCH}-%{CNI_RELEASE}.tar.gz
+Source5: https://dl.k8s.io/network-plugins/cni-plugins-%{ARCH}-%{CNI_VERESION}.tgz
There's a typo in CNI_VERSION.
rpm/kubelet.spec (Outdated)
 BuildRequires: curl
 Requires: iptables >= 1.4.21
-Requires: kubernetes-cni >= 0.5.1
+Requires: kubernetes-cni >= 0.6.0
Doesn't this need to be pinned for k8s <1.9 like for debs?
Force-pushed from 15985ec to 36977e5.
if sv.GTE(v190) {
	return fmt.Sprintf(">= %s", cniVersion), nil
}
return fmt.Sprint("<= 0.5.1"), nil
it's a little weird to have this constant floating here, but otherwise this looks fine to me.
Hey @dixudx, I'm taking a look now, but I'm hoping you can comment on how much testing this has gone through. Is the current expectation that with this PR, previous versions of Kubernetes should have their CNI versions untouched, and upgrading/downgrading with 1.9.x works as expected? Also, can we use the updated scripts to cut prerelease debs/rpms against 1.8.x and 1.9.x so that we can have independent verification that everything works as expected? Who would be the right person to QA the new packages from a networking perspective? Thanks!
@dixudx In order to better understand the flow, I cloned your branch locally, but am actually seeing errors when trying to build RPMs for either 1.8.0 or 1.9.0. This build happens in a container, so the failures should be reproducible, and are certainly consistent in my local testing. If I leave […]
I think that the changelog warning is a red herring, and the real failure is around […]. If I change […]
Again, the changelog failure looks like a warning, and this time the error is from the […]
I was able to overcome the 1.8.x build errors with a local patch to move the […]
Even if we semantically wanted to move everything from the CWD into /opt/cni/bin, the CWD contains all of the other source files, like the kubeadm/kubectl/kubelet binaries, configuration files, etc., not just the contents of the cni-plugins archive, so this seems doubly broken. I'll see if there's a better way to extract the archive to a stand-alone directory (see the sketch below), or else we can just enumerate all of the individual files we want moved into /opt/cni/bin.
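For illustration, one way to avoid sweeping the whole CWD along is to untar into a dedicated staging directory and copy only its contents into place. A rough Go sketch under that assumption (the helper, paths, and exec-based approach are hypothetical, not what the follow-up PR actually does):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractCNI unpacks the cni-plugins archive into a throwaway staging
// directory so that only the archive's contents, not the kubeadm/kubectl/
// kubelet binaries and config files sitting in the build CWD, are copied
// into /opt/cni/bin.
func extractCNI(archive, destDir string) error {
	staging, err := os.MkdirTemp("", "cni-plugins-")
	if err != nil {
		return err
	}
	defer os.RemoveAll(staging)

	// Extract only into the staging directory.
	if out, err := exec.Command("tar", "-xzf", archive, "-C", staging).CombinedOutput(); err != nil {
		return fmt.Errorf("untar failed: %v: %s", err, out)
	}
	// Copy the staging contents into place; "cp -a staging/. dest" also
	// picks up hidden files that a shell glob would miss.
	if out, err := exec.Command("cp", "-a", staging+"/.", destDir).CombinedOutput(); err != nil {
		return fmt.Errorf("install failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractCNI("cni-plugins-amd64-v0.6.0.tgz", "/opt/cni/bin"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}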
Okay, I believe I have workarounds for the rpm build problems, and will open another PR with my patches for this diff. I still think it's worthwhile to set up test clusters using the 1.8 and 1.9 artifacts to vet that they're doing the right thing.
Another issue I found: when building the rpms for Kubernetes 1.8.0, even though it looks like it pulls the old version of CNI, it will still name the output rpm […]
I've created #486 to build on top of this PR and fix the rpm build problems I found. To view just my changes, see its second commit. I would still like to do some cluster end-to-end tests using the OS packages before the real release, though.
@pipejakob Thanks for your work.
I tried building and installing the debs using 1.9.0-beta.2, but get an error: […]
I think the […]
@pipejakob This will help solve the problem, but I am not sure whether we should keep this in the later official release; let's keep this open. For such prerelease versions, this seems to be the only way to fix it.
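If the failure is about the prerelease tag (the elided details above make it hard to tell), one common Debian-packaging wrinkle is version ordering: the hyphen in 1.9.0-beta.2 would be read as the start of a Debian revision, and prereleases are conventionally encoded with a tilde (1.9.0~beta.2) so they sort before the final 1.9.0. A hypothetical normalization helper, not the PR's actual fix:

package main

import (
	"fmt"
	"strings"
)

// debVersion rewrites a semver-style prerelease tag into Debian's tilde
// convention: "1.9.0-beta.2" -> "1.9.0~beta.2". In deb version ordering,
// "~" sorts before everything (including the empty string), so the
// prerelease correctly precedes the final 1.9.0 release.
func debVersion(v string) string {
	return strings.Replace(v, "-", "~", 1)
}

func main() {
	fmt.Println(debVersion("1.9.0-beta.2")) // 1.9.0~beta.2
}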
It also seems that if you try to build debs for older releases, it will always build CNI 0.6.0 instead of 0.5.1. I believe the kubelet dependency on kubernetes-cni is still correct, so I'm not sure how important it is to be able to build kubernetes-cni 0.5.1 on HEAD (since the 0.5.1 version is already published in our stable Debian repository).
Since we've already shipped kubernetes-cni 0.5.1, IMO there is no need to build that on HEAD.
@enisoc reminded me about a problem that was brought up earlier on this PR: preventing 1.8 users from accidentally picking up kubernetes-cni 0.6.0, since the old debs/rpms give the range ">= 0.5.1". TL;DR: I think we should build new OS packages for older releases using the new, corrected CNI dependency, change that dependency to always be "=" instead of ">=" or "<=", and make sure we publish them before the 1.9 release.

Even if we rebuilt old 1.8.x artifacts to fix the cni dependency to use "= 0.5.1" or "<= 0.5.1", I don't think we can overwrite them in the repository in such a way that client-side caching is going to pick up the new definitions for package versions it already knows about. I'll do some testing to verify that. It also seems a little dirty to overwrite previously-published versions anyway. However, we can roll forward by publishing new versions with a new package revision and optionally deleting the old ones (although I don't think that's really necessary).

How about this plan: we build new debs/rpms for old < 1.9 releases using a new package revision (e.g. if we already published […]). Then afterward, we publish the 1.9.0 and CNI 0.6.0 OS packages. If my understanding is correct, this will leave no window in time where a full system OS package upgrade would erroneously pick up kubernetes-cni 0.6.0 while leaving kubelet < 1.9, and a user would have to go out of their way to get into a bad state. These are the two cases I think could leave a user in a bad state: […]
I think (1) is unlikely because we have kept CNI frozen for so long that I can't imagine someone specifically wanting to upgrade CNI while keeping Kubernetes untouched. It is much more common to do […]. I also think that none of this would have been necessary if our dependencies used "=" in the first place.
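(Editorial illustration of the revision bump the plan above implies; the helper and numbering format are hypothetical, and real revision handling lives in the packaging scripts.)

package main

import "fmt"

// nextRevision returns the next package revision for an already-published
// version, e.g. revision 0 of "1.8.4" -> "1.8.4-01". Publishing old
// releases under a new revision with a corrected "= 0.5.1" kubernetes-cni
// dependency rolls forward without overwriting artifacts that client-side
// caches already know about.
func nextRevision(version string, lastPublished int) string {
	return fmt.Sprintf("%s-%02d", version, lastPublished+1)
}

func main() {
	fmt.Println(nextRevision("1.8.4", 0)) // 1.8.4-01
}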
@pipejakob That plan SGTM.
@pipejakob thank you for taking ownership of this pkg'ing issue.
As discussed here: kubernetes#446 (comment)
I'm working on a script to backfill the old debs/rpms with new builds using the updated dependency. Unfortunately, since this process is still Google-internal, I can only do an internal code/process review for it.
As an update, I've sent out my backfill script for review internally, and it has already been running to rebuild old versions. I've been spot-checking the artifacts, and things look good, so when it's done, I can upload them to our unstable channels for more testing before promoting to stable. It would be great to get this and #486 checked in, though. I don't have permissions to the release repo, and I don't think we have a bot running here to autosubmit, so we still need someone else to review/approve. @ixdy, do you have bandwidth to review these?
@thockin can you help with the request from @pipejakob to bring cni 0.6.0 over the finish line?
I believe we included these commits in #486, which has been merged.
xref kubernetes/kubernetes#49480
The debs/rpms in kubernetes/release need to be updated as well.
/cc @luxas