This repository has been archived by the owner on Jul 30, 2021. It is now read-only.

pkg/asset/internal: remove critical pod annotations #728

Closed

Conversation

@squat (Contributor) commented Oct 4, 2017

This PR mirrors the work done in coreos/tectonic-installer#1702 to remove the critical pod annotations for control-plane components.
Question: do we also want to remove these annotations from kube-dns and calico?

cc @diegs cc @aaronlevy
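
For reference, a minimal sketch of where this annotation sits in a pod template. The component name and image below are placeholders for illustration, not copied from templates.go; only the annotation key is taken verbatim from this PR:

```yaml
# Sketch only: abbreviated pod template, not the exact manifest from templates.go.
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler          # example control-plane component
  namespace: kube-system
  annotations:
    # The annotation this PR removes; it is only consumed by the rescheduler.
    scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
  containers:
  - name: kube-scheduler
    image: example/hyperkube    # placeholder image
    command: ["kube-scheduler"]
```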

@k8s-ci-robot added the labels cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) and size/S (denotes a PR that changes 10-29 lines, ignoring generated files) on Oct 4, 2017
@diegs (Contributor) commented Oct 4, 2017

I think we should remove them from all of them; they're only relevant for the rescheduler.

@diegs (Contributor) left a review comment

LGTM assuming this is all of them.

@dghubble (Contributor) commented Oct 25, 2017

I still spot a few references in calico, calico (canal), and kube-dns. cc @squat

pkg/asset/internal/templates.go:        scheduler.alpha.kubernetes.io/critical-pod: ''
pkg/asset/internal/templates.go:        scheduler.alpha.kubernetes.io/critical-pod: ''
pkg/asset/internal/templates.go:        scheduler.alpha.kubernetes.io/critical-pod: ''

@dghubble (Contributor) commented Nov 1, 2017

Friendly ping @squat, or should we modify this in a separate PR?

@squat (Contributor, Author) commented Nov 2, 2017

This fell off my radar. Let me push a new commit.

@dghubble (Contributor) commented

Superseded by #777

@dghubble closed this Nov 17, 2017