Bug 2020107: Remove run-level label #623
Conversation
/retest

1 similar comment

/retest
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
/remove-lifecycle stale
Background here. Install doesn't seem much slower in the e2e-agnostic presubmit that installs with the new code:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cluster-version-operator/623/pull-ci-openshift-cluster-version-operator-master-e2e-agnostic/1412624472652910592/artifacts/e2e-agnostic/ipi-install-install/artifacts/.openshift_install.log | tail
time="2021-07-07T04:54:29Z" level=info msg="Install complete!"
time="2021-07-07T04:54:29Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/tmp/installer/auth/kubeconfig'"
time="2021-07-07T04:54:29Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.ci-op-jjir8t9y-3302f.ci.azure.devcluster.openshift.com"
time="2021-07-07T04:54:29Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: REDACTED
time="2021-07-07T04:54:29Z" level=debug msg="Time elapsed per stage:"
time="2021-07-07T04:54:29Z" level=debug msg=" : 16m45s"
time="2021-07-07T04:54:29Z" level=debug msg="Bootstrap Complete: 7m32s"
time="2021-07-07T04:54:29Z" level=debug msg=" Bootstrap Destroy: 4m49s"
time="2021-07-07T04:54:29Z" level=debug msg=" Cluster Operators: 11m31s"
time="2021-07-07T04:54:29Z" level=info msg="Time elapsed: 40m42s" Although that's ~3m slower than the old code used for the e2e-agnostic-upgrade presubmit: $ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cluster-version-operator/623/pull-ci-openshift-cluster-version-operator-master-e2e-agnostic-upgrade/1412961787673841664/artifacts/e2e-agnostic-upgrade/ipi-install-install-stableinitial/artifacts/.openshift_install.log | tail
time="2021-07-08T03:21:40Z" level=info msg="Install complete!"
time="2021-07-08T03:21:40Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/tmp/installer/auth/kubeconfig'"
time="2021-07-08T03:21:40Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.ci-op-26hc2j3r-7ee27.ci.azure.devcluster.openshift.com"
time="2021-07-08T03:21:40Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: REDACTED
time="2021-07-08T03:21:40Z" level=debug msg="Time elapsed per stage:"
time="2021-07-08T03:21:40Z" level=debug msg=" : 14m56s"
time="2021-07-08T03:21:40Z" level=debug msg="Bootstrap Complete: 6m59s"
time="2021-07-08T03:21:40Z" level=debug msg=" Bootstrap Destroy: 4m54s"
time="2021-07-08T03:21:40Z" level=debug msg=" Cluster Operators: 16m52s"
time="2021-07-08T03:21:40Z" level=info msg="Time elapsed: 43m47s" I'm not sure if the slowdown is statistically significant or a fluke. Also, the CVO requires labels from the manifest to exist in the in-cluster resource, but we do not clear unrecognised labels, so the update presubmit still has them after updating to the patched release: $ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cluster-version-operator/623/pull-ci-openshift-cluster-version-operator-master-e2e-agnostic-upgrade/1412961787673841664/artifacts/e2e-agnostic-upgrade/gather-extra/artifacts/namespaces.json | jq '.items[].metadata | select(.name == "openshift-cluster-version").labels'
{
"kubernetes.io/metadata.name": "openshift-cluster-version",
"name": "openshift-cluster-version",
"olm.operatorgroup.uid/0e8650ee-d36e-47bf-bdb3-b48357056c6b": "",
"openshift.io/cluster-monitoring": "true",
"openshift.io/run-level": "1"
}

Two questions:
@mcoops: This pull request references Bugzilla bug 2020107, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
Requesting review from QA contact.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I suggest building a way to clear the label or annotation. We do it in library-go using a trailing "-" on the key.
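(For comparison, oc/kubectl support removing a label imperatively via the same trailing-dash convention; this may be the pattern being referenced. A minimal sketch, assuming admin access to a throwaway cluster:)

# The trailing "-" on the key tells the API client to delete the label.
$ oc label namespace openshift-cluster-version openshift.io/run-level-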
/retest

/retest
Given the original commit for this was in 2018, it might be possible to remove the label now entirely. However, when doing an upgrade the removal won't be applied, hence any clusters which are upgraded still keep the run-level label. Setting it to an empty string effectively unsets it, so it works for both installs and upgrades.

Signed-off-by: coops <cooper.d.mark@gmail.com>
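(As a rough illustration of what the empty-string approach amounts to — not the actual manifest change, just its imperative equivalent against a live cluster:)

# Pin the label to the empty string; the merge patch then overwrites any
# inherited run-level value instead of leaving it untouched.
$ oc patch namespace openshift-cluster-version --type=merge \
    -p '{"metadata":{"labels":{"openshift.io/run-level":""}}}'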
/retest e2e-agnostic

/test e2e-agnostic

/test e2e-agnostic-upgrade
I had a quick look at implementing something similar to openshift/library-go#727; however, as we'd need to specify […], I think it might just be easier at the moment to set it to an empty string with a comment. That ensures a new install doesn't come up with a run-level set, and an update also unsets it. Otherwise, if we just remove it from the manifest, it only disappears on a new cluster install:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cluster-version-operator/623/pull-ci-openshift-cluster-version-operator-master-e2e-agnostic/1464219753190002688/artifacts/e2e-agnostic/gather-extra/artifacts/namespaces.json | jq '.items[].metadata | select(.name == "openshift-cluster-version").labels'
{
"kubernetes.io/metadata.name": "openshift-cluster-version",
"name": "openshift-cluster-version",
"olm.operatorgroup.uid/b0aeeb7a-918d-4b76-893d-856f61f4bac9": "",
"openshift.io/cluster-monitoring": "true",
"openshift.io/run-level": "",
"pod-security.kubernetes.io/audit": "privileged",
"pod-security.kubernetes.io/enforce": "privileged",
"pod-security.kubernetes.io/warn": "privileged"
}

Then on upgrade:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cluster-version-operator/623/pull-ci-openshift-cluster-version-operator-master-e2e-agnostic-upgrade/1465099749160914944/artifacts/e2e-agnostic-upgrade/gather-extra/artifacts/namespaces.json | jq '.items[].metadata | select(.name == "openshift-cluster-version").labels'
{
"kubernetes.io/metadata.name": "openshift-cluster-version",
"name": "openshift-cluster-version",
"olm.operatorgroup.uid/0d609421-4ec9-4b0c-b3ee-37ef0c060fca": "",
"openshift.io/cluster-monitoring": "true",
"openshift.io/run-level": "",
"pod-security.kubernetes.io/audit": "privileged",
"pod-security.kubernetes.io/enforce": "privileged",
"pod-security.kubernetes.io/warn": "privileged"
}

And admits with an SCC assigned:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cluster-version-operator/623/pull-ci-openshift-cluster-version-operator-master-e2e-agnostic-upgrade/1465099749160914944/artifacts/e2e-agnostic-upgrade/gather-extra/artifacts/pods.json | jq '.items[].metadata | select(.name | startswith("cluster-version-operator-")).annotations'
{
"openshift.io/scc": "hostaccess"
}

Also matches what the MCO is now doing: openshift/machine-config-operator#2655

Although I have no idea what's wrong with the tests @wking?
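(For anyone reproducing the SCC check against a live cluster rather than the CI artifacts, a sketch using the same jq filter as above:)

$ oc -n openshift-cluster-version get pods -o json \
    | jq '.items[].metadata | select(.name | startswith("cluster-version-operator-")).annotations["openshift.io/scc"]'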
/lgtm
We can come back and check install durations in a week once we have a larger sample size, to gauge any slowdowns.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mcoops, wking

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/retest-required Please review the full test history for this PR and help us cut down flakes.

1 similar comment

/retest-required Please review the full test history for this PR and help us cut down flakes.
@mcoops: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest-required Please review the full test history for this PR and help us cut down flakes.

1 similar comment

/retest-required Please review the full test history for this PR and help us cut down flakes.
@mcoops: All pull requests linked via external trackers have merged: Bugzilla bug 2020107 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This blocks us from being associated with SecurityContextConstraints that set 'readOnlyRootFilesystem: true', because from [1]:

> The set of SCCs that admission uses to authorize a pod are
> determined by the user identity and groups that the user belongs
> to. Additionally, if the pod specifies a service account, the set of
> allowable SCCs includes any constraints accessible to the service
> account.
>
> Admission uses the following approach to create the final security
> context for the pod:
>
> 1. Retrieve all SCCs available for use.
> 2. Generate field values for security context settings that were not
>    specified on the request.
> 3. Validate the final settings against the available constraints.

If we leave readOnlyRootFilesystem implicit, we may get associated with an SCC that sets 'readOnlyRootFilesystem: true', and the version-* actions will fail like [2]:

$ oc -n openshift-cluster-version get pods
NAME                                        READY   STATUS    RESTARTS   AGE
cluster-version-operator-6b5c8ff5c8-4bmxx   1/1     Running   0          33m
version-4.10.20-smvt9-6vqwc                 0/1     Error     0          10s
$ oc -n openshift-cluster-version logs version-4.10.20-smvt9-6vqwc
mv: cannot remove '/manifests/0000_00_cluster-version-operator_00_namespace.yaml': Read-only file system
mv: cannot remove '/manifests/0000_00_cluster-version-operator_01_adminack_configmap.yaml': Read-only file system
...

For a similar change in another repository, see [3]. Also likely relevant: 4.10 both grew pod-security.kubernetes.io/* annotations [4] and cleared the openshift.io/run-level annotation [5].

$ git --no-pager log --oneline -3 origin/release-4.10 -- install/0000_00_cluster-version-operator_00_namespace.yaml
539e944 (origin/pr/623) Fix run-level label to empty string.
f58dd1c (origin/pr/686) install: Add description annotations to manifests
6e5e23e (origin/pr/668) podsecurity: enforce privileged for openshift-cluster-version namespace

None of those were in 4.9:

$ git --no-pager log --oneline -1 origin/release-4.9 -- install/0000_00_cluster-version-operator_00_namespace.yaml
7009736 (origin/pr/543) Add management workload annotations

And all of them landed in 4.10 via master (so they're in 4.10 before it GAed, and in 4.11 and later too):

$ git --no-pager log --oneline -4 origin/master -- install/0000_00_cluster-version-operator_00_namespace.yaml
539e944 (origin/pr/623) Fix run-level label to empty string.

[1]: https://docs.openshift.com/container-platform/4.10/authentication/managing-security-context-constraints.html#admission_configuring-internal-oauth
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=2110590#c0
[3]: openshift/cluster-openshift-apiserver-operator#437
[4]: openshift#668
[5]: openshift#623
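(To see which SCCs on a given cluster would force this failure mode, a quick sketch; assumes cluster-admin, and relies on readOnlyRootFilesystem being a top-level SecurityContextConstraints field:)

$ oc get scc -o json \
    | jq -r '.items[] | select(.readOnlyRootFilesystem == true).metadata.name'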
The label was dropped back in:

$ git --no-pager log -1 --oneline 75f34c7
75f34c7 manifests: Remove run-level, insights operator does not need it

That landed between 4.2 and 4.3:

$ git --no-pager grep openshift.io/run-level origin/release-4.2 -- manifests
origin/release-4.2:manifests/02-namespace.yaml:    openshift.io/run-level: "1"
$ git --no-pager grep openshift.io/run-level origin/release-4.3 -- manifests
...no hits...

So clusters which were born in 4.1 or 4.2 may still have the old label in place. This commit clears it like openshift/cluster-version-operator@539e944920 (Fix run-level label to empty string, 2021-07-07, openshift/cluster-version-operator#623), so the cluster-version operator will clear the stale label.

Co-authored-by: W. Trevor King <wking@tremily.us>
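(A sketch for checking whether a long-lived cluster still carries the stale label, openshift-insights being the insights operator's namespace; non-null output means the label survives:)

$ oc get namespace openshift-insights -o json \
    | jq '.metadata.labels["openshift.io/run-level"]'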
Given the original commit for this was in 2018, it might be possible to remove the label now entirely. But given #24 specifically set it as a dependency for the openshift-apiserver, I doubt it.
Will use this PR for testing and further discussion.