
OCPNODE-1632: Support ClusterImagePolicy CRD #4160

Merged
merged 1 commit into from
Apr 11, 2024

Conversation

QiWang19
Member

@QiWang19 QiWang19 commented Feb 2, 2024

- What I did
Follows the design from the enhancement. Adds the implementation to the ContainerRuntimeConfig controller for policy.json configuration.
- How to verify it

  • Verify on a cluster with TechPreview enabled:
apiVersion: config.openshift.io/v1alpha1
kind: ClusterImagePolicy 
metadata:
  name: p0
spec:
  scopes:
    - registry.build03.ci.openshift.org
    - registry.ci.openshift.org
    - example.com/global
  policy:
    rootOfTrust:
      policyType: PublicKey
      publicKey:
        keyData: Zm9vIGJhcg==
    signedIdentity:
      matchPolicy: MatchRepoDigestOrExact
oc create -f /home/qiwan/Documents/test-crds/hands-on-sigstore/clusterimgpolicy.yaml
#check policy.json and /etc/containers/registries.d/sigstore-registries.yaml 
$ oc debug node/ip-10-0-53-139.ec2.internal
Starting pod/ip-10-0-53-139ec2internal-debug-5rftt ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.53.139
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# cat /etc/containers/policy.json 
{
  "default": [{"type": "insecureAcceptAnything"}],
  "transports": {
    "atomic": {
      "example.com/global": [
        {"type": "sigstoreSigned", "keyData": "Zm9vIGJhcg==", "signedIdentity": {"type": "matchRepoDigestOrExact"}}
      ],
      "registry.ci.openshift.org": [
        {"type": "sigstoreSigned", "keyData": "Zm9vIGJhcg==", "signedIdentity": {"type": "matchRepoDigestOrExact"}}
      ]
    },
    "docker": {
      "example.com/global": [
        {"type": "sigstoreSigned", "keyData": "Zm9vIGJhcg==", "signedIdentity": {"type": "matchRepoDigestOrExact"}}
      ],
      "registry.ci.openshift.org": [
        {"type": "sigstoreSigned", "keyData": "Zm9vIGJhcg==", "signedIdentity": {"type": "matchRepoDigestOrExact"}}
      ]
    },
    "docker-daemon": {
      "": [{"type": "insecureAcceptAnything"}]
    }
  }
}
sh-5.1# cat /etc/containers/registries.d/sigstore-registries.yaml 
docker:
  example.com/global:
    use-sigstore-attachments: true
  registry.ci.openshift.org:
    use-sigstore-attachments: true

$ oc describe clusterimagepolicy.config.openshift.io/p0
Name:         p0
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1alpha1
Kind:         ClusterImagePolicy
Metadata:
  Creation Timestamp:  2024-02-29T05:05:19Z
  Generation:          1
  Resource Version:    63708
  UID:                 b18c7cf6-995d-47e7-80a0-4780ed7c52a5
Spec:
  Policy:
    Root Of Trust:
      Policy Type:  PublicKey
      Public Key:
        Key Data:  Zm9vIGJhcg==
    Signed Identity:
      Match Policy:  MatchRepoDigestOrExact
  Scopes:
    registry.build03.ci.openshift.org
    registry.ci.openshift.org
    example.com/global
Status:
  Conditions:
    Last Transition Time:  2024-02-29T05:05:19Z
    Message:               has conflict scope(s) ["registry.build03.ci.openshift.org"] of Openshift payload repository registry.build03.ci.openshift.org/ci-ln-ltipm3t/release, skip the scope(s)
    Observed Generation:   1
    Reason:                ConflictScopes
    Status:                True
    Type:                  Pending
Events:                    <none>

registry.build03.ci.openshift.org does not appear in policy.json because it is the release payload image repository.
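The conflict check reported in the status above can be sketched as a small helper (the name and signature are hypothetical; the real logic lives in the ContainerRuntimeConfig controller): a scope conflicts when it equals the payload repository or the repository is nested under it.

```go
package main

import (
	"fmt"
	"strings"
)

// scopeConflictsWithPayload reports whether a ClusterImagePolicy scope would
// cover the OpenShift release payload repository. Illustrative sketch only.
func scopeConflictsWithPayload(scope, payloadRepo string) bool {
	// Exact match, or the payload repo is nested under the scope
	// (e.g. scope "registry.build03.ci.openshift.org" covers
	// "registry.build03.ci.openshift.org/ci-ln-ltipm3t/release").
	return scope == payloadRepo || strings.HasPrefix(payloadRepo, scope+"/")
}

func main() {
	payload := "registry.build03.ci.openshift.org/ci-ln-ltipm3t/release"
	for _, s := range []string{
		"registry.build03.ci.openshift.org",
		"registry.ci.openshift.org",
		"example.com/global",
	} {
		fmt.Printf("%s conflicts: %v\n", s, scopeConflictsWithPayload(s, payload))
	}
}
```

This matches the observed output: only the registry.build03 scope is skipped, while registry.ci.openshift.org and example.com/global make it into policy.json.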

  • Verify on a cluster without TechPreview; verify the patch does not fail the upgrade:
$ oc adm upgrade --force=true --allow-explicit-upgrade=true --to-image registry.ci.openshift.org/ocp/release:4.16.0-0.nightly-2024-02-29-062601

  • Verified the ClusterImagePolicy CR from the manifests directory was applied by bootstrap.
    - Description for the changelog

Contributor

openshift-ci bot commented Feb 2, 2024

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 2, 2024
@QiWang19 QiWang19 changed the title Support Cluster ImagePolicy CRD OCPNODE-1632: Support Cluster ImagePolicy CRD Feb 2, 2024
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Feb 2, 2024
@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 2, 2024

@QiWang19: This pull request references OCPNODE-1632 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.16.0" version, but no target version was set.

In response to this:

- What I did

- How to verify it

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@QiWang19 QiWang19 force-pushed the cluster-policies branch 2 times, most recently from d31d7ce to c571e3c Compare February 3, 2024 20:38
Contributor

openshift-ci bot commented Feb 3, 2024

@QiWang19: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

  • /test 4.12-upgrade-from-stable-4.11-images
  • /test cluster-bootimages
  • /test e2e-aws-ovn
  • /test e2e-aws-ovn-upgrade
  • /test e2e-gcp-op
  • /test e2e-gcp-op-single-node
  • /test e2e-hypershift
  • /test images
  • /test okd-scos-images
  • /test unit
  • /test verify

The following commands are available to trigger optional jobs:

  • /test 4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade
  • /test bootstrap-unit
  • /test e2e-aws-disruptive
  • /test e2e-aws-ovn-fips
  • /test e2e-aws-ovn-fips-op
  • /test e2e-aws-ovn-upgrade-out-of-change
  • /test e2e-aws-ovn-workers-rhel8
  • /test e2e-aws-proxy
  • /test e2e-aws-serial
  • /test e2e-aws-single-node
  • /test e2e-aws-upgrade-single-node
  • /test e2e-aws-workers-rhel8
  • /test e2e-azure
  • /test e2e-azure-ovn-upgrade
  • /test e2e-azure-ovn-upgrade-out-of-change
  • /test e2e-azure-upgrade
  • /test e2e-gcp-op-layering
  • /test e2e-gcp-ovn-rt-upgrade
  • /test e2e-gcp-rt
  • /test e2e-gcp-rt-op
  • /test e2e-gcp-single-node
  • /test e2e-gcp-upgrade
  • /test e2e-metal-assisted
  • /test e2e-metal-ipi
  • /test e2e-metal-ipi-ovn-dualstack
  • /test e2e-metal-ipi-ovn-ipv6
  • /test e2e-openstack
  • /test e2e-openstack-dualstack
  • /test e2e-openstack-externallb
  • /test e2e-openstack-parallel
  • /test e2e-ovirt
  • /test e2e-ovirt-upgrade
  • /test e2e-ovn-step-registry
  • /test e2e-vsphere
  • /test e2e-vsphere-upgrade
  • /test e2e-vsphere-upi
  • /test e2e-vsphere-upi-zones
  • /test e2e-vsphere-zones
  • /test okd-e2e-aws
  • /test okd-e2e-gcp-op
  • /test okd-e2e-upgrade
  • /test okd-e2e-vsphere
  • /test okd-images
  • /test okd-scos-e2e-aws-ovn
  • /test okd-scos-e2e-gcp-op
  • /test okd-scos-e2e-gcp-ovn-upgrade
  • /test okd-scos-e2e-vsphere

Use /test all to run the following jobs that were automatically triggered:

  • pull-ci-openshift-machine-config-operator-master-bootstrap-unit
  • pull-ci-openshift-machine-config-operator-master-e2e-aws-ovn
  • pull-ci-openshift-machine-config-operator-master-e2e-aws-ovn-upgrade
  • pull-ci-openshift-machine-config-operator-master-e2e-aws-ovn-upgrade-out-of-change
  • pull-ci-openshift-machine-config-operator-master-e2e-azure-ovn-upgrade-out-of-change
  • pull-ci-openshift-machine-config-operator-master-e2e-gcp-op
  • pull-ci-openshift-machine-config-operator-master-e2e-gcp-op-layering
  • pull-ci-openshift-machine-config-operator-master-e2e-gcp-op-single-node
  • pull-ci-openshift-machine-config-operator-master-e2e-hypershift
  • pull-ci-openshift-machine-config-operator-master-images
  • pull-ci-openshift-machine-config-operator-master-okd-images
  • pull-ci-openshift-machine-config-operator-master-okd-scos-e2e-aws-ovn
  • pull-ci-openshift-machine-config-operator-master-okd-scos-images
  • pull-ci-openshift-machine-config-operator-master-unit
  • pull-ci-openshift-machine-config-operator-master-verify

In response to this:

/test required

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@QiWang19 QiWang19 marked this pull request as ready for review February 4, 2024 04:34
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 4, 2024
@QiWang19
Member Author

QiWang19 commented Feb 5, 2024

/retest-required

@QiWang19
Member Author

QiWang19 commented Feb 6, 2024

@mtrmac This is ready for review. Could you take a look?

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 6, 2024

@QiWang19: This pull request references OCPNODE-1632 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.16.0" version, but no target version was set.

In response to this:

- What I did
Follows the design from the enhancement. Adds the implementation to the ContainerRuntimeConfig controller for policy.json configuration.
- How to verify it

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

Contributor

@mtrmac mtrmac left a comment

A fairly brief look for now, mostly demonstrating my k8s ignorance. Looks reasonable to me overall.

I didn’t read the tests at all.

manifests/machineconfigcontroller/clusterrole.yaml Outdated Show resolved Hide resolved
pkg/daemon/constants/constants.go Outdated Show resolved Hide resolved
@@ -284,6 +308,62 @@ func (ctrl *Controller) itmsConfDeleted(_ interface{}) {
ctrl.imgQueue.Add("openshift-config")
}

func (ctrl *Controller) clusterImagePolicyAdded(_ interface{}) {
if !ctrl.sigstoreAPIEnabled() {
Contributor

I don’t know, how does this usually work?

It seems to me that either the feature gates are constant throughout the life of the controller, and then we shouldn’t even be observing the CRs if the gate API is disabled; or they can change during the life of the controller, and then something should trigger a re-sync on feature gate change, and I can’t see that happening.

Contributor

MCO components use the default featuregateaccessor configuration,

featureGateAccessor := featuregates.NewFeatureGateAccess(
desiredVersion, missingVersion,
configSharedInformer.Config().V1().ClusterVersions(), configSharedInformer.Config().V1().FeatureGates(),
recorder,
)
go featureGateAccessor.Run(ctx)
This means that, if the feature gate levels change, the controllers exit and restart.

On restart, the feature gates will be up to date with the new level.
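That lifecycle can be illustrated with a minimal stdlib-only sketch, assuming hypothetical names (the real wiring uses the featuregates accessor shown above): the controller snapshots the gate state once at startup and simply returns when notified of a change, relying on its manager to restart it with the new level.

```go
package main

import "fmt"

// runController snapshots the feature-gate state at startup; a later change
// notification makes it return (exit) so a restart picks up the new level.
func runController(sigstoreEnabled bool, gateChanged <-chan struct{}) string {
	if sigstoreEnabled {
		// With the gate on, ClusterImagePolicy informers and event
		// handlers would be registered here.
	}
	<-gateChanged // feature-gate levels changed during the controller's life
	return "exiting; gates re-read on restart"
}

func main() {
	changed := make(chan struct{})
	done := make(chan string)
	go func() { done <- runController(false, changed) }()
	close(changed) // simulate a feature-gate change at runtime
	fmt.Println(<-done)
}
```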

pkg/controller/container-runtime-config/helpers.go Outdated Show resolved Hide resolved
pkg/controller/container-runtime-config/helpers.go Outdated Show resolved Hide resolved
)
switch policy.RootOfTrust.PolicyType {
case apicfgv1alpha1.PublicKeyRootOfTrust:
keyDataDec, err := b64.StdEncoding.DecodeString(policy.RootOfTrust.PublicKey.KeyData)
Contributor

The explicit base64 handling surprises me. Wouldn’t it be possible to make the API type a []byte? At least https://github.com/kubernetes/api/blob/f3648a53522eb60ea75d70d36a50c799f7e4e23b/core/v1/types.go#L6838 suggests that might work.
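For reference, encoding/json already transparently base64-encodes and decodes []byte fields, which is why typing KeyData as []byte would remove the explicit DecodeString call. A sketch with a hypothetical local type mirroring the field:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// publicKey mirrors the shape of the API field, but with KeyData typed as
// []byte: encoding/json then handles the base64 representation itself.
type publicKey struct {
	KeyData []byte `json:"keyData"`
}

// decodeKeyData unmarshals a serialized publicKey and returns the raw bytes.
func decodeKeyData(doc string) ([]byte, error) {
	var pk publicKey
	if err := json.Unmarshal([]byte(doc), &pk); err != nil {
		return nil, err
	}
	return pk.KeyData, nil
}

func main() {
	// "Zm9vIGJhcg==" is base64 for "foo bar", as in the example CR above.
	raw, err := decodeKeyData(`{"keyData":"Zm9vIGJhcg=="}`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("decoded: %q\n", raw) // decoded: "foo bar"
}
```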

Member Author

@QiWang19 QiWang19 Feb 14, 2024

return sigstorePolicyRequirement, nil
}

func validateClusterImagePolicyWithAllowedBlockedRegistries(clusterScopePolicies map[string]signature.PolicyRequirements, allowedRegs, policyBlocked []string) error {
Contributor

An idle thought… is this sufficient?

  • If example.com is blocked, but a policy exists for example.com/repo, that effectively unblocks that namespace
  • If only example.com is allowed, but a policy exists for example.invalid, that effectively unblocks that namespace

It’s quite possible this was already discussed and designed in the enhancement.

Member Author

The enhancement only mentions rejection when the scopes and the allowed/blocked list are identical; the case where the scopes are nested inside has not been discussed.

Member Author

@mtrmac What are your thoughts on this?

  • if a blocked list exists, the clusterimagepolicy scopes must not be equal to or nested under blockedRegistries
  • if an allowed list exists, the clusterimagepolicy scopes must be nested under the allowedRegistries
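Those two rules can be sketched as follows (helper names are hypothetical; the actual validation is in validateClusterImagePolicyWithAllowedBlockedRegistries):

```go
package main

import (
	"fmt"
	"strings"
)

// nestedUnder reports whether scope equals reg or sits inside reg's namespace.
func nestedUnder(scope, reg string) bool {
	return scope == reg || strings.HasPrefix(scope, reg+"/")
}

// validateScope applies the two proposed rules: reject scopes covered by a
// blocked registry; if an allowed list exists, require the scope to fall
// under one of its entries.
func validateScope(scope string, allowed, blocked []string) error {
	for _, b := range blocked {
		if nestedUnder(scope, b) {
			return fmt.Errorf("scope %q is blocked by %q", scope, b)
		}
	}
	if len(allowed) == 0 {
		return nil
	}
	for _, a := range allowed {
		if nestedUnder(scope, a) {
			return nil
		}
	}
	return fmt.Errorf("scope %q is not under any allowed registry", scope)
}

func main() {
	// The two surprising cases from the discussion above:
	fmt.Println(validateScope("example.com/repo", nil, []string{"example.com"}))
	fmt.Println(validateScope("example.invalid", []string{"example.com"}, nil))
	fmt.Println(validateScope("example.com/app", []string{"example.com"}, nil))
}
```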

Contributor

I don’t really have a strong opinion here — on the one hand the behavior in the current version of the code is pretty surprising, OTOH handling conflicts by ~ignoring configuration and MCO just reporting something in a status is probably also not very intuitive, and minimizing the cases where configuration is ignored that way seems valuable.

But it also feels like an instance of a general design concern where the k8s universe should have a general best practice. I wouldn’t know what that is.

Contributor

(I didn’t see your last comment before)

  • if a blocked list exists, the clusterimagepolicy scopes must not be equal to or nested under blockedRegistries
  • if an allowed list exists, the clusterimagepolicy scopes must be nested under the allowedRegistries

I agree this makes sense as a consistent configuration.

What happens when we go from a consistent configuration to an inconsistent one? E.g. supporting (allowed=A; CIP scopes = [A, A/repo]), and then an admin adds a new CIP scope B.

Is the effect that we retain the current constant MachineConfigs, or would the code fall open and ignore everything?

Member Author

Is the effect that we retain the current constant MachineConfigs, or would the code fall open and ignore everything?

Retain the current constant MachineConfigs; the controller would fail to sync the new inconsistent CIP.

Contributor

Great :) In that case the extra checks proposed in #4160 (comment) look good to me.

Contributor

@mtrmac mtrmac left a comment

Just ~15-minute look over the tests for now…


f.expectCreateMachineConfigAction(mcs2)
f.expectGetMachineConfigAction(clusterimgPolicyMCs2)
f.expectCreateMachineConfigAction(clusterimgPolicyMCs2)
Contributor

No expectation of a CIP access?

(I would also think that the existing tests should now start accessing CIP, when the feature gate is hard-coded to on. … and maybe there should be tests that the feature gate works.)

Member Author

The current action verification tests only check the machineconfiguration client; they do not cover the config and operator API clients, or the list of resources intended to be filtered out. We can improve the tests to include the other clients, but not in this PR.

// filterInformerActions filters list and watch actions for testing resources.
// Since list and watch don't change resource state we can filter it to lower
// noise level in our tests.
func filterInformerActions(actions []core.Action) []core.Action {

Member Author

@@ -172,6 +172,7 @@ func createControllers(ctx *ctrlcommon.ControllerContext) []ctrlcommon.Controlle
ctx.ConfigInformerFactory.Config().V1().Images(),
ctx.ConfigInformerFactory.Config().V1().ImageDigestMirrorSets(),
ctx.ConfigInformerFactory.Config().V1().ImageTagMirrorSets(),
ctx.ConfigInformerFactory.Config().V1alpha1().ClusterImagePolicies(),
Contributor

You only want to add this if the feature gate is enabled, else the informer when it starts will try to access an API type that does not exist

Member Author

@JoelSpeed This seems OK from the CI logs? Around Starting MachineConfigController-ContainerRuntimeConfigController, there's no API access error

Contributor

W0214 19:33:25.385128       1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1alpha1.ClusterImagePolicy: the server could not find the requested resource (get clusterimagepolicies.config.openshift.io)
E0214 19:33:25.385182       1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.ClusterImagePolicy: failed to list *v1alpha1.ClusterImagePolicy: the server could not find the requested resource (get clusterimagepolicies.config.openshift.io)

This is repeated every 45 seconds through the logs showing the informer is having issues starting

Member Author

You only want to add this if the feature gate is enabled, else the informer when it starts will try to access an API type that does not exist

Tried to address this review with commit 2cc6e6d.
The featuregate will have a value only when (*Controller) Run() is called (featuregate from start.go), so I added ctrl.enableSigstoreAPIHandler() inside (*Controller) Run().
But the cluster failed to launch with TechPreview turned on. The e2e-gcp-op-techpreview logs did not show Starting MachineConfigController-ContainerRuntimeConfigController after the featuregate was observed as true, and the containerruntimeconfig controller did not start; maybe it is blocked on L234 cache.WaitForCacheSync

if ctrl.sigstoreAPIEnabled() {
ctrl.enableSigstoreAPIHandler()
listerCaches = append(listerCaches, ctrl.clusterImagePolicyListerSynced)
}
if !cache.WaitForCacheSync(stopCh, listerCaches...) {
return
}
klog.Info("Starting MachineConfigController-ContainerRuntimeConfigController")

If I do not add listerCaches = append(listerCaches, ctrl.clusterImagePolicyListerSynced),
the cluster can launch, but the controller does not react to clusterimagepolicy objects being created. It seems ctrl.configInformerFactory.Config().V1alpha1().ClusterImagePolicies().Informer().AddEventHandler did not add the listener, yet the call to AddEventHandler did not return an error.

Member Author

@JoelSpeed From the finding above #4160 (comment), I am not sure how to add a listener for the ClusterImagePolicy after the featuregate is enabled. Is it because I cannot call AddEventHandler after the sharedInformer is running?

Contributor

You should know from startup time whether the feature gate is enabled or not; the featuregateaccessor should be synced before you even get to registering your listeners. So this should be simple conditional logic around the listener initialisation and event handler adding.

Member Author

Currently in start.go, all controllers add listeners before detecting the featuregate.
I found I had to start the SharedInformerFactory again in the container runtime config controller's Run() for the AddEventHandler to take effect.

@QiWang19 QiWang19 force-pushed the cluster-policies branch 8 times, most recently from 7dd1c3d to a414fe7 Compare February 15, 2024 19:29
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 29, 2024
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 30, 2024
@yuqi-zhang
Contributor

yuqi-zhang commented Apr 2, 2024

Right, I was mostly concerned about any merge conflicts, although it looks like #4295 reverted the above, so we'll have to get that back in...

Although there might not have been any conflicts, so maybe we can ignore that 😅

Contributor

@yuqi-zhang yuqi-zhang left a comment

Did a pass and the overall structure makes sense to me. Leaving an approval for now (based on the comments, it looks like we got the FG working for this, nice!)

Main concern inline is the conflict with NodeDisruptionPolicy object, we should definitely discuss this at some point

Contributor

Are you copying these over by hand? I think these can be added to our hack/sync-crd script so they can be auto updated if the vendored content from API were to change

@@ -461,6 +464,8 @@ func calculatePostConfigChangeActionFromFileDiffs(diffFileSet []string) (actions
actions = []string{postConfigChangeActionReloadCrio}
} else if ctrlcommon.InSlice(path, filesPostConfigChangeActionRestartCrio) {
actions = []string{postConfigChangeActionRestartCrio}
} else if ctrlcommon.InSlice(filepath.Dir(path), dirsPostConfigChangeActionReloadCrio) {
Contributor

Hmm, for the initial implementation of nodeDisruptionPolicy we don't actually allow wildcarding of paths like this (so all of these will eventually get converted to an API object)

I guess since that's tech preview we can always update the API or MCO processing to allow that but just a note that these are technically conflicting
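The filepath.Dir-based matching in the diff above only covers files whose immediate parent directory is listed, not nested subdirectories; a sketch (the directory list and helper name are illustrative):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// reloadsCrio mimics the directory-based check: a changed file triggers a
// CRI-O reload when its immediate parent directory is listed. Note that
// filepath.Dir matches only the direct parent, so files in nested
// subdirectories do not match.
func reloadsCrio(path string, reloadDirs []string) bool {
	dir := filepath.Dir(path)
	for _, d := range reloadDirs {
		if dir == d {
			return true
		}
	}
	return false
}

func main() {
	dirs := []string{"/etc/containers/registries.d"}
	fmt.Println(reloadsCrio("/etc/containers/registries.d/sigstore-registries.yaml", dirs)) // true
	fmt.Println(reloadsCrio("/etc/containers/registries.d/sub/x.yaml", dirs))               // false
}
```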

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 2, 2024
@QiWang19
Member Author

QiWang19 commented Apr 8, 2024

@yuqi-zhang PTAL

Contributor

@yuqi-zhang yuqi-zhang left a comment

/lgtm

I can't speak to the details of the policies but the logic in bootstrap/containerruntime seems safe to me. Also since this is all behind FG, I'm fine with merging as is and we can follow up any concerns for GA

@@ -17,7 +17,7 @@ for crd in "${CRDS_MAPPING[@]}" ; do
done

#this one goes in manifests rather than install, but should it?
cp "vendor/github.com/openshift/api/config/v1alpha1/0000_10_config-operator_01_clusterimagepolicy-TechPreviewNoUpgrade.crd.yaml" "install/0000_10_config-operator_01_clusterimagepolicy-TechPreviewNoUpgrade.crd.yaml"
Contributor

technically this might conflict with the MCO api PR (and the locations will be moving) but for now this should be ok

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Apr 9, 2024
@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 8201133 and 2 for PR HEAD 08908bd in total

@QiWang19
Member Author

/retest-required

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Apr 10, 2024
@yuqi-zhang
Contributor

/lgtm

retagging

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Apr 10, 2024
Contributor

openshift-ci bot commented Apr 10, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: QiWang19, yuqi-zhang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 0968179 and 2 for PR HEAD cf48c60 in total

@QiWang19
Member Author

/retest

@yuqi-zhang
Contributor

/override ci/prow/e2e-hypershift

known failures due to CI outage. Has passed in the past

Contributor

openshift-ci bot commented Apr 10, 2024

@yuqi-zhang: Overrode contexts on behalf of yuqi-zhang: ci/prow/e2e-hypershift

In response to this:

/override ci/prow/e2e-hypershift

known failures due to CI outage. Has passed in the past

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-bot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 23df276 and 1 for PR HEAD cf48c60 in total

Contributor

openshift-ci bot commented Apr 11, 2024

@QiWang19: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/okd-scos-e2e-aws-ovn cf48c60 link false /test okd-scos-e2e-aws-ovn

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot bot merged commit 642fa6b into openshift:master Apr 11, 2024
16 of 17 checks passed
@openshift-bot
Contributor

[ART PR BUILD NOTIFIER]

This PR has been included in build ose-machine-config-operator-container-v4.16.0-202404120544.p0.g642fa6b.assembly.stream.el9 for distgit ose-machine-config-operator.
All builds following this will include this PR.

@QiWang19 QiWang19 changed the title OCPNODE-1632: Support Cluster ImagePolicy CRD OCPNODE-1632: Support ClusterImagePolicy CRD Aug 12, 2024