WIP: Add Kubernetes 1.22 to CI (Debug status subresource admission errors) #11805
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: julz

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Codecov Report

```
@@           Coverage Diff           @@
##             main   #11805   +/-   ##
=======================================
  Coverage   87.81%   87.81%
=======================================
  Files         196      196
  Lines        9394     9394
=======================================
  Hits         8249     8249
  Misses        889      889
  Partials      256      256
```

Continue to review the full report at Codecov.
Well, looks like the good news is we can reproduce the problem; the bad news is we have a problem! cc @dprotaso @markusthoemmes fyi
From the logs, this is consistent with #11448:
Looks like the error is probably coming from somewhere around here. Wonder if pre-1.22 subresource requests weren't sent to webhooks and now they are?
This reverts commit 3c72a7b.

Well, that worked; let's figure out why.
Force-pushed from 5f3a248 to 53e6a08.
Force-pushed from 53e6a08 to 27cd6f8.
@julz: The following test failed, say `/retest` to rerun all failed tests:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Looks like managedFields now has a subresource field as of kubernetes/kubernetes#100970, which our types don't have. I guess we may need to bump our k8s libs (or hack a workaround) to support 1.22 🤔

Though it seems like we'd need at least the 1.22 libs to help here, since that's when the field arrived.
I think this needs the deps on knative/pkg to be aligned with the k8s 0.22+ deps - see knative/pkg#2145. See also the client problems I ran into when bumping to a newer version: knative/client#1209
The offender is actually Lines 121 to 122 in 6a54ed8, which cause the strict decoder (DisallowUnknownFields) to throw us out. Veeeeery unfortunate, as this is actually designed to only disallow unknown fields in "our" types. Flipping this flag makes everything (at least on the surface) work as expected. Thinking about the least invasive fix for this...
Confirmed that patching the vendored `ManagedFieldsEntry` type works:

```diff
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go
index d84878d7c..522336cba 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go
@@ -1158,6 +1158,15 @@ type ManagedFieldsEntry struct {
 	// FieldsV1 holds the first JSON version format as described in the "FieldsV1" type.
 	// +optional
 	FieldsV1 *FieldsV1 `json:"fieldsV1,omitempty" protobuf:"bytes,7,opt,name=fieldsV1"`
+
+	// Subresource is the name of the subresource used to update that object, or
+	// empty string if the object was updated through the main resource. The
+	// value of this field is used to distinguish between managers, even if they
+	// share the same name. For example, a status update will be distinct from a
+	// regular update using the same manager name.
+	// Note that the APIVersion field is not related to the Subresource field and
+	// it always corresponds to the version of the main resource.
+	Subresource string `json:"subresource,omitempty" protobuf:"bytes,8,opt,name=subresource"`
 }

 // ManagedFieldsOperationType is the type of operation which lead to a ManagedFieldsEntry being created.
```
Slack thread here where we landed on a different approach, but... I personally don't actually hate the idea of patching that much. It'll disappear once we get a cherry-pick 🤷
This has served its purpose: we understand the problem and have various ideas for how to fix it. I think the easiest/simplest thing for Serving (although perhaps not Tekton, unfortunately, so we'll need to think about that) is to set DisallowUnknownFields back to false, since we have schema validation performing the same job now, and that seems to best match the semantics and intended usage of upstream k8s, based on our conversations with them. I'll open a separate PR for us to discuss that.

/close
@julz: Closed this PR.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.