Consider providing a storage migration job that users can run after an upgrade. #1224
Comments
cc @tektoncd/core-maintainers too as this would apply to
And the cool thing is that it should just be a matter of updating the list of resources that need their storage migrated, so no changes are required to add additional resources.
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
/lifecycle frozen
Feature request
Consider providing a tool for upgrading the stored versions of resources in the API server after an upgrade.
Use case
For example, EventListener has two supported API versions, v1alpha1 and v1beta1:
https://github.com/tektoncd/triggers/blob/main/config/300-eventlistener.yaml#L74
https://github.com/tektoncd/triggers/blob/main/config/300-eventlistener.yaml#L39
At any given time, only one storage version can be designated in a CRD configuration like the one above. However, if a cluster started out storing v1alpha1 and v1beta1 was later introduced, the API server will contain a mix of stored versions.
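As an illustration of that constraint, here is a minimal CRD versions stanza (following the `apiextensions.k8s.io/v1` schema; the exact field values are illustrative, not copied from the Tekton manifests linked above):

```yaml
# Sketch of a CRD serving two versions: only one may set `storage: true`.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: eventlisteners.triggers.tekton.dev
spec:
  group: triggers.tekton.dev
  versions:
    - name: v1alpha1
      served: true
      storage: false   # formerly the storage version; old objects may still be persisted as v1alpha1
    - name: v1beta1
      served: true
      storage: true    # new and updated objects are persisted as v1beta1
```

Flipping the `storage` flag only affects objects written from that point on; existing objects keep their old stored version until they are rewritten, which is what a migration job would do.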
This is fine, however: as mentioned in #1223, we should provide a conversion webhook, which means a user can request either version and the correct version is returned even if the schemas differ.
As part of the cleanup, however, the previous storage versions should be upgraded so that eventually we can drop the previously supported versions (say, v1alpha1 should probably be deprecated and removed at some point). As per:
https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#previous-storage-versions
there's a tool that users could run themselves. However, in Knative, we decided to ship tools (Kubernetes Jobs) that upgrade the necessary storage versions at each release. Just wanted to throw this out there, since it seemed to resonate with users.
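The Knative-style approach above could look roughly like the following Job manifest. This is a hypothetical sketch: the image name, binary, namespace, and service account are placeholders, not actual Tekton release artifacts, and the only assumption carried over from this issue is that the migrator takes a list of resources to rewrite.

```yaml
# Hypothetical post-upgrade migration Job, modeled on Knative's
# storage-version-migration jobs. Image and args are illustrative.
apiVersion: batch/v1
kind: Job
metadata:
  name: storage-version-migration-triggers
  namespace: tekton-pipelines
spec:
  template:
    spec:
      serviceAccountName: tekton-triggers-controller  # placeholder; needs get/list/patch on the CRDs and resources
      restartPolicy: OnFailure
      containers:
        - name: migrate
          image: example.com/tektoncd/storage-version-migrator:latest  # hypothetical image
          args:
            # Each listed resource is read and written back unchanged, so the
            # API server re-persists it at the current storage version; the
            # CRD's status.storedVersions can then be pruned.
            - eventlisteners.triggers.tekton.dev
```

A user would run this once after upgrading (`kubectl apply -f` the Job and wait for completion), after which the old version can safely be removed from the CRD's served versions in a later release.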
One such example is here:
knative/eventing#3168