Modify the deployment reconciliation logic #1041
Comments
This created issues when updating the operator from beta1 to beta3, as for the deployments the […]. I'm not sure how this operator hash is calculated and whether it changes when updating the operator again. If it does not, I'd consider a one-off failure acceptable, especially as the operator is still in a beta state. However, if that hash changes with every operator upgrade, the hash logic would break these upgrades.
Hmm... this is strange. The PR for this issue adds the hash to the deployments.metadata.labels field (which is not immutable). I think the reconciliation error was caused by #1109 and/or #1093 (where we indeed modified the labels from the […]).
Sorry about the issues it caused. I tried to warn about the breaking change(s) in the PR messages, but, as I can see, that is probably not a good notification mechanism. From now on, I'll also add the possible breaking changes to the release and CHANGELOG pages.
Indeed, in that case I have probably confused my analysis with the actual issue I had. I restarted everything and now it's fine; the logs were gone, but it showed something along the lines of […].
Yup, as a user I would expect these kinds of changes to be documented in the release notes and changelog (well, both are essentially the same for this project), and it would be great to see them there going forward.
Is your feature request related to a problem? Please describe.
We are allowing more and more configuration of the deployments, and the reconciliation logic that verifies whether they have changed is becoming too complex.
Describe the solution you'd like
We could use the same idea as the k8s core Deployment controller: add a label with the hash of the pod spec, so that a changed hash means the spec changed.
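For illustration only, a minimal sketch of that idea in Go. All names here (`hashLabel`, `podTemplateHash`, `specChanged`) are hypothetical; the core Deployment controller computes its `pod-template-hash` with an FNV-based deep hash, while this sketch just uses JSON + SHA-256 to stay self-contained:

```go
package reconcile

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// hashLabel is a hypothetical label key used to record the hash of the
// pod template on the Deployments the operator creates.
const hashLabel = "example.com/pod-template-hash"

// podTemplateHash returns a short, deterministic hash of the pod template.
func podTemplateHash(d *appsv1.Deployment) (string, error) {
	raw, err := json.Marshal(d.Spec.Template)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(raw)
	return fmt.Sprintf("%x", sum[:8]), nil
}

// specChanged reports whether the hash of the desired pod template differs
// from the hash recorded on the running Deployment.
func specChanged(desired, running *appsv1.Deployment) (bool, error) {
	want, err := podTemplateHash(desired)
	if err != nil {
		return false, err
	}
	return want != running.Labels[hashLabel], nil
}
```

When creating or updating the Deployment, the operator would set `desired.Labels[hashLabel]` to the computed value, so the next reconciliation is reduced to a single string comparison instead of a growing list of field checks.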
Additional context
The "hash strategy" to verify a configuration drift between the actual and the desired state will not address the situation in which someone manually modified the deployment.
With the "equality.Semantic.DeepEqual" (currently in use) we need to check each/every field.
Maybe using a dry-run create/update (to get all the mutated fields) together with the "hash strategy", comparing the result's hash with the hash of the running spec, would address all scenarios. The only drawback of this approach is that an extra API request is needed, but considering that it runs only when a reconciliation event arrives, the overhead can be considered negligible.
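A rough sketch of that combined approach, assuming a controller-runtime client and reusing the hypothetical `podTemplateHash`/`hashLabel` helpers from the sketch above (a dry-run create would be handled analogously):

```go
package reconcile

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// driftDetected sends the desired Deployment as a server-side dry-run
// update, so the API server returns it with all defaulted/mutated fields
// filled in, and then compares its hash with the hash recorded on the
// running Deployment. It assumes hashLabel was set from the previously
// applied (already defaulted) spec, so the two hashes are comparable.
func driftDetected(ctx context.Context, c client.Client, desired, running *appsv1.Deployment) (bool, error) {
	dryRun := desired.DeepCopy()
	dryRun.ResourceVersion = running.ResourceVersion
	if err := c.Update(ctx, dryRun, client.DryRunAll); err != nil {
		return false, err
	}
	want, err := podTemplateHash(dryRun)
	if err != nil {
		return false, err
	}
	return want != running.Labels[hashLabel], nil
}
```

The dry-run here is only used to let the API server apply defaulting and admission mutations before hashing, and it runs once per reconciliation event, which matches the point above about the overhead being negligible.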