2.11 Manual CRD Update conflicts with helm-installed CRDs #1716

Closed

andrew-landsverk-win opened this issue Feb 15, 2024 · 5 comments

@andrew-landsverk-win

Please confirm the following

  • I agree to follow this project's code of conduct.
  • I have checked the current issues for duplicates.
  • I understand that the AWX Operator is open source software provided for free and that I might not receive a timely response.

Bug Summary

After updating from 2.10.0 to 2.11.0 using Helm, I noticed in the known issues section of the release notes that we need to manually run kubectl to update some CRDs. On our platform, only the first CRD (for the new meshes) was created; the rest failed with field-manager conflict errors. I am also worried that this manually applied CRD will cause issues in the future when Helm tries to update it.

AWX Operator version

2.11.0

AWX version

23.7.0

Kubernetes platform

kubernetes

Kubernetes/Platform version

1.27.8

Modifications

no

Steps to reproduce

  1. Install awx-operator using Helm at an older version such as 2.10.0
  2. Upgrade awx-operator using Helm to 2.11.0
  3. Run the provided kubectl command to update the CRDs: kubectl apply --server-side -k github.com/ansible/awx-operator/config/crd?ref=2.11.0

Expected results

The kubectl command executes successfully and all CRDs are updated accordingly.

Actual results

[user@machine:~]$ kubectl apply --server-side -k github.com/ansible/awx-operator/config/crd?ref=2.11.0
customresourcedefinition.apiextensions.k8s.io/awxmeshingresses.awx.ansible.com serverside-applied
Apply failed with 1 conflict: conflict with "helm" using apiextensions.k8s.io/v1: .spec.versions
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
Apply failed with 1 conflict: conflict with "helm" using apiextensions.k8s.io/v1: .spec.versions
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
Apply failed with 1 conflict: conflict with "helm" using apiextensions.k8s.io/v1: .spec.versions
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts

Additional information

No response

Operator Logs

No response

@kurokobo
Contributor

kurokobo commented Feb 16, 2024

Hi, thanks for filing the issue.

Short answer:

To upgrade the CRDs, as described in the error message, just append --force-conflicts to your kubectl command. This is safe in most cases:

kubectl apply --server-side -k github.com/ansible/awx-operator/config/crd?ref=2.11.0 --force-conflicts
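
If you want to verify the result afterwards (a generic check, not a command from the release notes), you can list the AWX CRDs and confirm that the new awxmeshingresses CRD is present:

kubectl get crd | grep awx.ansible.com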

Long answer:

When Helm installs the CRDs on the first deployment, "helm" is set as the field manager for the CRD fields. This prevents unintended changes to the CRDs by managers other than "helm".

$ kubectl get crd awxs.awx.ansible.com -o yaml --show-managed-fields
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  ...
  managedFields:
  - apiVersion: apiextensions.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:helm.sh/chart: {}
      f:spec:
        f:conversion:
          .: {}
          f:strategy: {}
        f:group: {}
        f:names:
          f:kind: {}
          f:listKind: {}
          f:plural: {}
          f:singular: {}
        f:scope: {}
        f:versions: {}
    manager: helm  ✅
    operation: Update
    ...
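
To quickly see which managers own fields on a CRD without dumping the whole object (an illustrative one-liner, not something from the original release notes), you can use a jsonpath query:

kubectl get crd awxs.awx.ansible.com -o jsonpath='{range .metadata.managedFields[*]}{.manager}{"\t"}{.operation}{"\n"}{end}'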

You are not "helm" of course, so when you try to upgrade the CRDs with kubectl you get errors as you faced.
The --force-conflicts option is an option to ignore these conflicts and force to apply of the CRD changes.

If there are destructive changes to the CRDs, this forced replacement may affect your CRs, but there is no reason not to upgrade the CRDs, since the Operator assumes the new CRDs are in place. I am not a member of any Ansible-related team, but it appears that the CRDs in this repository have been updated with the goal of maintaining backward compatibility as much as possible.
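
As an extra precaution before forcing the replacement (an optional step, not something the release notes require), you could export the existing CRs so they can be re-applied if anything goes wrong:

kubectl get awx,awxbackup,awxrestore -A -o yaml > awx-crs-backup.yaml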

Also, since Helm does not upgrade CRDs at all, as documented, the chance that manually replacing the CRDs will affect Helm's behavior is small, at least in the current implementation, unless Helm itself changes massively.
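
For context, here is an illustrative sketch of that standard Helm behavior, assuming the chart ships its CRDs in the crds/ directory (the release and chart names below are placeholders, not from this thread): manifests under crds/ are applied only on helm install and are skipped on helm upgrade, which is why the CRDs have to be updated manually with kubectl.

# Illustrative only; release/chart names are placeholders for your own install.
helm upgrade my-awx-operator awx-operator/awx-operator -n awx   # leaves the CRDs installed from crds/ untouched
kubectl apply --server-side --force-conflicts -k github.com/ansible/awx-operator/config/crd?ref=2.11.0   # updates the CRDs manually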

Maybe the --force-conflicts option should also be documented.

@fosterseth
Member

We may want to add some instructions for --force-conflicts to this release page:
https://github.com/ansible/awx-operator/releases/tag/2.11.0

@fosterseth
Member

fosterseth commented Feb 21, 2024

@andrew-landsverk-win can you try adding the --force-conflicts and let us know if that works for you? thank you!

@andrew-landsverk-win
Author

> @andrew-landsverk-win can you try adding the --force-conflicts and let us know if that works for you? thank you!

Looks like it works with --force-conflicts

[user@system:~/code/ansible]$ kubectl apply --server-side -k github.com/ansible/awx-operator/config/crd?ref=2.11.0 --force-conflicts
Warning: Detected changes to resource awxbackups.awx.ansible.com which is currently being deleted.
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/awxmeshingresses.awx.ansible.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com serverside-applied

Thank you!

@oraNod added the Helm label Sep 3, 2024
@oraNod
Contributor

oraNod commented Sep 4, 2024

Hi @andrew-landsverk-win

The helm chart code has moved to a new repository, ansible/awx-operator-helm. You can find more information about this move in the recent forum post about changes to the AWX operator installation methods.

We now plan to close this issue because it is no longer relevant to the code in this repository. If you think the issue is still valid and needs to be fixed, please recreate it in the ansible/awx-operator-helm repository.

Thank you.

@oraNod closed this as completed Sep 4, 2024