
After uninstalling via helm, some resources remain deployed #98

Open
jdonenine opened this issue Feb 4, 2021 · 0 comments
Labels
bug, hard

Comments

@jdonenine (Contributor) commented Feb 4, 2021

Bug Report

Description

After running helm uninstall adelphi --namespace cass-operator I expected all of the resources deployed by Adelphi to be removed, but because Helm doesn't manage custom resources, some of them are left behind.

There are good reasons why we might not want to remove the operators, but for a totally clean and isolated install and removal it would be nice to add an optional automated removal (perhaps even performed by default, assuming #93 is addressed), along the lines sketched below.
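
For reference, a minimal cleanup sketch (not part of the chart) of what such an automated removal might run, assuming the leftover resources keep the names shown in the reproduction output below:

# Delete the operator deployments left behind after helm uninstall
kubectl delete deployment argo-server cass-operator workflow-controller \
  --namespace cass-operator
# Delete their services
kubectl delete service argo-server cass-operator-metrics \
  cassandradatacenter-webhook-service workflow-controller-metrics \
  --namespace cass-operator
# CRDs also survive a helm uninstall; list them to confirm before deleting anything
kubectl get crds | grep -Ei 'cassandra|argo'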

Schema

N/A

Is this a regression?

No

Steps to reproduce

  1. Install Adelphi: helm install adelphi helm/adelphi --namespace cass-operator
  2. Allow the workflow to complete (it probably doesn't matter either way; in my case I let it complete)
  3. Uninstall Adelphi: helm uninstall adelphi --namespace cass-operator
  4. Observe that some resources from the Adelphi deployment remain, even though they are not used for anything:
% kubectl get all --namespace cass-operator
NAMESPACE       NAME                                         READY   STATUS      RESTARTS   AGE
cass-operator   pod/argo-server-5f54db96f7-wzj45             1/1     Running     0          110m
cass-operator   pod/cass-operator-77d974d75b-8m8ds           1/1     Running     0          110m
cass-operator   pod/workflow-controller-749678bd87-g4v5z     1/1     Running     0          110m
NAMESPACE       NAME                                          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
cass-operator   service/argo-server                           ClusterIP      10.43.241.203   <none>        2746/TCP                     110m
cass-operator   service/cass-operator-metrics                 ClusterIP      10.43.21.148    <none>        8383/TCP,8686/TCP            110m
cass-operator   service/cassandradatacenter-webhook-service   ClusterIP      10.43.24.41     <none>        443/TCP                      110m
cass-operator   service/workflow-controller-metrics           ClusterIP      10.43.193.23    <none>        9090/TCP                     110m
NAMESPACE       NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
cass-operator   deployment.apps/argo-server              1/1     1            1           110m
cass-operator   deployment.apps/cass-operator            1/1     1            1           110m
cass-operator   deployment.apps/workflow-controller      1/1     1            1           110m
NAMESPACE       NAME                                               DESIRED   CURRENT   READY   AGE
cass-operator   replicaset.apps/argo-server-5f54db96f7             1         1         1       110m
cass-operator   replicaset.apps/cass-operator-77d974d75b           1         1         1       110m
cass-operator   replicaset.apps/workflow-controller-749678bd87     1         1         1       110m
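
As a diagnostic sketch (assuming the release name adelphi from step 1), running helm get manifest before step 3 shows exactly which resources Helm tracks for the release; anything running in the namespace but absent from that output will survive helm uninstall:

% helm get manifest adelphi --namespace cass-operator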

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-46

@jdonenine added the bug and hard labels Feb 4, 2021
@jdonenine added this to the Backlog milestone Feb 4, 2021
@jdonenine removed this from the Backlog milestone Jun 7, 2021