
Error: cannot re-use a name that is still in use #425

Closed
hazcod opened this issue Feb 27, 2020 · 33 comments


hazcod commented Feb 27, 2020

Terraform Version

v0.12.21

Affected Resource(s)

  • helm_release

Terraform Configuration Files

https://github.com/ironPeakServices/infrastructure/tree/master/modules/helm

Debug Output

https://github.com/ironPeakServices/infrastructure/runs/471718289?check_suite_focus=true

Expected Behavior

successful terraform apply

Actual Behavior

Error: cannot re-use a name that is still in use

  on modules/helm/istio.tf line 9, in resource "helm_release" "istio_init":
   9: resource "helm_release" "istio_init" {

Steps to Reproduce

  1. terraform apply in CI/CD of https://github.com/ironPeakServices/infrastructure

Important Factoids

n/a

References

https://github.com/terraform-providers/terraform-provider-helm/blob/cfda75d3bd4770fa49a2ef057fe89eb5b8d8eb69/vendor/helm.sh/helm/v3/pkg/action/install.go#L358


marpada commented Apr 25, 2020

I bumped into the error after cancelling a previous run (the cloud provider got stuck fulfilling a PVC request). That left some orphan resources (svc, cm, sa, ...); after deleting them I could re-run the apply just fine.


aqabawe commented Jul 1, 2020

@hazcod did you manage to solve this? I'm running into the same issue


hazcod commented Jul 1, 2020

@aqabawe not really :-(

aareet added the bug label Jul 2, 2020

dniel commented Jul 4, 2020

I had the same problem. After exploring all kinds of resources in k8s, I found a secret "sh.helm.release.v1.spa-demo.v1" that had been left behind after a partial deployment of my app "spa-demo" with a crashing container.
I had to examine every resource type I could think of to look for leftover resources from the failed deploy and remove them manually. I found most of them quickly, but the secret was easy to miss, as was the service account.
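For reference, the Helm 3 release metadata secrets can be listed directly instead of hunting through every resource type; a rough sketch (my-namespace is a placeholder, spa-demo is the release above):

# List Helm 3 release metadata secrets across all namespaces
$ kubectl get secrets -A --field-selector type=helm.sh/release.v1

# Narrow it down to one release via the "name" label Helm sets
$ kubectl get secrets -n my-namespace -l name=spa-demo

# Delete the orphaned metadata so the release name can be reused
$ kubectl delete secret sh.helm.release.v1.spa-demo.v1 -n my-namespace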

@pcfleischer

The workaround is definitely as stated in the previous comment:

search for and delete all related Helm resources by name: secrets, services, deployments, etc.

@marcellodesales

Same here... I started with helm list and then tried to find objects... Remember to use the appropriate namespace...

  • Deleted deployment, found secret
$ helm list -n kube-system
NAME        	NAMESPACE  	REVISION	UPDATED                             	STATUS  	CHART             	APP VERSION

$ kubectl get deployments -n kube-system
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
coredns        2/2     2            2           92m
external-dns   0/1     1            0           3m47s

$ kubectl delete deployment -n kube-system external-dns
deployment.apps "external-dns" deleted

$ kubectl -n kube-system get secrets | grep helm
sh.helm.release.v1.external-dns.v1               helm.sh/release.v1                    1      2m54s

$ kubectl delete secret sh.helm.release.v1.external-dns.v1 -n kube-system
secret "sh.helm.release.v1.external-dns.v1" deleted
  • Everything worked again after re-running terraform apply

@carbohydrates

A possible solution is to import the existing release into Terraform state: $ terraform import helm_release.example default/example-name.
https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#import
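For reference, the full flow would look something like this sketch (the resource address and namespace/name are the example values from those docs; substitute your own):

$ terraform import helm_release.example default/example-name
$ terraform plan    # review what Terraform wants to change after the import
$ terraform apply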

@milosgajdos

This happens on a newly built cluster, too. I suspect this is more of a Helm shenanigan than a problem in the tf provider, which merely leverages Helm.


jtheo commented Mar 31, 2021

This happens on a newly built cluster, too. I suspect this is more of a Helm shenanigan than a problem in the tf provider, which merely leverages Helm.

Not sure if it can help, but I just had the same error when the AWS Token expired in the middle of an apply (sigh).

Terraform: v0.14.8
helm_provider: 1.3.2
helm: 3.5.3

I found this comment, which solved it: helm/helm#4174 (comment)

With Helm 3, all release metadata is saved as Secrets in the same namespace as the release.
If you get "cannot re-use a name that is still in use", you may need to check for orphan secrets and delete them:

kubectl -n ${NAMESPACE} delete secret -lname=${HELM_RELEASE}


aidan-melen commented Jul 14, 2021

With Helm 3, all release metadata is saved as Secrets in the same namespace as the release.

Thank you Thank you! In my case, the AWS Token expired after the Helm metadata Secret was created, but before the Helm release was installed. On subsequent runs, Helm would see the Secret metadata and error with:

Error: cannot re-use a name that is still in use

Even though the Helm release was not installed. This was really confusing.

Helm should check the metadata Secret AND the existence of the release to determine if the release is being re-used...


lbornov2 commented Aug 4, 2021

This has been open for over a year. Are you planning to fix this?


Dniwdeus commented Aug 6, 2021

I ran into this a couple of times. It usually relates to a Terraform run that was interrupted or something similar. As far as I know, Helm creates secrets to keep track of the version of a deployment. As @dniel and @marcellodesales already mentioned, deleting the release-related resources (most of the time it's just the sh.helm.release.v1.MYRELEASENAME.v1 secret in the target namespace) will solve the problem. Not sure if this is actually an open bug (anymore) 🤔; one might say it's known behavior by now ;))

@johnwesley

Had a similar issue. Look for and remove any Helm-related secrets.


nwsparks commented Oct 7, 2021

Setting atomic to true helps with this, but it makes debugging failed deployments annoying since it deletes all of the resources.
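For reference, that option goes on the helm_release resource itself; a minimal sketch (the name, repository, and chart values here are hypothetical):

resource "helm_release" "example" {
  name       = "example"
  repository = "https://charts.example.com"
  chart      = "example-chart"

  # Roll back and clean up automatically if the install or upgrade fails,
  # so no half-created release is left behind to block the next run.
  atomic = true
}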

@mathisve

When looking for resources left behind, it can be handy to use the labels Helm sets.

kubectl get all -A -l app.kubernetes.io/managed-by=Helm shows you the resources created by Helm (note that kubectl get all does not include Secrets, so check those separately).

@aymericbeaumet

This helped us find all the resources attached to a specific Helm release (source):

kubectl get all --all-namespaces -l='app.kubernetes.io/managed-by=Helm,app.kubernetes.io/instance=release-name'


korenlev commented Apr 3, 2022

Same issue here.


keskad commented Apr 7, 2022

This provider seems unusable because of this issue.

@Maelstromeous

Can we please update the provider to list exactly which resources it is conflicting with? It took ages to figure out that it had left some secrets behind from a botched rollout.

@mnhat3896

Have you checked your state file? Is it remote state or local state? The plan always shows an add instead of a change because the helm_release resource is no longer in the state file. I'm not 100% sure, but I hope this helps somehow.


csjiang commented Oct 18, 2022

Also having this issue w/ the following versions:

Terraform v0.15.3
on linux_arm64
+ provider registry.terraform.io/hashicorp/aws v4.35.0
+ provider registry.terraform.io/hashicorp/helm v2.7.1
+ provider registry.terraform.io/hashicorp/kubernetes v2.6.1
+ provider registry.terraform.io/hashicorp/local v2.2.3
+ provider registry.terraform.io/hashicorp/template v2.2.0

@loganmzz

It happens for me after a timeout / auth failure against the K8s API during Helm processing.

However, taking a look at the release, its state is properly displayed as pending-install. So even if it won't unblock things, the Helm Terraform provider shouldn't report a re-use issue but a pending one.

You still have to manually uninstall (for a first release) or rollback, but the provider would then give a clear explanation of the situation and point you to the remediation process instead of leaving you in the dark...
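For reference, a release stuck like this can be confirmed and cleared from the Helm CLI; a rough sketch (namespace and release name are placeholders):

$ helm list -n my-namespace --pending          # releases stuck in a pending-* state
$ helm status my-release -n my-namespace       # STATUS shows pending-install
$ helm uninstall my-release -n my-namespace    # first install: remove it entirely
$ helm rollback my-release -n my-namespace     # later revisions: roll back instead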


solomonraj-a-presidio commented May 2, 2023

Faced this exact issue. Ran the command kubectl get all --all-namespaces -l='app.kubernetes.io/managed-by=Helm,app.kubernetes.io/instance=release-name' mentioned by @aymericbeaumet, which showed the orphaned secret, but even after deleting it I couldn't get it working.


mluds commented Jul 31, 2023

In my case I didn't have any Helm resources in my cluster (using kubectl get all -A -l app.kubernetes.io/managed-by=Helm).

What fixed it for me was deleting .terraform.lock.hcl and .terragrunt-cache (we're also using Terragrunt).


n-oden commented Aug 28, 2023

I think it's worth pointing out that, at least from my perspective, this boils down to the provider's behavior being non-intuitively different than the helm CLI.

If I'm using the CLI directly, it is safe, normal, and entirely expected to do something approximating:

helm upgrade --install --atomic mychartinstall myrepo/mychart

And in nearly all possible states of the chart on the cluster (not yet installed, fully installed, installed but some resources missing/deleted) helm will attempt to bring the chart up to the latest version and create all of the chart's resources.

But as far as I can tell, there is no way to approximate this behavior with the helm provider. It is, effectively, running helm install rather than helm upgrade --install, and if there is no record of the chart in terraform state but there are any helm metadata secrets present on the cluster (for example if a CI/CD system has installed the chart before terraform ran, or if a previous terraform run failed without the atomic option being set), you will get the "Cannot re-use a name that is still in use" error.

I'm going to work up a pull request to address this, but it would be good to know in advance if the maintainers here have an opinion about what kind of approach they'd like to see: a set of attributes to the helm_release resource that lets the user specify "upgrade" behavior, or a new resource since this is a fairly different use case. (Absent any direction I'll try the former.)

Also it would be nice to know if this provider is going to stay MPL-licensed or if it also is going to be re-licensed under the BSL, as my interest in contributing in the latter case is strictly zero.

@loganmzz

But as far as I can tell, there is no way to approximate this behavior with the helm provider. It is, effectively, running helm install rather than helm upgrade --install, and if there is no record of the chart in terraform state

That's the Terraform way, and on this point it is the expected behavior. You must adopt/import externally managed resources, or otherwise treat them as a data source.

Try manually creating a VM on your cloud provider and then creating it from Terraform; you will get the same kind of duplication error.

but there are any helm metadata secrets present on the cluster (for example if a CI/CD system has installed the chart before terraform ran, or if a previous terraform run failed without the atomic option being set), you will get the "Cannot re-use a name that is still in use" error.

This is not the issue here. In this issue, the Helm release is entirely managed by Terraform, but you still encounter the error.

I'm going to work up a pull request to address this, but it would be good to know in advance if the maintainers here have an opinion about what kind of approach they'd like to see: a set of attributes to the helm_release resource that lets the user specify "upgrade" behavior, or a new resource since this is a fairly different use case. (Absent any direction I'll try the former.)

As its name suggests, it handles Helm releases. So if you plan to work on this kind of "actual" resource, it seems legitimate to stay on the same "logical" resource. Otherwise you could end up creating and managing two kinds of "logical" resources pointing to the same "actual" resource.

Also it would be nice to know if this provider is going to stay MPL-licensed or if it also is going to be re-licensed under the BSL, as my interest in contributing in the latter case is strictly zero.

I'm curious to know why this matters so much to you. Do you know the motivation for companies to switch to these kinds of licenses?

PS: I'm not involved in managing this Git repository. This is my own opinion as a Terraform and Helm user.


n-oden commented Sep 7, 2023

So, I was feeling a little motivated...

https://registry.terraform.io/providers/n-oden/helm/latest

This is a fork from the current head commit of terraform-provider-helm, with #1247 applied to it. It is not my intention to maintain a long-lived fork of the provider, but given that hashicorp is notoriously slow at reviewing external PRs, I figured it might be useful to some of the people who have commented in this issue: if you find that it addresses your use case, please 👍 the pull request!

The TLDR is that you can enable idempotent installs that are safe against several of the failure modes described in this issue by adding an upgrade block to the resource:

upgrade {
  enable  = true
  install = true
}

This triggers behavior that is, as close as I can make it, identical to running helm upgrade --install myreleasename repo/chart using the helm CLI: the release will be installed if it is not there already, but if it is, helm will attempt to "upgrade" it to the configuration specified in the resource and then save the results in terraform state. This should be proof against situations where either a previous run of the provider was interrupted in a way that prevented cleanup, or where an external system (e.g. CI/CD) has installed the release before terraform could be run.

Use this entirely at your own risk, and only after reading the documentation and ideally the pull request itself to understand the behavior being enabled. Using this in a production environment before any of the actual provider maintainers have commented is emphatically not recommended-- even if the PR is eventually merged, the semantics (and hence the saved state) might change during the review process.


n-oden commented Jan 17, 2024

To my intense frustration, the review process for #1247 appears to have entirely stalled out since September: I've made repeated appeals both in the PR and through back channels, to no avail. I've addressed all of the comments made thus far, but no one seems to be particularly interested in getting this merged or even reliably approving the automatic tests.

So while all of the same caveats I made previously still apply, I have made release v0.0.2 of n-oden/terraform-provider-helm available via the public terraform registry. This is built from the current main/HEAD commit of the upstream repo, with #1247 cherry-picked on top. At least in theory, this should be a drop-in replacement for the upstream provider, but one that enables "upgrade mode" installs:

https://registry.terraform.io/providers/n-oden/helm/0.0.2
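Using the fork means pointing the provider source at that registry namespace; a rough sketch of the override (pin whichever version is current):

terraform {
  required_providers {
    helm = {
      source  = "n-oden/helm"
      version = "0.0.2"
    }
  }
}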

If anyone following this issue is in a position to spur any motion from inside hashicorp (even if it's a flat rejection so that I can determine if maintaining a public fork would be useful to people beside myself), I would be eternally grateful.

@vineet-cactus

I was running Helm via Terraform and got stuck with the same issue.

This was resolved by deleting the secret:

kubectl -n ${NAMESPACE} delete secret -lname=${HELM_RELEASE}

@AFriemann

A possible solution is to import the existing release into Terraform state: $ terraform import helm_release.example default/example-name. https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#import

We've actually been stuck trying this: import the resource, Terraform proposes some changes, apply, and the same error appears again :(


c4milo commented May 17, 2024

The provider should probably allow us to force re-using the name. We run into this quite frequently too during errors.


n-oden commented Aug 13, 2024

I'm happy to announce that #1247 has been merged; the next release of the provider will have an upgrade_install attribute for the helm_release resource, which should allow successful installation in many/most of these scenarios.
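For anyone picking this up once that release is out, a minimal sketch of what the new attribute is expected to look like on the resource (name, repository, and chart here are hypothetical):

resource "helm_release" "example" {
  name       = "example"
  repository = "https://charts.example.com"
  chart      = "example-chart"

  # Behave like `helm upgrade --install`: adopt an existing release or its
  # leftover metadata instead of failing with the name re-use error.
  upgrade_install = true
}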


BBBmau commented Aug 14, 2024

Closing this since the latest release introduces upgrade_install, which should help with the original issue.

Please feel free to reopen this if that's not the case

Release v2.15.0

BBBmau closed this as completed Aug 14, 2024