Error: cannot re-use a name that is still in use #425
Comments
I bumped into the error after cancelling a previous run (the cloud provider got stuck on fulfilling a PVC request). That left some orphan resources (svc, cm, sa, ...); after deleting them I could re-run the apply just fine.
@hazcod did you manage to solve this? I'm running into the same issue
@aqabawe not really :-(
I had the same problem, and after exploring all kinds of resources in k8s, I found a secret "sh.helm.release.v1.spa-demo.v1" that had been left behind after a partial deployment of my app "spa-demo" with a crashing container.
The workaround is definitely as stated in the previous comment: search for and delete all related Helm resources by name: secrets, services, deployments, etc. For example:
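A minimal sketch of that cleanup, assuming the spa-demo release mentioned above was installed into the default namespace (adjust the names to your own release):

```bash
# List the release-tracking secrets Helm 3 keeps for this release.
kubectl get secrets -n default | grep sh.helm.release.v1.spa-demo

# Delete the leftover secret from the failed revision, then re-run terraform apply.
kubectl delete secret sh.helm.release.v1.spa-demo.v1 -n default
```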
Same here... I started with
A possible solution is to import the existing release into the Terraform state.
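A rough sketch of that import (the resource address helm_release.example, the release name my-release, and the default namespace are all placeholder assumptions; check the provider docs for the exact import ID format your version expects):

```bash
# Import the existing Helm release into Terraform state so the provider
# manages it instead of trying to create it again.
terraform import helm_release.example default/my-release
```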
This happens on a newly built cluster, too. I suspect this is more of a
Not sure if it can help, but I just had the same error when the AWS token expired in the middle of an apply (sigh). Terraform: v0.14.8. I found this comment that solved it for me: helm/helm#4174 (comment)
Thank you, thank you! In my case, the AWS token expired after the Helm metadata Secret was created, but before the Helm release was installed. On subsequent runs, Helm would see the Secret metadata and error with:
Even though the Helm release was not installed. This was really confusing. Helm should check the metadata Secret AND the existence of the release to determine whether the release is being re-used...
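One way to spot that half-installed state is to look at the status label Helm puts on its release secrets (a sketch; the default namespace and the secrets storage backend are assumptions):

```bash
# Show Helm release secrets and their status; a release left in
# "pending-install" or "pending-upgrade" after an interrupted apply
# is usually what triggers "cannot re-use a name that is still in use".
kubectl get secrets -n default -l owner=helm \
  -o custom-columns=NAME:.metadata.name,STATUS:.metadata.labels.status
```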
This has been open for over a year. Are you planning to fix this?
I ran into that a couple of times ... usually it relates to a Terraform run that was interrupted or something similar ... as far as I know Helm creates secrets to keep track of the version of a deployment ... as @dniel and @marcellodesales already mentioned, deleting release-related resources - most times it's just the
Had a similar issue. Look for and remove any Helm-related secrets.
Setting atomic to true helps with this, but it makes debugging failed deployments annoying since it deletes all the resources.
When looking for resources left behind it might be handy to make use of the labels Helm uses.
This helped us find all the resources attached to a specific Helm release (source):
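For example, something along these lines, relying on the labels that charts following the standard conventions apply (my-release is a placeholder; charts that don't set these labels won't show up):

```bash
# Find resources that carry the conventional Helm labels for a given release.
kubectl get all,cm,secret,sa --all-namespaces \
  -l app.kubernetes.io/managed-by=Helm,app.kubernetes.io/instance=my-release
```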
same issue here
This provider seems unusable because of this issue.
Please can we update the provider to list exactly which resources it is conflicting with? It took ages to figure out that it had left over some secrets from a botched rollout.
Have you checked your state file? Is it a remote state or a local state? It's always
Also having this issue w/ the following versions:
It happens for me after a timeout / auth failure against the K8s API during Helm processing. However, taking a look at release state is properly displayed in You still have to manually
Faced this exact issue, ran the command
In my case I didn't have any Helm resources in my cluster (using What fixed it for me was deleting
I think it's worth pointing out that, at least from my perspective, this boils down to the provider's behavior being non-intuitively different than the helm CLI. If I'm using the CLI directly, it is safe, normal, and entirely expected to do something approximating:
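(Presumably a command along these lines; the release and chart names below are placeholders, not taken from the original comment:)

```bash
# Install the chart if the release doesn't exist yet, otherwise upgrade it in place.
helm upgrade --install my-release ./my-chart --namespace default
```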
And in nearly all possible states of the chart on the cluster (not yet installed, fully installed, installed but some resources missing/deleted) helm will attempt to bring the chart up to the latest version and create all of the chart's resources. But as far as I can tell, there is no way to approximate this behavior with the helm provider. It is, effectively, running

I'm going to work up a pull request to address this, but it would be good to know in advance if the maintainers here have an opinion about what kind of approach they'd like to see: a set of attributes to the

Also it would be nice to know if this provider is going to stay MPL-licensed or if it also is going to be re-licensed under the BSL, as my interest in contributing in the latter case is strictly zero.
That's the Terraform way, and this is the expected behavior. You must adopt/import externally managed resources; otherwise they are data sources. Try to create a VM manually on your cloud provider, and then try to create it from Terraform: you will get the same kind of duplication error.
This is not the issue here. In this issue, the Helm release is entirely managed by Terraform, but you still encounter the error.
As its name suggests, it handles Helm releases. So if you plan to work on this kind of "actual" resource, it seems legitimate to stay on the same "logical" resource. Otherwise you could end up creating and managing two kinds of "logical" resources pointing to the same "actual" resource.
I'm curious to know why this matters so much to you. Do you know the motivation for companies to switch to this kind of license? PS: I'm not involved in managing this Git repository. This is my own opinion as a Terraform and Helm user.
So, I was feeling a little motivated... https://registry.terraform.io/providers/n-oden/helm/latest

This is a fork from the current head commit of terraform-provider-helm, with #1247 applied to it. It is not my intention to maintain a long-lived fork of the provider, but given that hashicorp is notoriously slow at reviewing external PRs, I figured it might be useful to some of the people who have commented in this issue: if you find that it addresses your use case, please 👍 the pull request!

The TLDR is that you can enable idempotent installs that are safe against several of the failure modes described in this issue by adding:

```hcl
upgrade {
  enable  = true
  install = true
}
```

This triggers behavior that is, as close as I can make it, identical to running

Use this entirely at your own risk, and only after reading the documentation and ideally the pull request itself to understand the behavior being enabled. Using this in a production environment before any of the actual provider maintainers have commented is emphatically not recommended-- even if the PR is eventually merged, the semantics (and hence the saved state) might change during the review process.
To my intense frustration, the review process for #1247 appears to have entirely stalled out since September: I've made repeated appeals both in the PR and through back channels, to no avail. I've addressed all of the comments made thus far, but no one seems to be particularly interested in getting this merged or even reliably approving the automatic tests.

So while all of the same caveats I made previously still apply, I have made release v0.0.2 of n-oden/terraform-provider-helm available via the public terraform registry. This is built from the current main/HEAD commit of the upstream repo, with #1247 cherry-picked on top. At least in theory, this should be a drop-in replacement for the upstream provider, but one that enables "upgrade mode" installs: https://registry.terraform.io/providers/n-oden/helm/0.0.2

If anyone following this issue is in a position to spur any motion from inside hashicorp (even if it's a flat rejection so that I can determine if maintaining a public fork would be useful to people beside myself), I would be eternally grateful.
I was running helm via terraform and got stuck with the same issue. It was resolved by deleting the
we've actually been stuck trying this - import the resource, terraform proposes some changes, apply - and the same error appears again :(
The provider should probably allow us to force re-using the name. We run into this quite frequently too during errors.
I'm happy to announce that #1247 has been merged; the next release of the provider will have an
Closing this since the latest version release introduces it; please feel free to reopen this if that's not the case.
Terraform Version
v0.12.21
Affected Resource(s)
Terraform Configuration Files
https://github.com/ironPeakServices/infrastructure/tree/master/modules/helm
Debug Output
https://github.com/ironPeakServices/infrastructure/runs/471718289?check_suite_focus=true
Expected Behavior
successful terraform apply
Actual Behavior
Steps to Reproduce
terraform apply
in CI/CD of https://github.com/ironPeakServices/infrastructure
Important Factoids
n/a
References
https://github.com/terraform-providers/terraform-provider-helm/blob/cfda75d3bd4770fa49a2ef057fe89eb5b8d8eb69/vendor/helm.sh/helm/v3/pkg/action/install.go#L358