
Cannot deploy using Helm provider since 2.9 #1064

Closed
kzgrzendek opened this issue Feb 14, 2023 · 5 comments
Comments

@kzgrzendek

kzgrzendek commented Feb 14, 2023

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: latest terraform:light Docker image (digest: sha256:e581888de7fc094f49186fad27d9e0f216bf1d0a5a12d13ff940b509adbf7f19)
Provider version: 2.9
Kubernetes version: 1.24.8-1

Affected Resource(s)

  • helm_release

Terraform Configuration Files

main module:

terraform {
  backend "pg" {
  }

  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.8.0"
    }
  }
}

helm release child module:

resource "helm_release" "redcap-ext" {
  name                = "xxx"
  repository          = "xxxx"
  repository_username = var.chart_repo_infra_user
  repository_password = var.chart_repo_infra_passwd
  chart               = "xxx"
  version             = "xxxx"
  devel               = true
  namespace           = "xxxx"
  create_namespace    = true

  values = [
    file(var.values_path)
  ]
}

Debug Output

https://gist.github.com/kzgrzendek/f614e6a9ff69042b5e26c315e2d0ab37

Panic Output

No panic

Steps to Reproduce

  1. Configure any Helm chart release (e.g. ingress-nginx) with the provider at version 2.9
  2. terraform init -input=false
  3. terraform plan -out=tfplan-bootstrap -input=false
  4. terraform apply -input=false tfplan-bootstrap

Expected Behavior

The release should deploy normally, without issues.

Actual Behavior

Deployment fails with the following error message:

Plugin did not respond

Important Factoids

  • Deployed on a managed K8s cluster in OVH Public Cloud
  • Downgrading Helm provider plugin to 2.8 solved the issue

References

None

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@kzgrzendek kzgrzendek added the bug label Feb 14, 2023
@github-actions github-actions bot removed the bug label Feb 14, 2023
@BBBmau
Contributor

BBBmau commented Feb 15, 2023

Hello! Thank you for opening this issue, @kzgrzendek. Could you provide the Terraform log output when trying to deploy? You can get the output by running TF_LOG=trace terraform apply

@dzulfiikar

dzulfiikar commented Feb 21, 2023

TF_LOG=trace terraform apply

Hello @BBBmau
I have the same problem; here is the log file:
pastebin

@BBBmau
Contributor

BBBmau commented Mar 2, 2023

@Dzulfikar-git

Based on the log file, I noticed that the AWS credentials aren't fully configured.

2023-02-21T08:03:35.298Z [WARN]  unexpected data: registry.terraform.io/hashicorp/helm:stderr="Unable to locate credentials. You can configure credentials by running "aws configure"."

This could be the reason for your problem, but it may not be related to @kzgrzendek's issue.

@manumbs

manumbs commented Mar 17, 2023

@kzgrzendek In my scenario, the problem was related to a high volume of CRDs (600+) in our K8s clusters.

After adjusting the burst_limit for the Helm provider from 100 to 1000, the problem was gone.

This feature was introduced in 2.9.0: #1012
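For reference, a minimal sketch of the workaround described above. The burst_limit argument on the helm provider block was introduced in provider v2.9.0; the kubeconfig-based connection settings shown here are an assumption — use whatever cluster connection block you already have.

```hcl
provider "helm" {
  kubernetes {
    # Assumed connection settings; replace with your own cluster config.
    config_path = "~/.kube/config"
  }

  # Raises the Kubernetes client's burst limit from the default of 100,
  # which can be exhausted on clusters with many CRDs.
  burst_limit = 1000
}
```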


Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

@github-actions github-actions bot added the stale label Mar 17, 2024
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Apr 17, 2024