Attempted to upgrade from 2.29.0 to 2.30.0 and suddenly the provider is throwing errors on a terraform plan. Looking at the release notes here, I'm not seeing anything relevant to our current configuration that would produce this error. Simply downgrading back to 2.29.0 is sufficient as a workaround. Also worth noting, this only takes place during initial cluster creation. After a successful run on 2.29.0, I can upgrade to 2.30.0 and everything works fine.
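For anyone applying the downgrade workaround, the provider version can be pinned explicitly (a minimal sketch; the version numbers come from this report):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      # Pin to the last known-good release until the regression is resolved
      version = "= 2.29.0"
    }
  }
}
```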
│ Error: Provider configuration: cannot load Kubernetes client config
│
│ with provider["registry.terraform.io/hashicorp/kubernetes"],
│ on main.tf line 17, in provider "kubernetes":
│ 17: provider "kubernetes" {
│
│ invalid configuration: default cluster has no server defined
The issue is only present on new builds. If I run a terraform plan against an environment that already has a cluster, I get no error.
Terraform Version, Provider Version and Kubernetes Version
Affected Resource(s)
tf plan failing due to "invalid configuration" in provider
Terraform Configuration Files
Provider definition
provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  exec {
    api_version = "client.authentication.k8s.io/v1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.default.name, "--region", var.region]
    env = {
      AWS_PROFILE = var.profile
    }
  }
}
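The provider block references data.aws_eks_cluster.default, which is not shown in the report. A minimal sketch of what that lookup presumably looks like (the variable name is an assumption); note that on a brand-new build its endpoint attribute is unknown at plan time, which is consistent with the error only appearing during initial cluster creation:

```hcl
# Hypothetical data source matching the references in the provider block.
# Until the cluster exists, "endpoint" is unknown during plan.
data "aws_eks_cluster" "default" {
  name = var.cluster_name # assumed variable
}
```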
Expected Behavior
What should have happened?
A successful plan to create an EKS cluster.
Actual Behavior
What actually happened?
│ Error: Provider configuration: cannot load Kubernetes client config
│
│ with provider["registry.terraform.io/hashicorp/kubernetes"],
│ on main.tf line 17, in provider "kubernetes":
│ 17: provider "kubernetes" {
│
│ invalid configuration: default cluster has no server defined
Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
If you are interested in working on this issue or have submitted a pull request, please leave a comment