Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable #1234
Comments
Up! I'm having the same issue!
same issue
Have you tried setting KUBE_CONFIG_PATH?
I found a "workaround". I was getting this error when running terraform. In my case, I had to replace my provider configuration with a different one, and then I could successfully run it again.
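As a hedged illustration only (the `module "eks"` reference, the EKS data sources, and the `config_path` starting point are assumptions, not details taken from the comment above), a replacement of this kind typically means moving from a `provider "kubernetes"` block that reads a kubeconfig file, e.g. `config_path = "~/.kube/config"`, to one that authenticates against the cluster directly:

```hcl
# Hypothetical sketch: look up the EKS cluster and authenticate with a token
# instead of reading a kubeconfig file from disk.
data "aws_eks_cluster" "this" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```

This keeps the provider from depending on a file that may not exist yet, or may be stale, on the machine running Terraform.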
This seems to me like a bug.
Sincerely,
Also affects me.
Same here, getting the same error.
Data point: Terraform 0.13, chart import, same problem, same fix (KUBE_CONFIG_PATH).
Frustrating, because I also run into connectivity issues that are likewise solved by configuring the provider without the file. So how do I do both?
Guys, we have the same issue with TF 0.14.7. Has anyone even bothered to take a look at this issue since February?
Same issue.
Same problem while running terraform destroy.
So I ran into this issue today and this article helped explain why we're all facing the issue. I hope this helps someone.
Thanks @edeediong for that link; the answer is in the comments. Fortunately, we can avoid deleting the cluster by removing the resources created by the helm or kubernetes provider from the tfstate (for example with `terraform state rm`). In my case, that is what I had to do.
+1
+1
+1
The article referenced by @edeediong was indeed helpful. When I tried using any of the environment variable solutions when applying changes (across both the infra and the workloads), the plans did not look right: they claimed resources had been deleted. I had already separated the infra and the workloads into two different modules, so the solution of using two different states was straightforward for me.

Does this put additional burden on the user to plan out applies that cut across infra and workload? Will the user end up in a situation where making a major change or migration go smoothly requires more than just an apply to each of infra and workload?
Same issue here, super annoying on Terraform v1.0.0. The workaround mentioned above works.
Hi, for my Windows environment it helped when I set the kubeconfig path environment variable in PowerShell. Then it knew where to find my configuration.
Regards
Setting `export KUBECONFIG=~/.kube/config` for a standard K3s cluster fixed the issue.
The simplest solution is to remove the kubernetes resources from the state file and then run terraform destroy. In my case, I ran terraform destroy and it deleted a few resources, but I lost RBAC access to the EKS cluster, so Terraform could not delete the other kubernetes resources such as the Jenkins helm chart. So I removed those resources from the Terraform state file (because if the cluster is gone anyway, I don't care about the installed helm charts) and then ran terraform destroy again, for example with `terraform state rm <address of the helm release>`.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity since being marked as stale.
Why was this closed without any resolution?
There has been no update since 9th August, when a hint was provided. If there is no discussion on an issue, the stale bot closes it.
If you feel the question is still valid, please open a new issue and describe the full case and how to replicate the problem.
Have the same issue while using Helm with EKS on TF 1.0.10. Setting the env variable KUBE_CONFIG_PATH helped.
I have the same issue with AKS. Can anyone reopen it?
This module is for AWS.
I understand, but from the comments, it looks like the issue was not solved.
This seems to be a problem with the Kubernetes provider, which this module (and the AKS ones out there) make use of. I'd recommend folks looking for fixes/workarounds to check out this issue in the provider repo.
This happened to me for the first time this week after switching to 0.14.7 (working my way up). The annoying thing is that my configuration is already split in two, with my EKS cluster provisioned separately from the resources that are provisioned into the EKS cluster. The provider lookup comes from a data source.
In my case I was adding a dependency from the auth module to the resource-creation module. For example, I have the gke auth module and the gke module in Terraform, and my gke_auth looked something like this:
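A hedged reconstruction of the kind of block being described (the module source, inputs, and output names are assumptions, not the commenter's actual code):

```hcl
# Hypothetical sketch: a GKE auth module wired to the cluster module,
# with an explicit depends_on on that module.
module "gke_auth" {
  source = "terraform-google-modules/kubernetes-engine/google//modules/auth"

  project_id   = var.project_id
  location     = module.gke.location
  cluster_name = module.gke.name

  # Per the comment below, removing this explicit dependency resolved the error.
  depends_on = [module.gke]
}
```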
So I removed the depends_on and it works.
The kube_config is only an option if you actually use kubectl to manage anything on the server; many of us don't, or are creating one-click setups for this.
Is there a way to solve this issue without setting KUBE_CONFIG_PATH? I want to deploy EKS + the Load Balancer Controller, but to deploy the LBC I need to set KUBE_CONFIG_PATH. After deploying EKS I need to create the config file, and only after that can I deploy the LBC. I want to deploy it without manual actions.
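One possible direction (a sketch only; the `module "eks"` output names and the use of the AWS CLI are assumptions, not something confirmed in this thread) is to let the helm provider authenticate through an exec plugin, so no kubeconfig file has to exist on disk:

```hcl
# Hypothetical sketch: fetch an EKS token via the AWS CLI at plan/apply time
# instead of pointing the provider at a kubeconfig file.
provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
    }
  }
}
```

Because the token is obtained when the provider is configured, nothing has to be written to disk between creating the cluster and installing the controller.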
I'm upgrading from a really old TF spec, and I'm getting this problem when trying to create the EKS config fresh.
This is still a problem and the issue should be re-opened IMO.
The solution is to have two separate states, one for infrastructure and one for the services applied by Helm. This means two plans and two applies. You then have to use remote state to fetch anything from the other state, which also sucks, but it means you can at least have things talking to each other.
@voycey I don't understand what you mean by that. Can you elaborate?
I think the issue here is that KUBE_CONFIG_PATH cannot be used in every situation. If you create a fresh EKS cluster, terraform runs OK without any KUBE_CONFIG_PATH. But then, if you run a change, plan suddenly starts to fail because it cannot read back some IRSA resource(s); to do that it needs a correct config. In principle the workaround is simple: just set an env variable and run plan and apply again. But this cannot be applied in all cases. What about a CD integration? I cannot create a kubeconfig file before the plan if the cluster does not exist. For example, in a GitHub workflow I would need to implement branch logic in the Terraform plan step.
I do not understand why the provider cannot resolve this on the fly. If it is already making changes on EKS, why does the subsequent update of the state need an external kube config? In my case it fails the same way in GitHub Actions.
Is this error caused by some sort of expired token?

```typescript
const helmProvider = new HelmProvider(this, "helm_provider", <HelmProviderConfig>{
  provider,
  kubernetes: <HelmProviderKubernetes>{
    host: eksConstruct.eksModule.clusterEndpointOutput,
    clusterCaCertificate: Fn.base64decode(eksConstruct.eksModule.clusterCertificateAuthorityDataOutput),
    token: eksClusterAuth.token,
  },
});
```

I know that the module's outputs are stored in the state file; do we have a way to refresh them before building the plan?
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I just updated terraform from 0.11 to 0.12. Since then I started getting errors.
Below is my eks module, Kubernetes provider, and Helm provider configuration.
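A hedged sketch of the kind of kubeconfig-file-based provider setup being described (the path follows the .kube_config.yaml mentioned below; everything else is an assumption, not the reporter's actual configuration):

```hcl
# Hypothetical sketch: both providers read a locally generated kubeconfig file.
provider "kubernetes" {
  config_path = "./.kube_config.yaml"
}

provider "helm" {
  kubernetes {
    config_path = "./.kube_config.yaml"
  }
}
```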
While running terraform plan, I am getting errors. Initially I used to create .kube_config.yaml and pass it into the providers, but now I am not even able to create the .kube_config.yaml.