Allow config_path to be computed #142

Closed
chrisjtwomey opened this issue Mar 14, 2018 · 10 comments

Comments

@chrisjtwomey

chrisjtwomey commented Mar 14, 2018

I have a resource from a provider that creates a K8s cluster and another that downloads its kube config. I can't seem to configure the Kubernetes provider with a config_path that will be computed at a later time, i.e. after the cluster is created.

Terraform Version

v0.11.3

Affected Resource(s)

  • provider kubernetes

Terraform Configuration Files

I've simplified the example below, but this is the flow:

resource "container_cluster" "test_cluster" {
  name         = "${var.cluster_name}"
  datacenter   = "${var.datacenter}"
  machine_type = "${var.machine_type}"
  public_vlan_id = "${var.public_vlan_id}"
  private_vlan_id = "${var.private_vlan_id}"
  no_subnet    = true
  workers = "${var.workers[var.num_workers]}"
}

data "container_cluster_config" "test_cluster_config" {
  depends_on = ["container_cluster.test_cluster"]
  cluster_name_id = "${container_cluster.test_cluster.name}"
}

provider "kubernetes" {
   config_path = "${data.container_cluster_config.test_cluster_config.config_file_path}"
}

resource "kubernetes_namespace" "test_namespace" {
  depends_on = ["data.container_cluster_config.test_cluster_config"]
  metadata {
    name = "test-ns"
  }
}
...
...

Expected Behavior

The Kubernetes provider should be instantiated and load config_path only after data.container_cluster_config.test_cluster_config.config_file_path is available.

Actual Behavior

The provider seems to instantiate itself prematurely with no config.

$ terraform apply
...
...
Error: Error running plan: 1 error(s) occurred:

* provider.kubernetes: Failed to load config (; default context): invalid configuration: default cluster has no server defined

Steps to Reproduce

The general flow above is the steps to reproduce. Basically, try to instantiate the provider with a config_path that has yet to be generated, i.e. one that is computed only after the cluster resource is created.

References

This is a slightly related issue: hashicorp/terraform#2430

@walterdolce

I just experienced this in a similar scenario: I create a GKE cluster first, then I want to create a k8s resource within it (in this case a kubernetes_secret), but Terraform/the provider errors with:

module.core_application_sqlite_database_backup_generator_service_account_secrets.provider.kubernetes: Failed to load config (/path/to/my/user/.kube/config; overriden context; cluster: core-application): cluster "core-application" does not exist

As the kubernetes_secret references data coming from the GKE cluster resource, I would naturally expect Terraform/the provider to respect the dependency tree/resolution logic.

@bcollard

bcollard commented Nov 7, 2018

For people stuck here, try not forcing dependencies between resources.
In the example, what happens if @Ikradex removes line #12?
depends_on = ["container_cluster.test_cluster"]
Terraform manages dependencies better on its own.

Dependencies are implicit when one resource references another's attributes, and it's better to rely on that.
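
A minimal sketch of the same data source relying only on the implicit dependency (same resource names as the original report); the attribute reference alone is enough for Terraform to order the operations:

# No explicit depends_on needed: referencing the cluster's name attribute
# already makes this data source wait for the cluster to be created.
data "container_cluster_config" "test_cluster_config" {
  cluster_name_id = "${container_cluster.test_cluster.name}"
}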

@vpereira01

For other people stuck here: you can get this error message if you have ~/.kube/config already initialized with gcloud container clusters get-credentials project. Remove ~/.kube/config and you're good to go.

@pvormittag

@vpereira01 You can also set load_config_file to false on the provider, which may be a more practical choice than deleting your ~/.kube/config setup.

Here's a snippet of a setup I use that allows both Terraform and kubectl to co-exist.

provider "kubernetes" {
  version          = "~> 1.3"
  load_config_file = false

  host                   = "https://${data.google_container_cluster.test_cluster.endpoint}"
  token                  = "${data.google_client_config.current.access_token}"
  cluster_ca_certificate = "${base64decode(data.google_container_cluster.test_cluster.master_auth.0.cluster_ca_certificate)}"
}

@ydewit

ydewit commented Jan 24, 2019

@pvormittag thanks for the pointer. Too bad AWS requires the use of a custom authenticator, so I can't get around generating a kubeconfig file and hoping it isn't deleted (Terraform doesn't seem to regenerate the local_file when the file is no longer there and there are no other changes).
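
For EKS specifically, one way to avoid the kubeconfig file entirely is to let the AWS provider generate the IAM token in-process via the aws_eks_cluster and aws_eks_cluster_auth data sources. A minimal sketch, assuming the cluster name is known statically (the variable reference below is illustrative):

data "aws_eks_cluster" "cluster" {
  name = "${var.cluster_name}"
}

data "aws_eks_cluster_auth" "cluster" {
  name = "${var.cluster_name}"
}

provider "kubernetes" {
  load_config_file       = false
  host                   = "${data.aws_eks_cluster.cluster.endpoint}"
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.cluster.token}"
}

This mirrors the GKE snippet above and keeps both the authenticator binary and the generated kubeconfig out of the Terraform run.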

@syst0m

syst0m commented Feb 25, 2019

I hit the same issue on AWS EKS.

provider "helm" {
  version        = "~> 0.7.0"
  install_tiller = true

  #  service_account = "tiller"
  #  namespace       = "kube-system"
  service_account = "${module.habito-eks.service_account}"

  namespace    = "${module.habito-eks.namespace}"
  tiller_image = "gcr.io/kubernetes-helm/tiller:v2.11.0"

  kubernetes {
    config_path = "${module.habito-eks.kubeconfig_filename}"
  }
}

provider "kubernetes" {
  version     = ">= 1.4.0"
  config_path = "${module.habito-eks.kubeconfig_filename}"
}
terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.template_file.map_users[0]: Refreshing state...
data.template_file.map_users[1]: Refreshing state...
data.template_file.map_roles: Refreshing state...
data.template_file.map_accounts: Refreshing state...
data.aws_availability_zones.available: Refreshing state...
data.aws_iam_policy_document.cluster_assume_role_policy: Refreshing state...
data.aws_caller_identity.current: Refreshing state...
data.aws_iam_policy_document.workers_assume_role_policy: Refreshing state...
data.aws_region.current: Refreshing state...
data.aws_ami.eks_worker: Refreshing state...

------------------------------------------------------------------------

Error: Error running plan: 1 error(s) occurred:

* module.habito-eks.provider.kubernetes: Failed to load config (; default context): invalid configuration: no configuration has been provided

@crubier

crubier commented Mar 19, 2019

Same error here using aws eks module and kubernetes provider.

Error: Error refreshing state: 1 error(s) occurred:

* provider.kubernetes: Failed to load config (; default context): invalid configuration: no configuration has been provided

Does anyone here have a solution? Right now it means I can't deploy automatically: I have to remove some files first, deploy once, then add the Kubernetes-related files and deploy a second time.

Edit: found a potential workaround, not sure if it works

terraform-aws-modules/terraform-aws-eks#275 (comment)

@jhoblitt
Copy link
Contributor

jhoblitt commented May 2, 2019

What version of Terraform and the provider are folks seeing this issue with? I have deployments on GKE and EKS that create the cluster and then use the kubernetes provider, and I've never seen this problem.

@paultyng

paultyng commented Aug 5, 2019

This seems like the upstream progressive apply issue: hashicorp/terraform#4149

You cannot currently (reliably) chain together a provider's config with the output of a resource.
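
For illustration, the shape that tends to break is a provider block fed from something that only exists after apply. A hypothetical sketch (the module output and file name below are made up):

# The provider must be configured during plan/refresh, but this file is only
# written during apply, so the chain cannot be resolved reliably.
resource "local_file" "kubeconfig" {
  content  = "${module.cluster.kubeconfig}" # hypothetical module output
  filename = "${path.module}/kubeconfig"
}

provider "kubernetes" {
  config_path = "${local_file.kubeconfig.filename}"
}

Feeding the provider from values that are readable at plan time (data sources, variables), as in the GKE and EKS snippets above, is the more reliable pattern reported in this thread.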

@paultyng paultyng closed this as completed Aug 5, 2019
@jakubigla

jakubigla commented Dec 25, 2019

I get this error with a full apply, so I don't think it's related to the progressive apply issue.

However, I got this working by doing this:

provider "kubernetes" {
  version     = "~> 1.10"

  load_config_file = module.eks.kubeconfig_filename != "" ? true : false
  config_path      = module.eks.kubeconfig_filename
}
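
As a side note, the conditional above already evaluates to a boolean, so in 0.12 syntax it can be written more simply as:

load_config_file = module.eks.kubeconfig_filename != ""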

@ghost ghost locked and limited conversation to collaborators Apr 21, 2020