
Suggestion - use the kubernetes provider to remove dependency on kubectl and config #353

Closed
antonosmond opened this issue Apr 17, 2019 · 6 comments

Comments


antonosmond commented Apr 17, 2019

This is just a suggestion, not an issue, but would it not make sense to use the official Terraform kubernetes provider to provision things like the aws-auth config map, instead of relying on null_resources, kubectl, and a local kubeconfig file?

I have done something like this in the past:

provider "kubernetes" {
  version                = "~> 1.5.0"
  host                   = "${data.aws_eks_cluster.main.endpoint}"
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.main.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.main.token}"
  load_config_file       = false
}

// wait until the cluster endpoint is ready
resource "null_resource" "endpoint_waiter" {
  triggers {
    endpoint = "${data.aws_eks_cluster.main.endpoint}"
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]

    environment = {
      ENDPOINT = "${replace(data.aws_eks_cluster.main.endpoint, "https://", "")}"
    }

    command = <<EOF
count=10
interval=30
while [[ $count -gt 0 ]]; do
  if nc -z "$ENDPOINT" 443; then
    exit 0
  fi
  count=$((count-1))
  if [[ $count -eq 0 ]]; then
    exit 1
  fi 
  sleep $interval
done
EOF
  }
}

resource "kubernetes_config_map" "aws_auth" {
  depends_on = [
    "null_resource.endpoint_waiter",
  ]

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data {
    mapRoles = "${data.template_file.map_roles.rendered}"
  }
}
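
For reference, the snippet above assumes EKS data sources and a mapRoles template roughly along these lines (the variable names and worker role reference here are just illustrative placeholders):

data "aws_eks_cluster" "main" {
  name = "${var.cluster_name}"
}

data "aws_eks_cluster_auth" "main" {
  name = "${var.cluster_name}"
}

// renders the mapRoles YAML consumed by the aws-auth config map;
// $${...} is escaped so it is resolved by template_file, not Terraform
data "template_file" "map_roles" {
  template = <<EOF
- rolearn: $${worker_role_arn}
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
EOF

  vars {
    worker_role_arn = "${var.worker_iam_role_arn}"
  }
}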

stijndehaes commented Apr 22, 2019

I have made a PR for this. I have tested it, and the wait until the cluster endpoint is ready is not needed, as the creation of the EKS cluster already waits for this automatically.
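
In that case the config map only needs the data sources, roughly like this (a sketch, assuming the cluster data sources read from the cluster resource the module creates, so ordering is implicit):

resource "kubernetes_config_map" "aws_auth" {
  // no null_resource waiter or explicit depends_on: the kubernetes provider and
  // this resource already depend on the cluster through the data sources, and
  // the EKS cluster resource only finishes creating once the endpoint is ready
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data {
    mapRoles = "${data.template_file.map_roles.rendered}"
  }
}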

@stijndehaes

This is the PR: #355

@antonosmond

I added the wait because I had inconsistent results, i.e. it worked sometimes but not others. If you're confident it works reliably without the wait, then I see no problem with removing it. Thanks for picking this up and submitting the PR - I was planning to do it myself but the Easter weekend got in the way and you beat me to it! 👍


stale bot commented Jan 3, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Jan 3, 2020
@max-rocket-internet

This is done

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 29, 2022