
google_container_cluster requires default-max-pods-per-node setting to support flexible pod CIDR #2851

Closed
bluemalkin opened this issue Jan 9, 2019 · 3 comments

Comments

@bluemalkin

bluemalkin commented Jan 9, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment. If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already.

Description

To take advantage of smaller CIDR ranges for Pods, currently in beta (https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr), the maximum number of Pods per node needs to be configurable on the google_container_cluster resource.

When using a VPC-native cluster with ip_allocation_policy, I want to set a smaller Pod range for cluster_secondary_range_name (the default is a /19). However, from the Google doc linked above:

If you do not configure the maximum number of Pods per node, a /24 CIDR range is used, and each node is assigned 256 IP addresses.

Essentially, that means we cannot take advantage of flexible Pod CIDR until we can change the maximum number of Pods per node on the default node pool.
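For illustration (the range sizes here are assumed, not taken from the configuration below): with a /22 Pods secondary range, the per-node sizing described in the doc above works out as follows.

  /22 Pods secondary range                  = 1024 addresses
  default (/24 per node, 256 addresses)     = at most 4 nodes
  32 Pods per node (/26 per node, 64 IPs)   = at most 16 nodes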

I actually have remove_default_node_pool = "true" set and use a separate google_container_node_pool resource with the max_pods_per_node attribute set. But there is currently no way to stop or limit the default pool that gets created and then deleted afterwards.
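As a sketch only, this is roughly how a cluster-level setting mirroring gcloud's --default-max-pods-per-node flag could look; the attribute name below is assumed and does not exist in the provider at the time of writing:

resource "google_container_cluster" "ops" {
  provider = "google-beta"

  # ... other settings as in the full configuration below ...

  # Assumed attribute name, mirroring gcloud's --default-max-pods-per-node flag.
  # It would cap Pods on the default node pool so that each node reserves a
  # smaller per-node block from the Pods secondary range.
  default_max_pods_per_node = 32

  ip_allocation_policy {
    services_secondary_range_name = "default-services"
    cluster_secondary_range_name  = "default-pods"
  }
}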

New or Affected Resource(s)

  • google_container_cluster

Potential Terraform Configuration

resource "google_container_cluster" "ops" {
  provider = "google-beta"

  name = "ops"
  region = "australia-southeast1"

  network            = "default"
  subnetwork         = "default-subnet"
  min_master_version = "${data.google_container_engine_versions.versions.latest_master_version}"
  node_version       = "${data.google_container_engine_versions.versions.latest_node_version}"
  initial_node_count = 1

  ip_allocation_policy {
    services_secondary_range_name = "default-services"
    cluster_secondary_range_name  = "default-pods"
  }

  remove_default_node_pool = "true"
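  # Note: the default node pool is still created first (with initial_node_count)
  # before it is removed, and there is currently no way to limit its per-node
  # Pod range.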

  private_cluster_config {
    enable_private_endpoint = false
    enable_private_nodes    = true
    master_ipv4_cidr_block  = "10.0.0.0/28"
  }

  master_authorized_networks_config {
    cidr_blocks = [
      {
        cidr_block   = "0.0.0.0/0"
        display_name = "world"
      },
    ]
  }
}

resource "google_container_node_pool" "ops" {
  provider = "google-beta"

  name               = "ops"
  region             = "australia-southeast1"
  cluster            = "${google_container_cluster.ops.name}"
  version            = "${data.google_container_engine_versions.versions.latest_node_version}"
  initial_node_count = 1
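  # Caps Pods on this separately managed pool; there is no equivalent setting
  # for the cluster's default node pool yet.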
  max_pods_per_node  = 32

  node_config {
    machine_type = "n1-standard-2"
    image_type   = "COS"

    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }

  autoscaling {
    min_node_count = 0
    max_node_count = 2
  }

  management {
    auto_repair  = "true"
    auto_upgrade = "true"
  }

  depends_on = ["google_container_cluster.ops"]
}

Error

From the Google console:

Status details
(1) deploy error: Not all instances running in IGM after 35m6.486038234s. Expect 1. Current errors: [IP_SPACE_EXHAUSTED]: Instance 'xxxxx' creation failed: IP space of 'projects/xxxx/regions/australia-southeast1/subnetworks/xxxx' is exhausted.
(2) deploy error: Not all instances running in IGM after 35m9.674823813s. Expect 1. Current errors: [IP_SPACE_EXHAUSTED]: Instance 'xxxx' creation failed: IP space of 'projects/xxxx/regions/australia-southeast1/subnetworks/xxx' is exhausted.
@ghost ghost added the enhancement label Jan 9, 2019
@bluemalkin bluemalkin changed the title google_container_cluster requires default-max-pods-per-node setting to support custom pod CIDR google_container_cluster requires default-max-pods-per-node setting to support flexible pod CIDR Jan 9, 2019
@emilymye
Contributor

Funny thing - we just submitted this a few days ago! See hashicorp/terraform-provider-google-beta#320

@bluemalkin
Author

@emilymye thanks! Any ETA on making a release?

@ghost

ghost commented Feb 9, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Feb 9, 2019