Terraform lifecycle of a job that has ttl_seconds_after_finished set #2531

Open
stefan-fast opened this issue Jun 21, 2024 · 0 comments · May be fixed by #2596

This is basically a copy of issue #2059, which unfortunately went stale due to inactivity. The issue is still relevant and still affects me.

Terraform version, Kubernetes provider version and Kubernetes version

Terraform version: v1.5.7
Kubernetes Provider version: v2.23.0
Kubernetes version: v1.28.9

Terraform configuration

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.18.0"
    }
  }

  required_version = ">= 1.4.0"
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}


variable "test" {
  type    = number
  default = 1
}

resource "terraform_data" "kubernetes_job_control" {
  input = var.test
}

resource "kubernetes_job" "mock_job" {
  lifecycle {
    # Intended: replace the job only when kubernetes_job_control changes.
    replace_triggered_by = [
      terraform_data.kubernetes_job_control
    ]
  }
  metadata {
    name = "mock-job"
  }
  spec {
    # Delete the finished job from the cluster immediately.
    ttl_seconds_after_finished = 0
    template {
      metadata {}
      spec {
        container {
          name  = "mock-job"
          image = "hello-world"
        }
        restart_policy = "Never"
      }
    }
    backoff_limit = 0
  }
  # Block the apply until the job has finished running.
  wait_for_completion = true
}

Question

Hello,

I have a kubernetes_job that performs some initial configuration in my database when the database is created, so I gave the job lifecycle { replace_triggered_by = [ my_database_resource.id ] }. The thing is, I don't want this job to stick around in the cluster, so I also set ttl_seconds_after_finished = 0. I assumed that with the lifecycle block in place the job would only be applied when the database needs to be recreated, but because the job deletes itself from the cluster, Terraform recreates it on every terraform apply. It looks like a bug to me, but I think it's better to ask for opinions here first.
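
For reference, my real setup looks roughly like the sketch below. The my_database resource type, the image, and the names are hypothetical stand-ins; the relevant parts are the replace_triggered_by reference to the database and the TTL on the job.

resource "my_database" "main" {
  # ... whatever actually provisions the database ...
}

resource "kubernetes_job" "db_init" {
  lifecycle {
    # Intended: re-run the init job only when the database is recreated.
    replace_triggered_by = [
      my_database.main.id
    ]
  }
  metadata {
    name = "db-init"
  }
  spec {
    # Remove the finished job from the cluster immediately; this is what
    # causes Terraform to plan a new job on every apply.
    ttl_seconds_after_finished = 0
    template {
      metadata {}
      spec {
        container {
          name  = "db-init"
          image = "db-init:latest" # hypothetical init image
        }
        restart_policy = "Never"
      }
    }
    backoff_limit = 0
  }
  wait_for_completion = true
}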

The configuration at the top of this issue is a simplified Terraform project that reproduces the problem in case someone wants to try it locally. The principle is the same: a variable controls the lifecycle trigger, and the job's ttl_seconds_after_finished is set to 0.
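
Assuming a working kubeconfig, the behavior can be observed by applying twice without changing var.test; the second apply should be a no-op but is not (annotated session):

terraform apply   # creates mock-job; once it completes, the TTL-after-finished controller deletes it
terraform apply   # expected: no changes; actual: kubernetes_job.mock_job is planned for creation again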
