This repository has been archived by the owner on Mar 25, 2022. It is now read-only.

terraform apply forces new resource #162

Open
strixBE opened this issue Feb 20, 2019 · 11 comments

Comments


strixBE commented Feb 20, 2019

Terraform Version

terraform 0.11.4
provider.opc 1.3.2

Affected Resource(s)

  • opc_compute_instance

Terraform Configuration Files

resource "opc_compute_instance" "test" {
[...]
}

Expected Behavior

When I do not change anything in my configuration, I expect a second terraform apply not to do anything.

Actual Behavior

When I create an instance using terraform apply and then re-run terraform apply, it forces a new resource to be created (the VM gets deleted and a new one is created).

Gist

https://gist.github.com/MrStrix/494b629c47669f904bac6ecfc18802dc

Steps to Reproduce

  1. Create a configuration for opc_compute_instance
  2. Run terraform apply; your VM is created
  3. Run terraform apply again; it wants to re-create the VM
@scross01 (Contributor)

Can you provide the Terraform configuration for your instance?


strixBE commented Feb 20, 2019

resource "opc_compute_storage_volume" "os" {
  name             = "${var.inst["name"]}${var.storageOS["name"]}"
  size             = "${var.storageOS["size"]}"
  bootable         = true
  image_list       = "${var.os["image_list"]}"
  image_list_entry = "${var.os["image_list_entry"]}"
}

resource "opc_compute_instance" "stage" {
  name          = "${var.inst["name"]}${var.inst["name"]}"
  label         = "${var.inst["name"]}"
  shape         = "${var.inst["shape"]}"
  image_list    = "${var.os["image_list"]}"
  hostname      = "${var.inst["name"]}"
  boot_order    = [1]
  desired_state = "inactive"
  ssh_keys      = "${var.ssh_keys}"
  depends_on    = ["opc_compute_storage_volume.os"]

  storage {
    volume = "${var.inst["name"]}${var.storageOS["name"]}"
    index  = 1
  }

  networking_info {
    index              = 0
    vnic               = "${var.inst["name"]}_eth0"
    vnic_sets          = ["${var.net["vnic_sets"]}"]
    is_default_gateway = true
    ip_network         = "${var.net["ip_network"]}"
    dns                = ["${var.inst["name"]}${var.net["dns"]}"]
    nat                = ["${var.net["nat"]}"]
    search_domains     = "${var.search_domains}"
  }
}

@scross01 (Contributor)

Remove the image_list attribute from the opc_compute_instance - this is only needed when creating an instance with local boot. As you are booting from a persistent storage volume, the image_list is not required.

Also a suggestion - rather than manually declaring the depends_on, you can reference the storage resource name directly, and Terraform will automatically determine the dependency for you.

  storage {
    volume = "${opc_compute_storage_volume.os.name}"
    index  = 1
  }
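
Putting both suggestions together, a minimal sketch of how the instance resource could look - image_list dropped and the volume referenced directly - assuming all other attributes stay as in your original configuration:

resource "opc_compute_instance" "stage" {
  # image_list removed - not needed when booting from a persistent storage volume
  # ... other attributes as in the original configuration ...

  storage {
    # referencing the volume resource lets Terraform infer the dependency,
    # so the explicit depends_on can be dropped
    volume = "${opc_compute_storage_volume.os.name}"
    index  = 1
  }
}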

@scross01 (Contributor)

desired_state = "inactive"

This is an invalid value for the desired state. To shut down an instance, the option would be desired_state = "shutdown". Note that desired_state is only applied on update; instances are initially created in the running state.
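
For reference, a minimal sketch of the corrected attribute, assuming the rest of the resource stays unchanged:

resource "opc_compute_instance" "stage" {
  # ... other attributes as in the original configuration ...

  # only applied on update; a newly created instance always starts in the running state
  desired_state = "shutdown"
}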


strixBE commented Feb 21, 2019

Although these are valuable inputs, they do not affect the problem per se. As long as the ID of the VM forces a redeploy, the VM still goes down when running terraform apply.

@scross01 (Contributor)

Did you try again after removing the image_list attribute from the opc_compute_instance? The reapply does not force a new resource for me when replicating your config with that attribute removed.

Note: the only updatable attribute for the opc_compute_instance is the desired_state. If any other attribute has changed then the instance will be re-created.


strixBE commented Feb 25, 2019

OK, we have tried several things now:

As soon as the networking_info attribute with the IP configuration or the ssh_keys attribute is set, it forces a redeploy of the VM. We could add these attributes to the lifecycle.ignore_changes meta-parameter, but then we could not change these values to trigger a redeploy.
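
To illustrate, a rough sketch of the workaround we mean (Terraform 0.11 syntax); the drawback is that intentional changes to these attributes would then no longer trigger a redeploy:

resource "opc_compute_instance" "stage" {
  # ... attributes as before ...

  lifecycle {
    # suppresses the perpetual diff, but also hides intentional changes
    # to these attributes on later applies
    ignore_changes = ["ssh_keys", "networking_info"]
  }
}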

@scross01 (Contributor)

Do you have multiple ssh_keys being passed in? I wonder if it may be the order of the keys that is causing the force-new. See if setting ssh_keys = "${sort(var.ssh_keys)}" makes a difference - the same applies to any of the list attributes under networking_info that have multiple entries.
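
A small sketch of what that could look like in the resource - here assuming ssh_keys and search_domains are the multi-entry lists (attribute names taken from the config above):

resource "opc_compute_instance" "stage" {
  # ... other attributes as before ...

  # sorting gives the list a stable order between plan runs
  ssh_keys = "${sort(var.ssh_keys)}"

  networking_info {
    # ... other networking attributes as before ...
    search_domains = "${sort(var.search_domains)}"
  }
}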


strixBE commented Feb 26, 2019

The ssh_keys and networking_info params are always in the same order.

@mtjakobczyk

This may be related to #166.
@MrStrix Could you provide us with the output from your plan (something like the Actual Behavior in #166)?


strixBE commented Apr 24, 2019

I have already provided a full terraform plan output in a gist in my initial comment: https://gist.github.com/MrStrix/494b629c47669f904bac6ecfc18802dc
