wait before powering on the vm? #767

Closed
aelnaggar opened this issue May 20, 2019 · 10 comments · Fixed by #990
Labels
bug Type: Bug

Comments


aelnaggar commented May 20, 2019

Hi,

First, I want to thank you for the amazing work. I've run into a strange case; I think it might be related to vSphere itself, but maybe you can help me here.

I create a vApp with some VMs controlled by it, and my problem is that Terraform tries to power on the VMs too quickly, which results in an error that the operation is not allowed in the current state. The VMs are cloned from a template and then customized on top.

I also noticed that it works fine with -parallelism=1, but to create a large number of VMs I want to take advantage of parallelism.

Is there a way to tell Terraform to wait before powering on the VMs? I tried boot_delay and overriding the HA parameter, but nothing worked. Any help would be appreciated. Thanks :)

Terraform Version

Terraform v0.11.14

  • provider.ansible v1.0.0
  • provider.external v1.1.2
  • provider.vsphere v1.11.0

vSphere Provider Version

.
├── provider.ansible
├── provider.external
└── provider.vsphere

Affected Resource(s)

  • vsphere_virtual_machine


Debug Output
2019-05-20T04:37:13.119-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:13 [DEBUG] Reconfiguring virtual machine "new-project-dev/new-project-dev-ngx-1"
2019-05-20T04:37:13.187-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:13 [DEBUG] Looking for OS family for guest ID "rhel7_64Guest"
2019-05-20T04:37:13.195-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:13 [DEBUG] OSFamily: family for "rhel7_64Guest" is "linuxGuest"
2019-05-20T04:37:13.195-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:13 [DEBUG] Sending customization spec to virtual machine "new-project-dev/new-project-dev-plf-1"
2019-05-20T04:37:13.299-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:13 [DEBUG] OSFamily: family for "rhel7_64Guest" is "linuxGuest"
2019-05-20T04:37:13.299-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:13 [DEBUG] Sending customization spec to virtual machine "new-project-dev/new-project-dev-ora-1"
2019-05-20T04:37:13.385-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:13 [DEBUG] Looking for OS family for guest ID "rhel7_64Guest"
2019-05-20T04:37:13.526-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:13 [DEBUG] OSFamily: family for "rhel7_64Guest" is "linuxGuest"
2019-05-20T04:37:13.526-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:13 [DEBUG] Sending customization spec to virtual machine "new-project-dev/new-project-dev-ngx-1"
2019-05-20T04:37:14.266-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:14 [DEBUG] Powering on virtual machine "new-project-dev/new-project-dev-plf-1"
2019/05/20 04:37:14 [TRACE] dag/walk: vertex "root", waiting for: "provider.vsphere (close)"
2019/05/20 04:37:14 [TRACE] dag/walk: vertex "provisioner.local-exec (close)", waiting for: "vsphere_virtual_machine.proxy"
2019/05/20 04:37:14 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "vsphere_virtual_machine.platform"
2019/05/20 04:37:14 [TRACE] dag/walk: vertex "provider.vsphere (close)", waiting for: "vsphere_virtual_machine.platform"
2019/05/20 04:37:14 [TRACE] root: eval: *terraform.EvalWriteState
2019/05/20 04:37:14 [TRACE] root: eval: *terraform.EvalApplyProvisioners
2019/05/20 04:37:14 [TRACE] root: eval: *terraform.EvalIf
2019/05/20 04:37:14 [TRACE] root: eval: *terraform.EvalWriteState
2019/05/20 04:37:14 [TRACE] root: eval: *terraform.EvalWriteDiff
2019/05/20 04:37:14 [TRACE] root: eval: *terraform.EvalApplyPost
2019/05/20 04:37:14 [ERROR] root: eval: *terraform.EvalApplyPost, err: 1 error occurred:
        * vsphere_virtual_machine.platform: error powering on virtual machine: The operation is not allowed in the current state.

2019/05/20 04:37:14 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error occurred:
        * vsphere_virtual_machine.platform: error powering on virtual machine: The operation is not allowed in the current state.

2019/05/20 04:37:14 [TRACE] [walkApply] Exiting eval tree: vsphere_virtual_machine.platform
2019-05-20T04:37:14.377-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:14 [DEBUG] Powering on virtual machine "new-project-dev/new-project-dev-ora-1"
2019-05-20T04:37:14.530-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 04:37:14 [DEBUG] Powering on virtual machine "new-project-dev/new-project-dev-ngx-1"
vsphere_virtual_machine.oracle: Still creating... (20s elapsed)
vsphere_virtual_machine.proxy: Still creating... (20s elapsed)
2019/05/20 04:37:19 [TRACE] root: eval: *terraform.EvalWriteState
2019/05/20 04:37:19 [TRACE] root: eval: *terraform.EvalApplyProvisioners
2019/05/20 04:37:19 [TRACE] root: eval: *terraform.EvalIf
2019/05/20 04:37:19 [TRACE] root: eval: *terraform.EvalWriteState
2019/05/20 04:37:19 [TRACE] root: eval: *terraform.EvalWriteDiff
2019/05/20 04:37:19 [TRACE] root: eval: *terraform.EvalApplyPost
2019/05/20 04:37:19 [ERROR] root: eval: *terraform.EvalApplyPost, err: 1 error occurred:
        * vsphere_virtual_machine.oracle: error powering on virtual machine: The operation is not allowed in the current state.

2019/05/20 04:37:19 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error occurred:
        * vsphere_virtual_machine.oracle: error powering on virtual machine: The operation is not allowed in the current state.

2019/05/20 04:37:19 [TRACE] [walkApply] Exiting eval tree: vsphere_virtual_machine.oracle
2019/05/20 04:37:19 [TRACE] root: eval: *terraform.EvalWriteState

Expected Behavior

Wait before trying to power on the VM.

Actual Behavior

Terraform tries to power on the VMs too quickly.

With -parallelism=1

Example output when running with -parallelism=1, which works perfectly:

2019-05-20T06:23:14.316-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] applyDeviceChange: Device list before changes: ide-200,ide-201,ps2-300,pci-100,sio-400,keyboard-600,pointing-700,video-500,vmci-12000,pvscsi-1000,cdrom-3002,disk-1000-0,floppy-8000,ethernet-0
2019-05-20T06:23:14.316-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] applyDeviceChange: Device list after changes: ide-200,ide-201,ps2-300,pci-100,sio-400,keyboard-600,pointing-700,video-500,vmci-12000,pvscsi-1000,disk-1000-0,floppy-8000,ethernet-0
2019-05-20T06:23:14.316-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] CdromPostCloneOperation: Post-clone final resource list:
2019-05-20T06:23:14.317-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] CdromPostCloneOperation: Device list at end of operation: ide-200,ide-201,ps2-300,pci-100,sio-400,keyboard-600,pointing-700,video-500,vmci-12000,pvscsi-1000,disk-1000-0,floppy-8000,ethernet-0
2019-05-20T06:23:14.317-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] CdromPostCloneOperation: Device config operations from post-clone: (remove: *types.VirtualCdrom at key 3002)
2019-05-20T06:23:14.317-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] CdromPostCloneOperation: Operation complete, returning updated spec
2019-05-20T06:23:14.317-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] vsphere_virtual_machine (ID = 422ed288-97b4-5b8a-15d5-649efce60015): Final device list: ide-200,ide-201,ps2-300,pci-100,sio-400,keyboard-600,pointing-700,video-500,vmci-12000,pvscsi-1000,disk-1000-0,floppy-8000,ethernet-0
2019-05-20T06:23:14.317-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] vsphere_virtual_machine (ID = 422ed288-97b4-5b8a-15d5-649efce60015): Final device change cfgSpec: (edit: *types.VirtualVmxnet3 at key 4000),(remove: *types.VirtualCdrom at key 3002)
2019-05-20T06:23:14.317-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] Reconfiguring virtual machine "new-project-dev/new-project-dev-ngx-1"
2019-05-20T06:23:14.544-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:14 [DEBUG] Powering on virtual machine "new-project-dev/new-project-dev-ngx-1"
2019/05/20 06:23:15 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "vsphere_virtual_machine.proxy"
vsphere_virtual_machine.proxy: Still creating... (10s elapsed)
2019/05/20 06:23:18 [TRACE] dag/walk: vertex "provider.vsphere (close)", waiting for: "vsphere_virtual_machine.proxy"
2019/05/20 06:23:18 [TRACE] dag/walk: vertex "provisioner.local-exec (close)", waiting for: "vsphere_virtual_machine.oracle"
2019/05/20 06:23:18 [TRACE] dag/walk: vertex "root", waiting for: "provisioner.local-exec (close)"
2019-05-20T06:23:19.731-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:19 [DEBUG] Fetching properties for VM "new-project-dev/new-project-dev-ngx-1"
2019-05-20T06:23:19.741-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:19 [DEBUG] Skipping IP waiter for VM "new-project-dev/new-project-dev-ngx-1"
2019-05-20T06:23:19.741-0500 [DEBUG] plugin.terraform-provider-vsphere_v1.11.0_x4: 2019/05/20 06:23:19 [DEBUG] Waiting for an available IP address on VM "new-project-dev/new-project-dev-ngx-1" (routable= true, timeout = 5m)
2019/05/20 06:23:20 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "vsphere_virtual_machine.proxy"
2019/05/20 06:23:23 [TRACE] dag/walk: vertex "root", waiting for: "provisioner.local-exec (close)"
2019/05/20 06:23:23 [TRACE] dag/walk: vertex "provider.vsphere (close)", waiting for: "vsphere_virtual_machine.proxy"
2019/05/20 06:23:23 [TRACE] dag/walk: vertex "provisioner.local-exec (close)", waiting for: "vsphere_virtual_machine.oracle"
2019/05/20 06:23:25 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "vsphere_virtual_machine.proxy"
vsphere_virtual_machine.proxy: Still creating... (20s elapsed)
2019/05/20 06:23:28 [TRACE] dag/walk: vertex "root", waiting for: "provisioner.local-exec (close)"
2019/05/20 06:23:28 [TRACE] dag/walk: vertex "provisioner.local-exec (close)", waiting for: "vsphere_virtual_machine.oracle"
2019/05/20 06:23:28 [TRACE] dag/walk: vertex "provider.vsphere (close)", waiting for: "vsphere_virtual_machine.proxy"
2019/05/20 06:23:30 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "vsphere_virtual_machine.proxy"
2019/05/20 06:23:33 [TRACE] dag/walk: vertex "root", waiting for: "provisioner.local-exec (close)"
2019/05/20 06:23:33 [TRACE] dag/walk: vertex "provider.vsphere (close)", waiting for: "vsphere_virtual_machine.proxy"
2019/05/20 06:23:33 [TRACE] dag/walk: vertex "provisioner.local-exec (close)", waiting for: "vsphere_virtual_machine.oracle"
2019/05/20 06:23:35 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "vsphere_virtual_machine.proxy"
vsphere_virtual_machine.proxy: Still creating... (30s elapsed)
2019/05/20 06:23:38 [TRACE] dag/walk: vertex "root", waiting for: "provisioner.local-exec (close)"
2019/05/20 06:23:38 [TRACE] dag/walk: vertex "provider.vsphere (close)", waiting for: "vsphere_virtual_machine.proxy"
2019/05/20 06:23:38 [TRACE] dag/walk: vertex "provisioner.local-exec (close)", waiting for: "vsphere_virtual_machine.oracle"
2019/05/20 06:23:40 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "vsphere_virtual_machine.proxy"
2019/05/20 06:23:43 [TRACE] dag/walk: vertex "root", waiting for: "provisioner.local-exec (close)"
2019/05/20 06:23:43 [TRACE] dag/walk: vertex "provisioner.local-exec (close)", waiting for: "vsphere_virtual_machine.oracle"
2019/05/20 06:23:43 [TRACE] dag/walk: vertex "provider.vsphere (close)", waiting for: "vsphere_virtual_machine.proxy"
^CInterrupt received.
@aelnaggar aelnaggar changed the title wait before powering on the vm [question] wait before powering on the vm? May 20, 2019
@aelnaggar
Copy link
Author

aelnaggar commented May 20, 2019

Hey everyone, I managed to work around this by doing the following:
```hcl
resource "null_resource" "wait_for_vm" {
  provisioner "local-exec" {
    command = "sleep 30" // add +10 for each vm
  }
}

resource "vsphere_virtual_machine" "vm" {
  depends_on = ["null_resource.wait_for_vm"]
  # ...
}
```

So each subsequent VM sleeps 10 seconds longer than the one before it. By leaving 10 s between power-on calls, we avoid this issue and still benefit from parallel runs.
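
For illustration, a minimal sketch of how that staggering could be wired up with one wait resource per VM group; the resource names and the 30/40-second values are assumptions based on this workaround, not a confirmed fix for the underlying race:

```hcl
# Hypothetical sketch: each VM group depends on its own wait resource,
# and each wait sleeps 10 seconds longer than the previous one, so the
# groups' clone/power-on sequences are offset in time.
resource "null_resource" "wait_for_oracle" {
  provisioner "local-exec" {
    command = "sleep 30"
  }
}

resource "null_resource" "wait_for_platform" {
  provisioner "local-exec" {
    command = "sleep 40"
  }
}

resource "vsphere_virtual_machine" "oracle" {
  depends_on = ["null_resource.wait_for_oracle"]
  # ... rest of the oracle VM configuration ...
}

resource "vsphere_virtual_machine" "platform" {
  depends_on = ["null_resource.wait_for_platform"]
  # ... rest of the platform VM configuration ...
}
```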

it would be nice if we have this as a feature in the vsphere provider though!


ruckc commented May 20, 2019

I'm running into this today. The local-exec doesn't seem to help. It would be ideal to either give the resource a retry or a mandatory wait prior to power-on.


aelnaggar commented May 21, 2019

> I'm running into this today. The local-exec doesn't seem to help. It would be ideal to either give the resource a retry or a mandatory wait prior to power-on.

The trick is to queue the power-on operations one by one: start with sleep 30 for the first VM, then sleep 40 for the next one, 50 for the one after, and so on. Sleeping less than 30 seconds somehow still causes it to fail.

The ideal solution would be for the provider itself to have a kind of scheduler that organizes the power-on operations, makes sure each VM is powered on before moving on to the next one, and avoids the collision that happens when trying to power all of them on at the same time!
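
A sketch of one way to approximate that queuing at the configuration level; this is not a provider-side scheduler, it only serializes the VM resource groups, and instances within a group (count > 1) still power on in parallel:

```hcl
# Hypothetical sketch: chain the VM groups so each group is fully created
# (including power-on) before the next group starts cloning.
resource "vsphere_virtual_machine" "platform" {
  depends_on = ["vsphere_virtual_machine.oracle"]
  # ... rest of the platform VM configuration ...
}

resource "vsphere_virtual_machine" "proxy" {
  depends_on = ["vsphere_virtual_machine.platform"]
  # ... rest of the proxy VM configuration ...
}
```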

@aelnaggar aelnaggar changed the title [question] wait before powering on the vm? wait before powering on the vm? May 21, 2019
@bill-rich (Contributor)

Thanks @aelnaggar and @ruckc for reporting the problem. My best guess is that the issue is related to the power-on waiters for the vApp container and the Terraform provider trying to manually power on the VMs too early. Can you please post your Terraform config to help with troubleshooting and reproducing the issue?

bill-rich added the bug (Type: Bug) and waiting-response (Status: Waiting on a Response) labels May 21, 2019

aelnaggar commented May 22, 2019

@bill-rich I think that's exactly what happens. Thanks for taking the time to investigate this; here are my configs.

main.tf
```hcl
provider "vsphere" {
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
vsphere_server = "${var.vsphere_server}"
allow_unverified_ssl = true
}

terraform {
backend "consul" {
address = "consul.test.com"
scheme = "http"
datacenter = "ds"
path = "terraform_state/stl/prv-test"
}
}

data "vsphere_datacenter" "dc" {
name = "D.S"
}

data "vsphere_datastore" "datastore" {
name = "STLIBOX-NONCRIT-VMFS001"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_compute_cluster" "cluster" {
name = "SERVER_FARM"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
name = "TEST_DEVOPS"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template" {
name = "RHEL75_BASE_VMTOOLS_80G"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_vapp_container" "vapp_container" {
name = "default_env_name"
parent_resource_pool_id = "${data.vsphere_compute_cluster.cluster.resource_pool_id}"
parent_folder_id = "group-v1884419"

}

resource "vsphere_virtual_machine" "oracle" {
count = "${var.oracle_machine_count}"
name = "${var.oracle_machineNamesList[count.index]}"
resource_pool_id = "${vsphere_vapp_container.vapp_container.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"
num_cpus = 4
memory = 4048
guest_id = "${data.vsphere_virtual_machine.template.guest_id}"
scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"

network_interface {
network_id = "${data.vsphere_network.network.id}"
adapter_type = "${data.vsphere_virtual_machine.template.network_interface_types[0]}"
}

disk {
label = "disk0"
size = "80"
eagerly_scrub = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
}

clone {
template_uuid = "${data.vsphere_virtual_machine.template.id}"

customize {
linux_options {
host_name = "${var.oracle_machineNamesList[count.index]}"
domain = "corp.domain.com"
}

  network_interface {
    ipv4_address = "10.26.198.154"
    ipv4_netmask = 22
  }

  ipv4_gateway    = "10.26.198.1"
  dns_suffix_list = ["${var.virtual_machine_domain}"]
  dns_server_list = ["${var.virtual_machine_dns_servers}"]
}

}
}

resource "vsphere_virtual_machine" "platform" {
count = "${var.platform_machine_count}"
name = "${var.platform_machineNamesList[count.index]}"
resource_pool_id = "${vsphere_vapp_container.vapp_container.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"
num_cpus = 4
memory = 4048
guest_id = "${data.vsphere_virtual_machine.template.guest_id}"
scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"

network_interface {
network_id = "${data.vsphere_network.network.id}"
adapter_type = "${data.vsphere_virtual_machine.template.network_interface_types[0]}"
}

disk {
label = "disk0"
size = "80"
eagerly_scrub = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
}

clone {
template_uuid = "${data.vsphere_virtual_machine.template.id}"

customize {
linux_options {
host_name = "${var.platform_machineNamesList[count.index]}"
domain = "corp.domain.com"
}

  network_interface {
    ipv4_address = "10.26.198.155"
    ipv4_netmask = 22
  }

  ipv4_gateway    = "10.26.198.1"
  dns_suffix_list = ["${var.virtual_machine_domain}"]
  dns_server_list = ["${var.virtual_machine_dns_servers}"]
}

}
}

resource "vsphere_virtual_machine" "proxy" {
count = "${var.proxy_machine_count}"
name = "${var.proxy_machineNamesList[count.index]}"
resource_pool_id = "${vsphere_vapp_container.vapp_container.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"
num_cpus = 4
memory = 4048
guest_id = "${data.vsphere_virtual_machine.template.guest_id}"
scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"

network_interface {
network_id = "${data.vsphere_network.network.id}"
adapter_type = "${data.vsphere_virtual_machine.template.network_interface_types[0]}"
}

disk {
label = "disk0"
size = "80"
eagerly_scrub = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
}

clone {
template_uuid = "${data.vsphere_virtual_machine.template.id}"

customize {
linux_options {
host_name = "${var.proxy_machineNamesList[count.index]}"
domain = "corp.domain.com"
}

  network_interface {
    ipv4_address = "10.26.198.156"
    ipv4_netmask = 22
  }

  ipv4_gateway    = "10.26.198.1"
  dns_suffix_list = ["${var.virtual_machine_domain}"]
  dns_server_list = ["${var.virtual_machine_dns_servers}"]
}

}
}
```

vars.tf
```hcl
variable "vsphere_user" {
default = ""
}
variable "vsphere_password" {
default = ""
}
variable "vsphere_server" {
default = "vspehere.test.com"
}

variable "virtual_machine_dns_servers" {
type = "list"
default = ["10.0.0.70", "10.0.1.70"]
}

variable "Network_ID" {
default = "10.0.198.0"
}

variable "virtual_machine_domain" {
default = "test.com"
}

variable "oracle_machine_count" {
default = 3
}

variable "oracle_machineNamesList" {
default = ["default_env_name-ora-1"]
}

variable "platform_machine_count" {
default = 5
}

variable "platform_machineNamesList" {
default = ["default_env_name-plf-1"]
}

variable "proxy_machine_count" {
default = 6
}

variable "proxy_machineNamesList" {
default = ["default_env_name-ngx-1"]
}
```

ghost removed the waiting-response (Status: Waiting on a Response) label May 22, 2019

ruckc commented May 23, 2019

I can't post my config, but it's essentially like his. The moment the clone is finished, it instantly tries to power on the VM, and sometimes it can, sometimes it can't. In my build of 5 VMs, anywhere from 1 to 4 of them power on, but it's completely inconsistent which ones. It's as if the vSphere API reports the clone as successful before the VM is actually ready to power on.

Ideally, I'd ask for either a configurable sleep interval between clone success and power-on, or a retry count with a sleep interval.


aelnaggar commented Jul 4, 2019

Any updates on this, @bill-rich?

@arothste-blk (Contributor)

I crashed into this today. I'm using Terraform v0.12.18 and v1.14 of the vSphere provider. I tried using vsphere_vapp_entity to control the start groups / power-on delays, but none of those settings were reflected in the settings of the created vApp. v1.13 of the provider consistently malfunctioned for every VM in the vApp; v1.14 malfunctions for a random subset of the VMs in the vApp group. I tried setting start_action = "none" with both versions to see if I could get terraform apply to clone the VMs without powering anything on. Similar to the observation above, neither the presence of the vsphere_vapp_entity resources nor the contents of their configurations seem to have any bearing on how either version of the provider (mal)functions. Any pointers, @bill-rich?

@arothste-blk (Contributor)

Did a bit more research / wall-head-banging. I looked at the unit tests for vsphere_vapp_entity and realized that I was using the Terraform resource ID (id) and not the managed object ID (moid) in my vsphere_vapp_entity declarations. That's a bug in the docs at https://www.terraform.io/docs/providers/vsphere/r/vapp_entity.html; I'll PR that.

@aelnaggar @ruckc, you might try using a vsphere_vapp_entity to address this.
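
For reference, a minimal hypothetical sketch of that approach (Terraform 0.12 syntax, illustrative values); it assumes the VM resource's exported moid attribute and that the vApp container's id is its managed object ID:

```hcl
# Hypothetical sketch: register a VM as a vApp entity using the VM's
# managed object ID (moid) rather than its Terraform resource id.
resource "vsphere_vapp_entity" "oracle" {
  target_id    = vsphere_virtual_machine.oracle[0].moid
  container_id = vsphere_vapp_container.vapp_container.id
  start_action = "powerOn"
  start_delay  = 120
}
```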


ghost commented Apr 18, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 18, 2020