wait before powering on the vm? #767
Comments
Hey everyone, I managed to work around this by adding a staggered sleep in resource "vsphere_virtual_machine" "vm", so each subsequent VM sleeps for 20 seconds, the one after for 30, and so on. By allowing 10 s between power-on calls we avoid this issue and still benefit from having parallel runs. It would be nice to have this as a feature in the vSphere provider, though!
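The snippet above is cut off, so here is a guess at the shape of that workaround: a count-based VM clone with a shell sleep in a local-exec provisioner, where the delay grows with count.index. The variable name, data sources, and vApp container it references are placeholders (the config posted later in the thread defines similar ones), and a post-create sleep only delays whatever depends on the VM, so treat it as a sketch of the idea rather than a guaranteed fix.

```hcl
# Hypothetical reconstruction of the staggered-sleep workaround described above.
# var.vm_count, the data sources, and the vApp container are placeholders.
resource "vsphere_virtual_machine" "vm" {
  count            = "${var.vm_count}"
  name             = "vm-${count.index}"
  resource_pool_id = "${vsphere_vapp_container.vapp_container.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  guest_id         = "${data.vsphere_virtual_machine.template.guest_id}"

  network_interface {
    network_id = "${data.vsphere_network.network.id}"
  }

  disk {
    label = "disk0"
    size  = "${data.vsphere_virtual_machine.template.disks.0.size}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"
  }

  # 20 s for the first VM, 30 s for the next, 40 s for the one after, and so on
  # (requires a Unix-like shell on the machine running Terraform).
  provisioner "local-exec" {
    command = "sleep ${20 + count.index * 10}"
  }
}
```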
I'm running into this today. The local-exec doesn't seem to help. It would be ideal to either give the resource a retry or a mandatory wait prior to power-on.
The trick is to queue the power-on operations one by one: start with sleep 30 for the first VM, then sleep 40 for the next, 50 for the one after, and so on. The ideal solution is for the provider itself to have a kind of scheduler that organizes the power-on operations and makes sure each VM is powered on before moving to the next one, avoiding the collision that happens when trying to power all of them on at the same time!
Thanks @aelnaggar and @ruckc for reporting the problem. My best guess is that the issue is related to the power-on waiters for the vApp container and the Terraform provider trying to manually power on the VMs too early. Can you please post your Terraform config to help with troubleshooting and reproducing the issue?
@bill-rich I think that's exactly what happens. Thanks for taking the time to investigate. Here are my configs (block bodies omitted):

main.tf

terraform { ... }
data "vsphere_datacenter" "dc" { ... }
data "vsphere_datastore" "datastore" { ... }
data "vsphere_compute_cluster" "cluster" { ... }
data "vsphere_network" "network" { ... }
data "vsphere_virtual_machine" "template" { ... }
resource "vsphere_vapp_container" "vapp_container" { ... }
resource "vsphere_virtual_machine" "oracle" { network_interface { ... } disk { ... } clone { customize { ... } } }
resource "vsphere_virtual_machine" "platform" { network_interface { ... } disk { ... } clone { customize { ... } } }
resource "vsphere_virtual_machine" "proxy" { network_interface { ... } disk { ... } clone { customize { ... } } }

vars.tf

variable "virtual_machine_dns_servers" { ... }
variable "Network_ID" { ... }
variable "virtual_machine_domain" { ... }
variable "oracle_machine_count" { ... }
variable "oracle_machineNamesList" { ... }
variable "platform_machine_count" { ... }
variable "platform_machineNamesList" { ... }
variable "proxy_machine_count" { ... }
variable "proxy_machineNamesList" { ... }
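Since the block bodies above were lost in the paste, here is a placeholder reconstruction of the data sources and the vApp container that the VM resources reference; every name and value is illustrative, not taken from the reporter's environment.

```hcl
# Placeholder reconstruction of the main.tf skeleton above (Terraform 0.11 syntax).
# All names and values are illustrative.
data "vsphere_datacenter" "dc" {
  name = "example-dc"
}

data "vsphere_datastore" "datastore" {
  name          = "example-datastore"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_compute_cluster" "cluster" {
  name          = "example-cluster"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
  name          = "VM Network"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template" {
  name          = "example-template"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_vapp_container" "vapp_container" {
  name                    = "example-vapp"
  parent_resource_pool_id = "${data.vsphere_compute_cluster.cluster.resource_pool_id}"
}
```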
I can't post my config, but it's essentially like his. The moment the clone is finished, it instantly tries to power on the VM, and sometimes it can, sometimes it can't. In my build of 5 VMs, anywhere from 1 to 4 of them power on, but it's completely inconsistent which ones. It's like the vSphere API says the clone is successful before the VM is actually ready to power on. Ideally, I'd ask for either a configurable sleep interval between clone success and power-on, or a retry count with a sleep interval.
Any updates on this, @bill-rich?
I crashed into this today. I'm using Terraform v0.12.18 and v1.14 of the vSphere provider. I tried using vsphere_vapp_entity to control the start groups / powerOn delays; none of those settings were reflected in the settings of the created vApp. v1.13 of the provider consistently malfunctioned for every VM in the vApp; v1.14 malfunctions for a random subset of the VMs in the vApp group. I tried setting start_action = "none" with both versions to see if I could get the
Did a bit more research / wall-head-banging. Looked at the unit tests for vsphere_vapp_entity and realized that I was using the Terraform resource id (id) and not the managed object id (moid) in my vsphere_vapp_entity config. @aelnaggar @ruckc you might try using a vsphere_vapp_entity pointed at the VM's moid.
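For anyone landing here later, a minimal sketch of the fix that comment describes: a vsphere_vapp_entity whose target_id is the VM's moid (not its Terraform id). The count variable, resource names, and 30-second delay are illustrative.

```hcl
# Sketch: drive power-on through vsphere_vapp_entity, passing the VM's managed
# object ID (moid) as target_id rather than its Terraform id. var.vm_count,
# the resource names, and the 30 s delay are illustrative.
resource "vsphere_vapp_entity" "vm_entity" {
  count        = "${var.vm_count}"
  target_id    = "${element(vsphere_virtual_machine.vm.*.moid, count.index)}"
  container_id = "${vsphere_vapp_container.vapp_container.id}"
  start_action = "powerOn"
  start_delay  = 30
}
```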
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Hi,
First, I want to thank you for the amazing work. I just hit a strange case... I think it might be related to vSphere itself, but maybe you can help me here.
I just created a vApp with some VMs controlled by it,
and my problem is that Terraform tries to power on the VMs too fast, which results in an error that the VM cannot be powered on in its current state. Those VMs are cloned from a template and then I apply customization on top.
I also noticed that when I use -parallelism=1 it works well, but to create a large number of VMs I want to take advantage of parallelism.
Is there a way to tell Terraform to wait before powering on the VMs?
I tried boot_delay and overriding the HA parameter, but nothing works. Any help would be appreciated. Thanks :)
Terraform Version
Terraform v0.11.14
vSphere Provider Version
.
├── provider.ansible
├── provider.external
└── provider.vsphere
Affected Resource(s)
vsphere_virtual_machine
If this issue appears to affect multiple resources, it may be an issue with
Terraform's core, so please mention this.
Expected Behavior
Waiting before trying to power on the VM.
Actual Behavior
Terraform tries to power on the VM too fast.