
Crash - Missing network_interface_ids in virtual machine caused Terraform crash and dump #5401

Closed
davidjsanders opened this issue Jan 15, 2020 · 2 comments · Fixed by #5413

davidjsanders commented Jan 15, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.19

  • provider.azurerm v1.40.0
  • provider.null v2.1.2
  • provider.random v2.2.1
  • provider.template v2.1.2

Affected Resource(s)

  • azurerm_virtual_machine

Terraform Configuration Files

The line network_interface_ids = var.vm.network-interfaces caused this issue: var.vm.network-interfaces had mistakenly been set to [""] (a plan-time guard that would catch this is sketched after the configuration):

resource "azurerm_virtual_machine" "vm-workers" {
  count = var.vm.vm-count

  depends_on = [
  ]

  resource_group_name              = var.vm.rg-name
  vm_size                          = var.vm.size
  availability_set_id              = var.vm.avset-id
  delete_os_disk_on_termination    = var.vm-os-disk.delete-on-terminate
  delete_data_disks_on_termination = var.vm-data-disk.delete-on-terminate
  location                         = var.location

  name = upper(
    format(
      "%s-%01d%s",
      var.vm.name-prefix,
      count.index + 1,
      local.l-random,
    ),
  )

  network_interface_ids = var.vm.network-interfaces

  boot_diagnostics {
    storage_uri = var.vm.boot-sa-uri
    enabled     = var.vm.boot-diags
  }

  storage_image_reference {
    id = format(
      "/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Compute/images/%s",
      var.vm-image.subscription-id,
      var.vm-image.rg-name,
      var.vm-image.id
    )
  }

  storage_os_disk {
    name              = var.vm-os-disk.disk-name
    caching           = var.vm-os-disk.caching
    create_option     = var.vm-os-disk.create-option
    managed_disk_type = var.vm-os-disk.disk-type
  }

  os_profile {
    computer_name = lower(
      format(
        "%s-%01d%s",
        var.vm.name-prefix,
        count.index + 1,
        local.l-random
      ),
    )
    admin_username = var.vm.admin-user
    admin_password = var.vm.admin-password
    custom_data    = var.vm.custom-data
  }

  os_profile_linux_config {
    disable_password_authentication = var.vm.password-auth

    ssh_keys {
      path     = "/home/${var.vm.admin-user}/.ssh/authorized_keys"
      key_data = var.public-key
    }
  }

  tags = var.tags
}
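
As a plan-time guard, a custom validation on the variable feeding network_interface_ids would reject the [""] value before the provider ever saw it. This is only a sketch: it assumes a standalone list(string) variable rather than the var.vm object used above, and custom validation blocks require Terraform 0.13 or later, so it would not have applied to the 0.12.19 run reported here.

variable "network_interface_ids" {
  type        = list(string)
  description = "IDs of the network interfaces to attach to the VM"

  validation {
    # Reject empty strings such as [""], the value that triggered this crash.
    condition     = !contains(var.network_interface_ids, "")
    error_message = "The network_interface_ids list must not contain empty strings."
  }
}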

Debug Output

First run:

$ terraform apply targets/dev.out
Acquiring state lock. This may take a few moments...
azurerm_virtual_machine.vm-workers[0]: Creating...

Error: rpc error: code = Unavailable desc = transport is closing

Second run:

$ terraform apply targets/dev.out
Acquiring state lock. This may take a few moments...
azurerm_virtual_machine.vm-workers[0]: Creating...

Error: rpc error: code = Unavailable desc = transport is closing


panic: interface conversion: interface {} is nil, not string
2020-01-14T21:02:38.819-0500 [DEBUG]...

Panic Output

An obfuscated log is available here.

Expected Behavior

The azurerm provider should have complained about being passed an empty string as a network interface ID and behaved as it does when it detects an empty list, like so:

terraform apply targets/dev.out
Acquiring state lock. This may take a few moments...
random_integer.unique-sa-id: Creating...
random_integer.unique-sa-id: Creation complete after 0s [id=9563]
azurerm_virtual_machine.vm-workers[0]: Creating...

Error: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="VirtualMachineMustHaveAtLeastOneNetworkInterface" Message="Virtual machine TEST-VM-1 must have at least one network interface." Details=[]

  on virtual-machines.tf line 1, in resource "azurerm_virtual_machine" "vm-workers":
   1: resource "azurerm_virtual_machine" "vm-workers" {

Actual Behavior

Because the network_interface_ids argument was [""], the provider panicked instead of returning an error.

Steps to Reproduce

In any configuration that creates an azurerm_virtual_machine resource, set network_interface_ids to the invalid value [""], then:

  1. terraform plan (works okay and states the correct resources)
  2. terraform apply (fails - 1st time with error, 2nd time with panic)

Correct the network IDs and the panic no longer occurs; a corrected wiring is sketched below.
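
The fix on my side was to pass real NIC resource IDs instead of the literal empty string. The azurerm_network_interface resource and the var.vm.subnet-id attribute below are hypothetical stand-ins for however the module actually creates its NICs:

resource "azurerm_network_interface" "vm-workers" {
  count               = var.vm.vm-count
  name                = format("%s-NIC-%01d", var.vm.name-prefix, count.index + 1)
  location            = var.location
  resource_group_name = var.vm.rg-name

  ip_configuration {
    name                          = "primary"
    subnet_id                     = var.vm.subnet-id # hypothetical attribute
    private_ip_address_allocation = "Dynamic"
  }
}

# then, inside azurerm_virtual_machine.vm-workers:
#   network_interface_ids = [azurerm_network_interface.vm-workers[count.index].id]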

Important Factoids

This was bad code on my part (I forgot to pass the correct network ID list), but it should have been caught rather than cause a panic. :)


ghost commented Jan 27, 2020

This has been released in version 1.42.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 1.42.0"
}
# ... other configuration ...
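
After updating the version constraint, re-running init with the upgrade flag downloads the newer provider release:

$ terraform init -upgrade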


ghost commented Mar 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

ghost locked and limited conversation to collaborators Mar 28, 2020