
vsan_disk_group in r/vsphere_compute_cluster should be TypeSet and not TypeList #1205

Closed
adarobin opened this issue Sep 8, 2020 · 12 comments
Labels: acknowledged (Status: Issue or Pull Request Acknowledged), area/clustering (Area: Clustering), bug (Type: Bug), size/s (Relative Sizing: Small)
Milestone: v2.3.0

Comments

adarobin (Contributor) commented Sep 8, 2020

Terraform Version

0.13.1

vSphere Provider Version

1.23.0

Affected Resource(s)

  • vsphere_compute_cluster

Terraform Configuration Files

resource "vsphere_compute_cluster" "cluster" {
  provider = vsphere.nested

  name          = var.nested_cluster_name
  datacenter_id = vsphere_datacenter.datacenter.moid

  host_system_ids = [ for k,v in var.esxi_hostname_ip_map : vsphere_host.nested_esxi[k].id ]

  drs_enabled                 = true
  drs_automation_level        = "fullyAutomated"
  ha_enabled                  = true
  ha_admission_control_policy = "disabled"
  force_evacuate_on_destroy   = true
  
  vsan_enabled = true

  dynamic "vsan_disk_group" {
    for_each = var.esxi_hostname_ip_map
    iterator = host

    content {
      cache   = var.vsan_cache_disks[host.key][0]
      storage = var.vsan_capacity_disks[host.key]
    }
  }
}

Debug Output

Panic Output

Expected Behavior

terraform apply is successful. No changes are needed after an initial apply.

Actual Behavior

terraform apply is successful and the vSAN is created, but subsequent terraform plan runs report changes because of the ordering of elements in vsan_disk_group (see the schema sketch after the plan output below).

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.nested-esxi-lab-inner.vsphere_compute_cluster.cluster will be updated in-place
  ~ resource "vsphere_compute_cluster" "cluster" {
        custom_attributes                                     = {}
        ** snipped a bunch of lines here for clarity **
        vsan_enabled                                          = true

      ~ vsan_disk_group {
          ~ cache   = "naa.6000c29f402bc1b9b4bff84f5f4c5bc4" -> "naa.6000c2964873546c663d4fa6e64faa2e"
          ~ storage = [
              + "naa.6000c2906103371d4b758bc6557aa2e0",
              - "naa.6000c2929de009565ef1efdb3077ee9a",
            ]
        }
        vsan_disk_group {
            cache   = "naa.6000c29758b6ba406b8ebfb672056810"
            storage = [
                "naa.6000c29128e01ac02b774e94f953555a",
            ]
        }
      ~ vsan_disk_group {
          ~ cache   = "naa.6000c2964873546c663d4fa6e64faa2e" -> "naa.6000c29f402bc1b9b4bff84f5f4c5bc4"
          ~ storage = [
              - "naa.6000c2906103371d4b758bc6557aa2e0",
              + "naa.6000c2929de009565ef1efdb3077ee9a",
            ]
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
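
For context on the requested change, here is a minimal sketch, assuming the provider declares this block with the terraform-plugin-sdk/v2 helper/schema package, of what switching vsan_disk_group from TypeList to TypeSet could look like. The function name and surrounding details are stand-ins rather than the provider's actual source; the point is that a set identifies elements by their contents instead of their position, so the ordering differences shown above would no longer register as a change.

package vsphere

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// vsanDiskGroupSchema is a hypothetical stand-in for the attribute definition
// in resource_vsphere_compute_cluster.go.
func vsanDiskGroupSchema() *schema.Schema {
  return &schema.Schema{
    // TypeSet hashes each element's contents to identify it, so the order in
    // which disk groups are returned by vCenter (or produced by a dynamic
    // block) does not matter. TypeList compares elements by index, which is
    // what produces the spurious in-place update above.
    Type:     schema.TypeSet,
    Optional: true,
    Elem: &schema.Resource{
      Schema: map[string]*schema.Schema{
        "cache": {
          Type:     schema.TypeString,
          Optional: true,
        },
        "storage": {
          // The inner list could arguably be a set as well, since the plan
          // also shows the capacity disks reordering.
          Type:     schema.TypeList,
          Optional: true,
          Elem:     &schema.Schema{Type: schema.TypeString},
        },
      },
    },
  }
}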

Steps to Reproduce

  1. terraform apply
  2. terraform plan

Important Factoids

This is a nested ESXi test environment.

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@adarobin adarobin added the bug Type: Bug label Sep 8, 2020
bill-rich (Contributor) commented

You are right on this. I'll get on fixing this and adding an acceptance test for it also.

@bill-rich bill-rich added acknowledged Status: Issue or Pull Request Acknowledged size/s Relative Sizing: Small labels Sep 9, 2020
adarobin (Contributor, Author) commented

I've updated my lab vCenter to 7.0 Update 2, and I'm now seeing a crash on applies after the first one.

Terraform 0.14.8
vSphere Provider 1.25.0

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.nested-inner.vsphere_compute_cluster.nested_cluster will be updated in-place
  ~ resource "vsphere_compute_cluster" "nested_cluster" {
        id                                                    = "domain-c46"
        name                                                  = "Nested_Cluster"
        tags                                                  = []
        # (48 unchanged attributes hidden)

      ~ vsan_disk_group {
          ~ cache   = "naa.6000c2953eb428fcfa9239aff69dace8" -> "naa.6000c2924a0609e967d74902511c18c5"
          ~ storage = [
              + "naa.6000c29573c68d0cad85313db6cafe4c",
              - "naa.6000c29574a4795c6988c391cbce82b8",
            ]
        }
      ~ vsan_disk_group {
          ~ cache   = "naa.6000c29c4b780eaf49ea9e674527e7e1" -> "naa.6000c2953eb428fcfa9239aff69dace8"
          ~ storage = [
              - "naa.6000c290c67ee8eb6d35254a31ab21f4",
              + "naa.6000c29574a4795c6988c391cbce82b8",
            ]
        }
      ~ vsan_disk_group {
          ~ cache   = "naa.6000c2924a0609e967d74902511c18c5" -> "naa.6000c29c4b780eaf49ea9e674527e7e1"
          ~ storage = [
              + "naa.6000c290c67ee8eb6d35254a31ab21f4",
              - "naa.6000c29573c68d0cad85313db6cafe4c",
            ]
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.nested-inner.vsphere_compute_cluster.nested_cluster: Modifying... [id=domain-c46]

Error: rpc error: code = Unavailable desc = transport is closing


panic: runtime error: slice bounds out of range [:1] with capacity 0
2021-03-23T16:00:27.002-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:
2021-03-23T16:00:27.002-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: goroutine 28 [running]:
2021-03-23T16:00:27.006-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: github.com/hashicorp/terraform-provider-vsphere/vsphere/internal/helper/structure.DropSliceItem(...)
2021-03-23T16:00:27.006-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vsphere/internal/helper/structure/structure_helper.go:649
2021-03-23T16:00:27.006-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: github.com/hashicorp/terraform-provider-vsphere/vsphere.updateVsanDisks(0xc000217420, 0xc0002f8100, 0x130ac20, 0xc000080a40, 0x0, 0x0)
2021-03-23T16:00:27.006-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vsphere/resource_vsphere_compute_cluster.go:1282 +0x17ce
2021-03-23T16:00:27.006-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: github.com/hashicorp/terraform-provider-vsphere/vsphere.resourceVSphereComputeClusterUpdate(0xc000217420, 0x130ac20, 0xc000080a40, 0x24, 0x216dfe0)
2021-03-23T16:00:27.006-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vsphere/resource_vsphere_compute_cluster.go:609 +0x2be
2021-03-23T16:00:27.009-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).Apply(0xc00009a180, 0xc000144d70, 0xc00014f000, 0x130ac20, 0xc000080a40, 0x133b001, 0xc00046bd58, 0xc000396150)
2021-03-23T16:00:27.009-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vendor/github.com/hashicorp/terraform-plugin-sdk/helper/schema/resource.go:311 +0x273
2021-03-23T16:00:27.009-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Provider).Apply(0xc00022f000, 0xc0005cf8e8, 0xc000144d70, 0xc00014f000, 0xc00034df78, 0xc000286d01, 0x133ce20)
2021-03-23T16:00:27.009-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vendor/github.com/hashicorp/terraform-plugin-sdk/helper/schema/provider.go:294 +0x99
2021-03-23T16:00:27.009-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc00012e6b8, 0x184db90, 0xc000413140, 0xc0008c9440, 0xc00012e6b8, 0xc000413140, 0xc000888a50)
2021-03-23T16:00:27.009-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vendor/github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin/grpc_provider.go:885 +0x8a5
2021-03-23T16:00:27.009-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x1585f00, 0xc00012e6b8, 0x184db90, 0xc000413140, 0xc0008c93e0, 0x0, 0x184db90, 0xc000413140, 0xc0002e6800, 0x1784)
2021-03-23T16:00:27.009-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vendor/github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5/tfplugin5.pb.go:3189 +0x214
2021-03-23T16:00:27.012-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: google.golang.org/grpc.(*Server).processUnaryRPC(0xc00015e160, 0x18566b8, 0xc000190a80, 0xc000238200, 0xc00021ecc0, 0x212f0a0, 0x0, 0x0, 0x0)
2021-03-23T16:00:27.012-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vendor/google.golang.org/grpc/server.go:995 +0x482
2021-03-23T16:00:27.012-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: google.golang.org/grpc.(*Server).handleStream(0xc00015e160, 0x18566b8, 0xc000190a80, 0xc000238200, 0x0)
2021-03-23T16:00:27.012-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vendor/google.golang.org/grpc/server.go:1275 +0xd2c
2021-03-23T16:00:27.012-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc0001821d0, 0xc00015e160, 0x18566b8, 0xc000190a80, 0xc000238200)
2021-03-23T16:00:27.012-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vendor/google.golang.org/grpc/server.go:710 +0xab
2021-03-23T16:00:27.012-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4: created by google.golang.org/grpc.(*Server).serveStreams.func1
2021-03-23T16:00:27.012-0400 [DEBUG] plugin.terraform-provider-vsphere_v1.25.0_x4:      /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-vsphere/vendor/google.golang.org/grpc/server.go:708 +0xa5
2021/03/23 16:00:27 [DEBUG] module.nested-inner.vsphere_compute_cluster.nested_cluster: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021/03/23 16:00:27 [TRACE] EvalWriteState: recording 22 dependencies for module.nested-inner.vsphere_compute_cluster.nested_cluster
2021/03/23 16:00:27 [TRACE] EvalWriteState: writing current state object for module.nested-inner.vsphere_compute_cluster.nested_cluster
2021/03/23 16:00:27 [TRACE] EvalApplyProvisioners: vsphere_compute_cluster.nested_cluster is not freshly-created, so no provisioning is required
2021/03/23 16:00:27 [TRACE] EvalWriteState: recording 22 dependencies for module.nested-inner.vsphere_compute_cluster.nested_cluster
2021/03/23 16:00:27 [TRACE] EvalWriteState: writing current state object for module.nested-inner.vsphere_compute_cluster.nested_cluster
2021/03/23 16:00:27 [TRACE] vertex "module.nested-inner.vsphere_compute_cluster.nested_cluster": visit complete
2021/03/23 16:00:27 [TRACE] dag/walk: upstream of "module.nested-inner (close)" errored, so skipping
2021/03/23 16:00:27 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2021/03/23 16:00:27 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/vsphere\"].nested (close)" errored, so skipping
2021/03/23 16:00:27 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021/03/23 16:00:27 [TRACE] statemgr.Filesystem: creating backup snapshot at terraform.tfstate.backup
2021-03-23T16:00:27.020-0400 [DEBUG] plugin: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/vsphere/1.25.0/linux_amd64/terraform-provider-vsphere_v1.25.0_x4 pid=829 error="exit status 2"
2021/03/23 16:00:27 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 682
2021/03/23 16:00:27 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2021/03/23 16:00:27 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2021/03/23 16:00:27 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
2021-03-23T16:00:27.064-0400 [DEBUG] plugin: plugin exited
2021-03-23T16:00:27.064-0400 [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-03-23T16:00:27.065-0400 [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-03-23T16:00:27.065-0400 [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-03-23T16:00:27.065-0400 [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-03-23T16:00:27.067-0400 [DEBUG] plugin: plugin process exited: path=/usr/bin/terraform pid=546
2021-03-23T16:00:27.067-0400 [DEBUG] plugin: plugin exited
2021-03-23T16:00:27.069-0400 [DEBUG] plugin: plugin process exited: path=/usr/bin/terraform pid=571
2021-03-23T16:00:27.069-0400 [DEBUG] plugin: plugin exited
2021-03-23T16:00:27.070-0400 [DEBUG] plugin: plugin process exited: path=/usr/bin/terraform pid=643
2021-03-23T16:00:27.070-0400 [DEBUG] plugin: plugin exited
2021-03-23T16:00:27.071-0400 [DEBUG] plugin: plugin process exited: path=/usr/bin/terraform pid=668
2021-03-23T16:00:27.071-0400 [DEBUG] plugin: plugin exited
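
The trace points at the structure.DropSliceItem helper (structure_helper.go:649), called from updateVsanDisks. Without quoting that code, the message describes a well-known failure class in Go: re-slicing an empty slice past its capacity. The sketch below uses hypothetical function names, not the provider's, and reproduces the same panic message.

package main

import "fmt"

// takeFirstUnsafe assumes s holds at least one element. With an empty slice,
// s[:1] exceeds the slice's capacity and panics with
// "slice bounds out of range [:1] with capacity 0", the same message as in
// the trace above.
func takeFirstUnsafe(s []string) []string {
  return s[:1]
}

// takeFirstSafe checks the length before slicing.
func takeFirstSafe(s []string) []string {
  if len(s) == 0 {
    return nil
  }
  return s[:1]
}

func main() {
  fmt.Println(takeFirstSafe(nil))   // prints []
  fmt.Println(takeFirstUnsafe(nil)) // panics
}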

waquidvp (Contributor) commented

This was fixed by #1432. It would be nice if @adarobin could try again with the fix.

tenthirtyam (Collaborator) commented

Fixed in #1432.

Recommend: close/fix-committed

tenthirtyam (Collaborator) commented

cc @appilon to close.

github-actions (bot) commented

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jan 22, 2022
appilon (Contributor) commented Mar 8, 2022

As mentioned in #1432, this is what we consider a breaking change, and we will fix it correctly in v3.0.0 of the provider.

@appilon appilon reopened this Mar 8, 2022
@appilon appilon added the breaking-change Status: Breaking Change label Mar 8, 2022
@appilon appilon added this to the v3.0.0 milestone Mar 8, 2022
@tenthirtyam tenthirtyam added the area/clustering Area: Clustering label Mar 8, 2022
@hashicorp hashicorp unlocked this conversation Mar 8, 2022
adarobin (Contributor, Author) commented

@appilon Just curious, for future reference and to build my own knowledge: why is this considered a breaking change?

appilon (Contributor) commented Mar 10, 2022

@adarobin It's a very subtle problem and often up for debate, but the main issue is that users who previously relied on the attribute being TypeList could use syntax like the following:

resource "thing" "ex" {
    name = something.typelist_attr.0.name
}

Lists are indexed by number. Sets, on the other hand, are indexed by a hash value that Terraform calculates from the contents of each element:

resource "thing" "ex" {
    name = something.typeset_attr.876543.name
}

Users with the original list-based config would now see an error along the lines of "no item in typelist_attr with key 0". Figuring out the hash value is non-trivial (the state file stores the data as a JSON array even for TypeSet, so the hash has to be calculated manually), which leaves users with no practical way forward.

Conversely, we are relatively confident that users almost never reference set items individually by their hash index, so we do consider TypeSet --> TypeList a non-breaking change.
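
To illustrate why recovering a set key is impractical for users, here is a small sketch using the SDK's exported hash helpers. The element schema below is a rough stand-in for vsan_disk_group, not the provider's actual definition; the key is an integer hash of the element's serialized contents, so it changes whenever any value inside the element changes.

package main

import (
  "fmt"

  "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func main() {
  // Hypothetical element schema resembling a vsan_disk_group block.
  elem := &schema.Resource{
    Schema: map[string]*schema.Schema{
      "cache": {Type: schema.TypeString, Optional: true},
      "storage": {
        Type:     schema.TypeList,
        Optional: true,
        Elem:     &schema.Schema{Type: schema.TypeString},
      },
    },
  }

  // HashResource builds the kind of SchemaSetFunc a TypeSet uses to index its
  // elements; the integer it returns is the ".<key>" a user would have to
  // reference in configuration.
  hash := schema.HashResource(elem)
  key := hash(map[string]interface{}{
    "cache":   "naa.6000c29f402bc1b9b4bff84f5f4c5bc4",
    "storage": []interface{}{"naa.6000c2929de009565ef1efdb3077ee9a"},
  })
  fmt.Println(key) // e.g. a value like 1234567890
}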

@tenthirtyam tenthirtyam changed the title vsan_disk_group should be TypeSet and not TypeList vsan_disk_group in r/vsphere_compute_cluster should be TypeSet and not TypeList Jul 25, 2022
@tenthirtyam tenthirtyam mentioned this issue Feb 1, 2023
2 tasks
@tenthirtyam tenthirtyam modified the milestones: v3.0.0, v2.3.0 Feb 2, 2023
@tenthirtyam tenthirtyam removed the breaking-change Status: Breaking Change label Feb 2, 2023
tenthirtyam (Collaborator) commented

Resolved in GH-1820.

github-actions (bot) commented Feb 9, 2023

This functionality has been released in v2.3.0 of the Terraform Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

github-actions (bot) commented

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 11, 2023