
apstra_datacenter_device_allocation - device profile issue #625

Closed
pyspiotr opened this issue Mar 27, 2024 · 6 comments · Fixed by #631

Comments

@pyspiotr

Hello,
I got an error when using the apstra_datacenter_device_allocation resource in Apstra 4.2.1.
The attached files contain the output I got after apply.
The error does not show up if device_key is not specified.
Regards,
Piotr
Attachments: graph_query, terraform_outpur
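
For readers without the attachments, a minimal sketch of the resource in question (the blueprint reference, node name, device key, and interface map ID below are hypothetical placeholders, not values from the report):

```hcl
# Sketch of an apstra_datacenter_device_allocation usage. The error was
# reported only when the (optional) device_key attribute was set.
resource "apstra_datacenter_device_allocation" "leaf" {
  blueprint_id             = apstra_datacenter_blueprint.lab.id # hypothetical blueprint
  node_name                = "example_rack_001_leaf1"           # hypothetical switch node
  initial_interface_map_id = "Juniper_vQFX__AOS-7x10-Leaf"      # hypothetical interface map
  device_key               = "52540022A171"                     # hypothetical serial number; omitting this avoided the error
}
```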

@chrismarget-j
Collaborator

Hi @pyspiotr,

Thank you for opening this issue.

Would you mind sharing a sample of the terraform configuration which was being applied in this case?

@pyspiotr
Author

Hi Chris,
Here is the lab setup you can use to replicate the issue: terraform_apstra_lab.zip. There are two steps included:

  1. Initial phase: we build a blueprint with a dummy rack and resources.
  2. Second phase: extension of the existing blueprint with new racks, leaf switches, and so on. If you uncomment the "#device_key = each.value.device_key" line in blueprint.tf (see the sketch below), you will see the error described earlier.

Regards,
Piotr
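
A minimal sketch of what that pattern in blueprint.tf might look like (the local map, node names, and blueprint reference are hypothetical placeholders, not taken from the attached lab):

```hcl
# Sketch of the phase-2 allocation loop. Uncommenting the device_key
# line reproduces the error described above.
resource "apstra_datacenter_device_allocation" "new_leafs" {
  for_each = local.new_leafs # hypothetical map describing the new leaf switches

  blueprint_id             = apstra_datacenter_blueprint.lab.id
  node_name                = each.value.node_name
  initial_interface_map_id = each.value.interface_map_id
  #device_key              = each.value.device_key # <-- uncommenting this triggers the error
}
```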

@chrismarget-j
Collaborator

chrismarget-j commented Mar 29, 2024

Hi @pyspiotr,

Having two graph nodes with the same device_profile_id is a surprise for us. We've got some theories about what might be going on, but would like to take a closer look at the state of your system.

Would you please open a case with Juniper/Apstra TAC and share the "show tech" and a backup from your environment so we can investigate further? The TAC folks can walk you through it.

Thanks!

@chrismarget-j
Collaborator

Hi @pyspiotr,

We've managed to reproduce the condition in a test environment. We may not need the data from your system.

I'll keep you posted.

@chrismarget-j
Collaborator

Update: I'll summarize my current understanding of the situation here.

  • Having two identical Device Profile nodes (clones) in Apstra's graph DB is an unexpected condition.
  • The presence of clones does not present any operational risk to an Apstra-managed fabric.
  • Previous versions of Apstra had issues when upgrading Blueprints which included clones.
  • Apstra support can help with a process to eliminate them.
  • Current Apstra releases may introduce clones when a rack is imported into an existing Blueprint under the following conditions:
    • The rack includes a Logical Device which does not currently exist in the Blueprint.
    • An Interface Map which references that Logical Device is available in the Catalog.
    • That Interface Map does not appear in the Blueprint.
    • That Interface Map references a Device Profile which does appear in the Blueprint.
  • A fix to the rack import logic will be incorporated into a future Apstra release so that clones will no longer be created under these circumstances.
  • The terraform provider naively assumes that clones will not exist in the graph DB.
  • We'll update the terraform provider so that clones are handled seamlessly.

You may be able to work around the problem by including an example of each Logical Device in the racks which comprise the Template from which the Blueprint is initially created (so that no rack import introduces a new Logical Device), or by proactively importing the required Interface Map prior to importing the new racks.
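
As a hedged illustration of the first workaround, the Template could be built from rack types that together exercise every Logical Device the later phases will need. A minimal sketch, assuming the provider's rack-based template resource and hypothetical rack type and Logical Device names:

```hcl
# Sketch: include a rack type in the initial Template that already uses
# the Logical Devices the phase-2 racks will introduce, so no later rack
# import brings a new Logical Device into the Blueprint.
resource "apstra_template_rack_based" "lab" {
  name                     = "lab_template"
  asn_allocation_scheme    = "unique"
  overlay_control_protocol = "evpn"

  spine = {
    logical_device_id = "AOS-7x10-Spine" # hypothetical spine Logical Device
    count             = 2
  }

  rack_infos = {
    (apstra_rack_type.dummy.id)  = { count = 1 } # the phase-1 dummy rack
    (apstra_rack_type.phase2.id) = { count = 1 } # covers the Logical Devices needed later
  }
}
```

With every Logical Device represented in the Blueprint from the start, a later rack import should not reach the clone-creating code path described above.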

We'll update this issue when the terraform provider includes a fix for seamless handling of the clones.

@pyspiotr
Author

pyspiotr commented Apr 8, 2024

Works perfectly! Thanks a lot.
