[BUG] AzureRM - Azure Postgres Flexible Server - Virtual Endpoint Attempts to re-create after Failover #27796
Comments
Also, for what it's worth: I've attempted to add a `lifecycle` block with `prevent_destroy` and it does not work. The only workaround I found is to create a variable, `variable "create_virtual_endpoint" {`, and use it as a bool to decide whether or not to create the resource. On initial creation it would need to be set to true, then changed to false in a separate PR afterwards. I'm trying to reduce the number of steps required. `resource "azurerm_postgresql_flexible_server_virtual_endpoint" "testendpoint" { depends_on = [ lifecycle {`
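The conditional-creation workaround described above could be sketched roughly as follows. This is a hypothetical illustration, not code from the thread: the resource names (`primary`, `replica`), the endpoint name, and the default value are all assumptions.

```hcl
# Hypothetical sketch of the bool-variable workaround; names are placeholders.
variable "create_virtual_endpoint" {
  type    = bool
  default = true # set to true on initial creation, flipped to false in a later PR
}

resource "azurerm_postgresql_flexible_server_virtual_endpoint" "testendpoint" {
  # count gates creation: 0 instances when the variable is false
  count             = var.create_virtual_endpoint ? 1 : 0
  name              = "example-endpoint" # placeholder
  source_server_id  = azurerm_postgresql_flexible_server.primary.id  # assumed resource addresses
  replica_server_id = azurerm_postgresql_flexible_server.replica.id
  type              = "ReadWrite"
}
```

The drawback, as noted above, is the extra step: after the first apply, the variable has to be flipped in a separate change so later applies skip the resource.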
Thanks for raising this issue. `prevent_destroy` needs to be added from the beginning. I can't seem to reproduce this issue. Could you double-check whether the reproduction steps below match yours? Reproduce steps:
tf config:
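The Terraform config attached to this comment was collapsed in the scrape and is not recoverable. A minimal configuration for reproducing the scenario (a primary flexible server, a read replica, and a virtual endpoint) might look like the sketch below; every name, location, SKU, and credential here is an assumption, not taken from the thread.

```hcl
# Hypothetical minimal repro config; all values are placeholders.
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "eastus"
}

resource "azurerm_postgresql_flexible_server" "primary" {
  name                   = "example-primary"
  resource_group_name    = azurerm_resource_group.example.name
  location               = azurerm_resource_group.example.location
  version                = "16"
  administrator_login    = "adminuser"
  administrator_password = "ChangeMe123!" # placeholder; use a secret store in practice
  sku_name               = "GP_Standard_D2ds_v4"
  storage_mb             = 32768
}

resource "azurerm_postgresql_flexible_server" "replica" {
  name                = "example-replica"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  create_mode         = "Replica" # replica of the primary, not HA "ZoneRedundant"
  source_server_id    = azurerm_postgresql_flexible_server.primary.id
}

resource "azurerm_postgresql_flexible_server_virtual_endpoint" "testendpoint" {
  name              = "example-endpoint"
  source_server_id  = azurerm_postgresql_flexible_server.primary.id
  replica_server_id = azurerm_postgresql_flexible_server.replica.id
  type              = "ReadWrite"
}
```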
Apologies, you're using "ZoneRedundant" for the HA mode. It's actually "Replica".
According to the plan that you shared, it is just creating a virtual endpoint (it does not include a destruction step), which may suggest that the virtual endpoint was already destroyed during the failover. Could that be the case? It is not uncommon in failover scenarios that, due to the changes made in the process, the Terraform code becomes outdated. In those situations you need to decide between restoring the original configuration once the situation that triggered the failover is no longer valid, or updating the code to properly describe the new state.
Hey CorrenSoft, thanks for the reply. It actually does NOT destroy the endpoint. I have done some extremely extensive testing on this to replicate it; I can replicate it very easily. If possible, would you be willing to hop on a call with me? No pressure or anything; that way I can show you. My company is a Fortune 500, but we aren't a Terraform Enterprise customer (although we spend a large amount with Hashi :-D). Thanks in advance.
Not sure if that would be appropriate, since I don't work for HashiCorp :p Just to add context: did you say that the failover did not destroy the endpoint? If so, does the apply step actually create a new one?
Oh, I apologize, I thought you did! lol. Yes, the failover did NOT destroy the endpoint, which is expected: the database servers should be able to fail over between each other without destruction. My concern is that Terraform doesn't see the virtual endpoint when it checks state, even though it already exists. It's 100% a bug on Hashi's end. There was another bug related to this that I was able to get someone to fix, but that person no longer works at HashiCorp.
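Until the provider bug is fixed, one way to reconcile Terraform's view with the still-existing endpoint after a failover is to re-import it into state. This is a hedged sketch using Terraform 1.5+'s declarative `import` block; the resource address and the resource ID are placeholders, not values from this issue.

```hcl
# Hypothetical state-reconciliation sketch; the "to" address and "id" are placeholders.
import {
  to = azurerm_postgresql_flexible_server_virtual_endpoint.testendpoint
  # The full Azure resource ID of the existing virtual endpoint (assumed shape):
  id = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server>/virtualEndpoints/<endpoint>"
}
```

On older Terraform versions, the imperative `terraform import <address> <id>` command would serve the same purpose. Either way, this only patches state; it doesn't address the underlying provider behaviour reported here.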
Anyone from Hashi taken a peek at this yet?
I ran into this problem too. After promoting the replica server, Terraform doesn't "know" about the endpoint and tries to create a new one (which ends with an error because of the duplicate name). After promoting again (back to the original), it worked.
@jackofallops could you take a look?
@stephybun could you take a look?
Unsure if anyone is planning to try to fix this, so I gave it a shot here.
Is there an existing issue for this?
Community Note
Terraform Version
0.13
AzureRM Provider Version
4.7.0
Affected Resource(s)/Data Source(s)
azurerm_postgresql_flexible_server_virtual_endpoint
Terraform Configuration Files
Debug Output/Panic Output
Expected Behaviour
Terraform should see that a functional endpoint is already assigned to both resources.
Actual Behaviour
No response
Steps to Reproduce
Whitespace change
Important Factoids
No response
References
No response