Conversation
Etiene commented Oct 11, 2018
- Adds test for vault auto-unseal
Code looks good!
```go
@@ -24,6 +24,7 @@ import (
)

const REPO_ROOT = "../"
const AUTO_UNSEAL_KMS_KEY_ALIAS = "dedicated-test-key"
```
Add a comment explaining what this is, both so we know what it is and so open source community folks know to update it.
test/vault_helpers.go (Outdated)
```go
assertStatus(t, cluster.Standby1, Sealed)
restartVault(t, cluster.Standby1)
assertStatus(t, cluster.Standby1, Standby)
```
Just to make sure I understand correctly... When Vault boots the very first time, it is not initialized and therefore sealed. Then you run `init`... and that unseals the leader... But you have to reboot the other nodes to unseal them? There's no way for those follower nodes to auto-unseal after an `init`?
Yes, that's correct. I went ahead and tested manually: when follower nodes join the cluster after `init`, they come up unsealed. If they had joined before `init`, they are sealed and have to rejoin, like in the test above. Would it be possible to launch just the leader, run `init`, and then add the standby nodes? For example, by adding a delay between the leader and the others, or by creating a cluster of 1, running `init`, and then increasing the size of the ASG?
Ok, I tested the latter option: running `init`, then increasing the cluster size and running `terraform apply` again, and that works like a charm. New nodes boot unsealed. I guess we could just recommend that?
Neat idea! I'm not sure what the official recommendation is, but scaling up the cluster seems like one reasonable way of doing it.
The other option is to identify a "rally point" node that will initialize the cluster. All other nodes ping the rally point and wait for it to boot and run `init` before booting their own Vault instances and auto-unsealing. This might be a nicer user experience. Here's how we identify a rally point in the couchbase code: https://github.com/gruntwork-io/terraform-aws-couchbase/blob/master/modules/couchbase-commons/couchbase-common.sh#L263
Oh, right, as you pointed out on Slack, the rally point approach still means running `init` automatically on the server, which means there's no easy way to save the root token. OK, starting with a cluster of size one, running `init` manually, saving the root token, and then scaling up to 3 seems like a reasonable approach. As long as we clearly document that, I'm good 👍