
Bootstrap HA #2002

Open
thomasdanan opened this issue Nov 5, 2019 · 3 comments
Labels
kind:epic (High-level description of a feature iteration), topic:deployment (Bugs in or enhancements to deployment stages)

Comments

@thomasdanan (Contributor) commented Nov 5, 2019:

We want high availability for the bootstrap node, since it is the single access point for all operations (through salt-master), it is where the container images are served from, and the CA may be part of the bootstrap node as well. An active/passive approach (especially for the salt-master) is probably acceptable.

@gdemonet added the kind:epic and topic:deployment labels on Nov 5, 2019
@TeddyAndrieux (Collaborator) commented:
POC for registry HA (only one part of "Bootstrap HA"):

1. Copy all ISOs to another host, at the same path as on the bootstrap node (to match what is in the bootstrap config).
2. Mount the ISO(s): `salt-call state.sls metalk8s.archives.mounted saltenv=<saltenv>`
3. Deploy the repository: `salt-call state.sls metalk8s.repo.installed saltenv=<saltenv>`
4. Reconfigure ALL containerd instances to use both registry endpoints and restart them (note: to do this with the Salt state, the code needs some changes to support multiple endpoints for the repository; see the sketch after this list).

Containerd will automatically try both endpoints when pulling images, hence "registry HA".
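For illustration, here is a minimal sketch of what step 4 could look like in containerd's CRI registry configuration (containerd 1.4.x mirror syntax, matching the containerd/1.4.3 user agent in the logs below). The mirror host name is taken from the logs (metalk8s-registry-from-config.invalid); the endpoint IPs and port are hypothetical placeholders, not the values MetalK8s actually renders:

```toml
# /etc/containerd/config.toml (sketch only, not the exact MetalK8s-generated file)
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."metalk8s-registry-from-config.invalid"]
  # Bootstrap node first, second repository host as fallback;
  # containerd tries each endpoint in order when pulling.
  endpoint = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
  ]
```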

@NicolasT (Contributor) commented:

> Containerd will automatically try both endpoints when pulling images, hence "registry HA".

What happens if both registries are up, but one doesn't have all the ISOs (yet), and returns a 404 when containerd requests an image/layer? Will containerd try the other endpoints as well?

@TeddyAndrieux (Collaborator) commented:

Right, we discussed this during our standup this morning, and yes, it works.

First repo (the 2.10 ISOs are not there yet):

```
2021-05-21T12:52:56.084559995Z stdout F 10.100.6.146 - - [21/May/2021:12:52:56 +0000] "HEAD /v2/metalk8s-2.10.0-dev/kube-apiserver/manifests/v1.21.0?ns=metalk8s-registry-from-config.invalid HTTP/1.1" 404 0 "-" "containerd/1.4.3" "-"
```

Second repo (with the ISO mounted and configured):

```
2021-05-21T12:52:56.082589301Z stdout F 10.100.6.146 - - [21/May/2021:12:52:56 +0000] "HEAD /v2/metalk8s-2.10.0-dev/kube-apiserver/manifests/v1.21.0?ns=metalk8s-registry-from-config.invalid HTTP/1.1" 200 0 "-" "containerd/1.4.3" "-"
```
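For reference, containerd's probe can be replayed by hand to check which repository actually serves a given manifest. A sketch with curl, assuming the same hypothetical repository addresses as in the config sketch above:

```shell
# HEAD the manifest on each repository endpoint (hosts and port are placeholders):
# a 404 means that repo does not have the image yet, a 200 means it does.
curl -I "http://10.0.0.1:8080/v2/metalk8s-2.10.0-dev/kube-apiserver/manifests/v1.21.0"
curl -I "http://10.0.0.2:8080/v2/metalk8s-2.10.0-dev/kube-apiserver/manifests/v1.21.0"
```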

TeddyAndrieux added commits referencing this issue (six on May 28, 2021 and four on Jun 2, 2021).