Fix scaling #5889
Conversation
Hi @champtar. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Update: I've also fixed master scaling.
Force-pushed from 0609c68 to 10a8b51.
Commits:

* etcd: etcd-events doesn't depend on etcd_cluster_setup
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* etcd: remove condition already present on include_tasks
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* etcd: fix scaling up
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* etcd: use *access_addresses, do not delegate to etcd[0]
  We want to wait for the full cluster to be healthy, so use all the cluster addresses. Also we should be able to run the playbook when etcd[0] is down (not tested), so do not delegate to etcd[0].
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* etcd: use failed_when for health check
  An unhealthy cluster is expected on first run, so use failed_when instead of ignore_errors to remove scary red messages. Also use run_once (see the sketch after this list).
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* kubernetes/preinstall: ensure ansible_fqdn is up to date after changing /etc/hosts
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* kubernetes/master: regenerate apiserver cert if needed
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
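A minimal sketch of the health-check pattern the two etcd commits above describe, assuming Kubespray-style variable names (`bin_dir`, `etcd_access_addresses`, `etcd_cert_dir`) and illustrative cert file names; this is not the exact task from the PR:

```yaml
# Sketch only: query every member listed in etcd_access_addresses instead of
# delegating to etcd[0], and treat an unhealthy answer as expected on first run.
- name: Check etcd cluster health against all members
  command: "{{ bin_dir }}/etcdctl endpoint health --endpoints={{ etcd_access_addresses }}"
  register: etcd_health_check
  failed_when: false      # first run, the cluster is not formed yet, so don't abort
  changed_when: false
  run_once: true          # one node runs the check, no delegation to etcd[0]
  environment:
    ETCDCTL_API: "3"
    ETCDCTL_CACERT: "{{ etcd_cert_dir }}/ca.pem"
    ETCDCTL_CERT: "{{ etcd_cert_dir }}/member-{{ inventory_hostname }}.pem"
    ETCDCTL_KEY: "{{ etcd_cert_dir }}/member-{{ inventory_hostname }}-key.pem"
```

The idea, per the commit messages: `failed_when: false` keeps an expected first-run failure from being reported as an error, and querying all addresses waits for the whole cluster rather than just etcd[0].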
Can someone add
/assign @Miouge1
I tested this locally, works nicely. Thank you @champtar. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: champtar, Miouge1

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
* etcd: etcd-events doesn't depend on etcd_cluster_setup
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* etcd: remove condition already present on include_tasks
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* etcd: fix scaling up
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* etcd: use *access_addresses, do not delegate to etcd[0]
  We want to wait for the full cluster to be healthy, so use all the cluster addresses. Also we should be able to run the playbook when etcd[0] is down (not tested), so do not delegate to etcd[0].
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* etcd: use failed_when for health check
  unhealthy cluster is expected on first run, so use failed_when instead of ignore_errors to remove scary red messages. Also use run_once.
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* kubernetes/preinstall: ensure ansible_fqdn is up to date after changing /etc/hosts
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* kubernetes/master: regenerate apiserver cert if needed
  Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

(cherry picked from commit a35b6dc)
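As a rough illustration of the kubernetes/preinstall item above, re-gathering facts after /etc/hosts changes keeps ansible_fqdn current; the task and template names below are hypothetical, not the PR's actual ones:

```yaml
# Sketch only: once /etc/hosts has been rewritten, cached facts such as
# ansible_fqdn may be stale, so gather facts again before later roles use them.
- name: Write cluster entries into /etc/hosts
  template:
    src: hosts.j2          # hypothetical template name
    dest: /etc/hosts
    mode: "0644"
  register: etc_hosts_update

- name: Refresh facts so ansible_fqdn matches the new /etc/hosts
  setup:
    gather_subset: min
  when: etc_hosts_update is changed
```

Without the refresh, later roles would still see the ansible_fqdn value cached when facts were first gathered.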
Merge of 'master' of https://github.com/kubernetes-sigs/kubespray (21 commits), including this PR:

* Remove hard-coded dependance to docker.service in kubelet.service file (kubernetes-sigs#5917)
* Update Calico to v3.13.2, Multus to v3.4.1. Add ConfigMap get permission to allow calico-node access to kubeadm config. (kubernetes-sigs#5912)
* Fix idempotence issue in bootstrap-os (kubernetes-sigs#5916)
* Terraform/OpenStack: Fix idempotency bug in module.network.openstack_networking_router_interface_v2.k8s[0] (kubernetes-sigs#5914)
* Add kubernetes 1.18.1 hashes (kubernetes-sigs#5915)
* Proxy fixes (kubernetes-sigs#5869)
* Remove 1.16.x flag for tf-ovh_coreos-calico (now 1.17 ready) (kubernetes-sigs#5853)
* Update docker RHEL/CentOS versions to the latest patch versions available. (kubernetes-sigs#5872)
* Fix conntrack for opensuse and docker support (kubernetes-sigs#5880)
* Add crictl 1.18.0 hashes for k8s 1.18 (kubernetes-sigs#5877)
* fix readonly flexvolume in fcos and coreos (kubernetes-sigs#5885)
* Fix scaling (kubernetes-sigs#5889)
* Fix chicken and egg problem with proxy_env not defined on the first … (kubernetes-sigs#5896)
* make explicit that doc is at kubespray.io (kubernetes-sigs#5878)
* add local-path-provosioner helper image def (kubernetes-sigs#5817)
* remove unused kubelet options (kubernetes-sigs#5903)
* Change docker.io repo to variable and upgrade alb image (kubernetes-sigs#5898)
* Replace latest tags for csi drivers (kubernetes-sigs#5899)
* CentOS 8 CI (kubernetes-sigs#5842)
* Bump requirements.txt versions / remove ansible_python_interpreter hack (kubernetes-sigs#5847)
* ...
What type of PR is this?
/kind bug
What this PR does / why we need it:
Allow scaling from 1 node to 4 nodes (2 masters, 3 etcd, all of them workers) by just running `cluster.yml`.
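For illustration only, a hypothetical inventory after adding the three new nodes (group names follow Kubespray's sample layout of that era; host names and addresses are made up), followed by the single command that performs the scale-up:

```yaml
# inventory/mycluster/hosts.yml -- hypothetical, after adding node2..node4
all:
  hosts:
    node1:
      ansible_host: 10.0.0.1
    node2:
      ansible_host: 10.0.0.2
    node3:
      ansible_host: 10.0.0.3
    node4:
      ansible_host: 10.0.0.4
  children:
    kube-master:       # 2 masters
      hosts:
        node1:
        node2:
    etcd:              # 3 etcd members
      hosts:
        node1:
        node2:
        node3:
    kube-node:         # all nodes are workers
      hosts:
        node1:
        node2:
        node3:
        node4:
    k8s-cluster:
      children:
        kube-master:
        kube-node:

# Then re-run the full playbook:
#   ansible-playbook -i inventory/mycluster/hosts.yml -b cluster.yml
```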
Which issue(s) this PR fixes:
NONE
Special notes for your reviewer:
See commit messages.
Does this PR introduce a user-facing change?: