[BUG] Salt-Master Public IP Change #61482
Comments
Hi there! Welcome to the Salt Community! Thank you for making your first contribution. We have a lengthy process for issues and PRs. Someone from the Core Team will follow up as soon as possible. In the meantime, here’s some information that may help as you continue your Salt journey.
There are lots of ways to get involved in our community. Every month, there are around a dozen opportunities to meet with other contributors and the Salt Core team and collaborate in real time. The best way to keep track is by subscribing to the Salt Community Events Calendar.
This could be a new occurrence of DNS caching not being refreshed by glibc; see #21397. Setting up a test environment to duplicate the issue.
Confirmed this occurs in a test environment with the Master, Minion, and a separate DNS server on another VM.
After the master's IP address changes, the minion is not seen.
If the minion is restarted, it picks up the new master IP address and communicates fine. Sample section of the minion's configuration file:
It appears that this has been broken since Salt 2015.5 (Py 2.7, tested on CentOS 7).
The problem is incorrect documentation for the minion's configuration file, typically /etc/salt/minion (lines 265 to 280 in 2d29d45).
The correct setting for auth_safemode is True; the file and documentation will be updated accordingly.
In order for the salt-minion to recognize master IP changes, the minion's configuration file needs at least the following entries: ping_interval:
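As a sketch, such a minion config fragment might look like the following. The specific values are illustrative assumptions, not the confirmed fix; auth_safemode: True is taken from the comment above, and master_alive_interval from the Salt docs discussion later in this thread:

```yaml
# Illustrative /etc/salt/minion fragment (values are assumptions)
master: salt.someprovider.com   # FQDN, so DNS can be re-resolved
ping_interval: 2                # minutes between pings to the master
auth_safemode: True             # restart rather than retry on auth failure
master_alive_interval: 30       # seconds between master connectivity checks
```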
Note the documentation mistake is from 55e38a9.
Fix in PR #61577.
Check for a changing DNS record anytime a minion gets disconnected from its master. See GitHub issues saltstack#63654 and saltstack#61482.
* Minions check DNS when re-connecting to a master: check for a changing DNS record anytime a minion gets disconnected from its master. See GitHub issues saltstack#63654 and saltstack#61482.
* Regression tests for DNS-defined masters: add tests to validate that we check for changing DNS anytime we're disconnected from the currently connected master.
* Update docs for master DNS changes: use master_alive_interval to detect master IP changes via DNS.
* Remove a comment which is no longer true.
* Make the minion reconnect on a changing master IP with the zeromq transport.
* Don't create a schedule for alive if no master_alive_interval.
* Skip the tests if running with a non-root user.
* Skip if unable to set an additional IP address.
* Set master_tries to -1 for minions.
* Fix the tests.
Co-authored-by: Daniel A. Wozniak <daniel.wozniak@broadcom.com>
BACKPORT-UPSTREAM=saltstack#66422 BACKPORT-UPSTREAM=saltstack#66757 BACKPORT-UPSTREAM=saltstack#66760
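The fix described above amounts to re-resolving the master's FQDN whenever the minion loses its connection, instead of trusting the address resolved at startup. A minimal sketch of that check (a hypothetical helper, not Salt's actual code):

```python
import socket


def master_ip_changed(master_fqdn: str, cached_ip: str) -> bool:
    """Return True when the master's name no longer resolves to the
    address the minion is currently connected to.

    Hypothetical helper illustrating the re-resolve-on-disconnect idea;
    this is not the actual Salt implementation.
    """
    try:
        # Fresh A/AAAA lookup of the master's name.
        current = {ai[4][0] for ai in socket.getaddrinfo(master_fqdn, None)}
    except socket.gaierror:
        # Resolution failed entirely; keep retrying the cached address.
        return False
    return cached_ip not in current
```

On disconnect, a minion following this scheme would run the check before retrying; if it returns True, it reconnects to the freshly resolved address rather than the stale one.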
Description
Given:
Salt-master behind a firewall with a dynamic IP address. Ports 4505 and 4506 are forwarded to the Salt master VM (CentOS 8).
Salt-minions that point to the FQDN salt.someprovider.com.
When the public IP of the firewall changes, the DNS entry is updated accordingly. After that change, DNS resolution on the salt-minions updates automatically, yet the minions do not communicate with the master until the salt-minion service is restarted.
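The underlying behavior: a long-lived minion resolves the master's name once at connect time and keeps using that address, even though a fresh lookup would return the new record. A small illustration of what a fresh lookup returns (the port is Salt's publish port; the helper name is ours):

```python
import socket


def resolve_master(fqdn: str, port: int = 4505) -> set:
    """Perform a fresh lookup of the name's A/AAAA records.

    A minion that resolved the name only once at startup keeps the old
    address even after this fresh lookup would return a new one.
    """
    return {
        ai[4][0]
        for ai in socket.getaddrinfo(fqdn, port, proto=socket.IPPROTO_TCP)
    }
```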
Setup
The salt-minions are VMs in ESXi 6.7 (CentOS 7).
Steps to Reproduce the behavior
Change the public IP of the Salt Master and try running any command.
Expected behavior
The expectation is that, after the DNS entry is updated and propagated to all minions, communication to the master is restored.
Versions Report
salt --versions-report
(Provided by running salt --versions-report. Please also mention any differences in master/minion versions.)