pillars not updated on minions until salt-minion is restarted #31907
Comments
@aabognah, thanks for reporting. What happens when you do
I updated to the new 2015.8.8.2 version but the problem still exists. I have another setup with another master (running on RHEL6) and I don't see the problem on that one. Here is the versions report of the WORKING master (where minions DO NOT need to be restarted for pillar updates to show up):
And here is the versions report of the NON-WORKING master (where the minions NEED TO BE RESTARTED after the pillars are updated for changes to take effect):
The minions for both masters look similar and are all RHEL6/7 or OEL. Here is a versions report of one minion:
@jfindlay is there a workaround that I can implement to fix this?
@aabognah, not that I know of.
Hmm, my first reaction here is that this might be related to the difference in git provider libs. If you take pygit2 down to the version on the working master, does this problem go away?
Does the fact that I have two masters set up in a redundant-master configuration have anything to do with this? The setup was based on a walkthrough: the two masters have the same key, minions are configured to check in with both masters, and both masters look at the same repository for gitfs and git_pillar. Do I need to keep the local cache files on each master in sync in order to solve this?
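(For reference, a minimal sketch of the multi-master minion configuration being described; the hostnames below are placeholders, not the ones from this setup.)

    # /etc/salt/minion (illustrative only; master names are hypothetical)
    master:
      - salt-master-1.example.com
      - salt-master-2.example.com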
Same here - here is our minion:
We're struggling with this issue and we're running a multimaster topology. Changes in pillars are not updated in pillar.get and pillar.item calls when you target minions from one of the masters after you've executed saltutil.refresh_pillar. There seems to be little activity on this and the related issues linked here. Is it something that's being worked on, or are there workarounds we can use, perhaps a different multimaster topology?
I can confirm this issue is still occurring in 2016.3.8:
The only workaround, even in a simple master/minion setup, is to restart the salt-minion. Neither of the associated issues has been addressed.
This issue is still occurring on 2017.7.2
Still reproducible on 2018.3
Is it going to be addressed soon?
I'm hitting the same issue in 2018.3.2.
Same here with 2018.3.2
I ran into something similar, but a restart of the minion didn't help. My problem was with how we use packer and the salt-masterless provisioner. The provisioner copies pillar and salt files under /srv. The fix was to do a clean-up (see https://www.packer.io/docs/provisioners/salt-masterless.html), sketched below.
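(The exact clean-up step did not survive in this thread, so this is only a hedged sketch of the kind of final provisioning step meant here; the paths are assumptions based on the default /srv locations.)

    # Hypothetical last provisioning step, run after salt-masterless has finished,
    # so the baked image does not keep stale copies of the state and pillar trees.
    rm -rf /srv/salt /srv/pillar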
One interesting thing: |
Seeing similarly strange things in 2019.2.0 where, having updated a plain YAML/SLS-style pillar file on the master (and tried restarting the master and the minions), the old pillar data still sticks around. A bit more debugging, running commands on a minion:
The pillar that you see depends on how you ask for it. I was expecting to have stumbled onto a weird edge case of my own making, but I'm instead surprised by how long this has been a problem for other users. How can we help you get more information to fix this? It's fundamental to why people use Salt: repeatability. Right now, Salt can literally deploy the wrong things on the wrong hosts.
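(One way to compare a minion's in-memory pillar against what the master would compile right now: pillar.raw reads the in-memory copy, while pillar.items asks the master for a fresh compile. The key name is a placeholder.)

    salt-call pillar.raw some_key        # in-memory pillar as the minion currently sees it
    salt-call pillar.items               # pillar freshly compiled by the master
    salt-call saltutil.refresh_pillar    # ask the minion to reload its in-memory pillar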
I found that I could work around my problems by doing this on the master:
I needed to do the above, but also renamed a cache directory under
I'm having the same problem. I just noticed that one of my (Windows) minions is not refreshing its in-memory pillar data after a saltutil.refresh_pillar. I'm on Salt 3000 on my minions and Salt 3001 on the master. I don't see what setting pillar_cache: False on the master would do since that's supposed to be the default, but I'm trying it anyway. I have done that, and I have deleted all of the directories in /var/cache/salt/master/minions just to see what happens. I also notice that pillar-based scheduling stops doing anything on these minions once the refresh stops working. In my particular case there could be some kind of timeout issue lurking in the background. I schedule a saltutil.refresh_pillar, but in the scheduling specification I don't see how to include a timeout value. If the salt master is not available to the minion at the time the function is called, it's possible that the job never returns, which may be the cause of what I'm seeing (somehow). Sorry for this stream-of-consciousness babble; I'm trying to understand what's going on. What I said about refresh_pillar makes no sense since that just causes a signal to be sent. I am seeing this happening on machines that I believe are suspending (usually laptops) and then waking up. I notice that, since the pillar is apparently frozen, schedule.next_fire_time for all of the events specified in the pillar also becomes frozen, and all the times become times in the past.
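(A minimal sketch of the kind of schedule entry being described, assuming it lives in the minion configuration; the job name and interval are placeholders, not the actual setup.)

    # /etc/salt/minion.d/schedule.conf (illustrative)
    schedule:
      periodic_pillar_refresh:
        function: saltutil.refresh_pillar
        minutes: 30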
Alright, I apologise for the previous post. I wasn't really ready to say anything but I am now (sort of). Some things I have observed:
That's all I've got. I have no idea how I could possibly triangulate this. I hope that this can be looked at because I consider it to be a quite serious problem with core functionality. If it is due to network disruptions and cannot be fixed (for instance due to how ZeroMQ is implemented), then the FAQ should have workarounds for that situation. On Windows machines I believe I can have the scheduler restart minions after waking up (which I will try next, I think). This may be an adequate workaround, if not ideal (fingers crossed).
This seems to be 90-100% resolved for Windows minions by having the salt-minion service restart after waking up from sleep. I don't know what the situation is for Linux minions. I now have much more reliability with minions (specifically the laptops) reporting in regularly and actually carrying out their scheduled events.
I am new to Salt and was following an older tutorial when I ran into the same issue. It seems that the expected folder structure changed. The tutorial said that I should store both my state and pillar data in the same directory, but the default master configuration expects pillar data in its own root:
##### Pillar settings #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
#pillar_roots:
# base:
#  - /srv/pillar

Once I moved my files to the correct folders everything started to work :)

EDIT: Another beginner problem I ran into - when creating sub-folders in
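(For anyone else following an older tutorial, a sketch of the layout the default configuration above expects, with states under /srv/salt and pillar data under /srv/pillar; the file names are examples only.)

    /srv/salt/top.sls          # state top file
    /srv/salt/webserver.sls    # example state
    /srv/pillar/top.sls        # pillar top file
    /srv/pillar/data.sls       # example pillar data

    # /srv/pillar/top.sls
    base:
      '*':
        - data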
I can make this happen with 3002.5 master and minions (not multi-master). It is random and hard to reproduce, but when it happens the following things do not help:
Eventually, the problem resolves itself. Calling pillar.items repeatedly might help resolve it, but it's hard to say definitively. Certainly calling it once doesn't always fix the problem, but eventually it does.
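(A hedged sketch of how one might watch for the problem resolving itself; the minion ID, pillar key, and expected value are placeholders.)

    # Keep refreshing and re-checking the in-memory pillar until the change shows up
    while ! salt 'web01' pillar.item some_key | grep -q 'expected_value'; do
        salt 'web01' saltutil.refresh_pillar
        sleep 30
    done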
I am closing this as
Description of Issue/Question
When using git_pillar, if I make a change to the pillar data in the repo and run:
salt '*' saltutil.refresh_pillar
The pillar data is not updated. Only if the minion is restarted do the new pillars show up when I run:
salt '*' pillar.item pillar_name
The git_pillar config file in /etc/salt/master.d/pillar_config.conf:
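(The contents of that file did not survive here; the following is only an illustrative new-style git_pillar configuration for 2015.8, not the reporter's actual file. The repository URL is a placeholder.)

    # /etc/salt/master.d/pillar_config.conf (illustrative only)
    git_pillar_provider: pygit2
    ext_pillar:
      - git:
        - master https://git.example.com/pillar-repo.git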
Steps to reproduce:
I am not sure how to reproduce this. I am using the same repo for gitfs and git_pillar, and all hosts are RHEL6/7 in a virtual environment (VMware).
Versions Report