Fix RedHat install - until condition was not correct #423
Conversation
Since this is a community-submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?
jenkins test this please
tasks/elasticsearch-RedHat.yml (Outdated)
@@ -19,7 +19,7 @@
   when: es_use_repository
   register: redhat_elasticsearch_install_from_repo
   notify: restart elasticsearch
-  until: '"failed" not in redhat_elasticsearch_install_from_repo'
+  until: not redhat_elasticsearch_install_from_repo.failed
Does this work for you?
For me locally, and on the Jenkins job, it fails consistently with:
18:44:52 TASK [elasticsearch : RedHat - Install Elasticsearch] **************************
18:45:02 fatal: [localhost]: FAILED! => {"msg": "The conditional check 'not redhat_elasticsearch_install_from_repo.failed' failed. The error was: error while evaluating conditional (not redhat_elasticsearch_install_from_repo.failed): 'dict object' has no attribute 'failed'"}
It seems to be failing because the "failed" key doesn't exist in the registered result until the task has finished. I just tried until: redhat_elasticsearch_install_from_repo.rc == 1 and it seems to work as expected.
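For orientation, the whole task discussed in this thread would look roughly like the sketch below. This is not the playbook's exact code: the module arguments, retry count, and delay are assumptions, and the success check is written as rc == 0 (a zero return code conventionally indicates a successful yum run) rather than the rc == 1 tried in the comment above.

```yaml
# A sketch of the install task being discussed -- not the playbook's exact
# code. Module arguments, retries, and delay are assumptions; only the
# register/until/notify wiring comes from the diff above.
- name: RedHat - Install Elasticsearch
  yum:
    name: elasticsearch
    state: present
  when: es_use_repository
  register: redhat_elasticsearch_install_from_repo
  # rc == 0 (conventional yum success) is assumed here; the comment above
  # reports trying rc == 1.
  until: redhat_elasticsearch_install_from_repo.rc == 0
  retries: 5
  delay: 10
  notify: restart elasticsearch
```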
Indeed, it does not work for new machines. On the machine where I initially tried it, it worked... but Ansible had run there many times. Though I was wondering what the rationale is behind using
Here is the commit where it was added: e3c71a7f. My best guess is that it was added to retry on network connection errors. I can imagine it is pretty annoying if you are deploying a massive 100+ node cluster and a single node fails because of a network error with yum.
Given the huge number of different users and use cases for this playbook, I would rather leave it in unless there is a really compelling reason to remove it. I don't feel it adds any extra complexity or maintenance to the playbook, and removing it could cause some users to hit connection issues during yum installs. Were you able to test if
Ok, pretty valid points. Yes, with the change it is ok. (It still fails, but because of #426.)
Just fixed 6.2 support with #431. If you rebase on master you should be able to test properly now.
ignore it until the following issue fixed elastic/ansible-elasticsearch#423
Got this issue when installing Elasticsearch 5.5.3 on CentOS 7 (and also had to enable the "base" repo) on the server for BitBucket Datacenter. Fixed with
Hit this while installing 6.2.4 with Ansible 2.5.2.
jenkins test this please
All tests are passing now! Thanks for this @eRadical!
I was getting
fatal: [redacted_hostname]: FAILED! => {"attempts": 5, "changed": false, "failed": true, "msg": "", "rc": 0, "results": ["elasticsearch-6.2.1-1.noarch providing elasticsearch is already installed"]}
because "failed" was present in the response the whole time.
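The behaviour reported above follows from how Jinja2 evaluates the original condition: `in` applied to a registered result tests the dict's keys, so '"failed" not in redhat_elasticsearch_install_from_repo' is false whenever the result carries a "failed" key at all, even failed: false. A minimal illustration (a hypothetical debug task, not part of the role):

```yaml
# Hypothetical illustration: membership tests the registered result's keys,
# so this prints true whenever the result includes a "failed" key,
# regardless of whether the task actually failed.
- name: Show that the old until condition keys off dict membership
  debug:
    msg: "{{ 'failed' in redhat_elasticsearch_install_from_repo }}"
```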