
TASK [kubernetes/preinstall : Hosts | populate inventory into hosts file] failed with error message "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'address'" #5217

Closed
gashev opened this issue Sep 28, 2019 · 11 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@gashev

gashev commented Sep 28, 2019

Environment:

  • Cloud provider or hardware configuration:
    bare metal

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):

Linux 3.10.0-957.21.3.el7.x86_64 x86_64
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Version of Ansible (ansible --version):
ansible 2.7.12
  config file = /tmp/kubespray/ansible.cfg
  configured module search path = ['/tmp/kubespray/library']
  ansible python module location = /tmp/p3/lib/python3.6/site-packages/ansible
  executable location = /tmp/p3/bin/ansible
  python version = 3.6.8 (default, Aug  7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

Kubespray version (commit) (git rev-parse --short HEAD):

8712bdd

Network plugin used:
calico

Copy of your inventory file:

[all]
node1 ansible_host=10.2.0.2 ip=10.2.0.2
node2 ansible_host=10.2.0.3 ip=10.2.0.3
node3 ansible_host=10.2.0.4 ip=10.2.0.4
node4 ansible_host=10.2.0.5 ip=10.2.0.5
node5 ansible_host=10.2.0.6 ip=10.2.0.6

[kube-master]
node1

[etcd]
node1
node2
node3

[kube-node]
node2
node3
node4
node5

[calico-rr]

[k8s-cluster:children]
kube-master
kube-node
calico-rr

Command used to invoke ansible:

ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml

Output of ansible run:

TASK [kubernetes/preinstall : Hosts | populate inventory into hosts file] ************************************************************************
Saturday 28 September 2019  08:38:13 +0000 (0:00:00.697)       0:00:39.704 **** 
fatal: [node2]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'address'\n\nThe error appears to have been in '/tmp/kubespray/roles/kubernetes/preinstall/tasks/0090-etchosts.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Hosts | populate inventory into hosts file\n  ^ here\n"}
fatal: [node3]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'address'\n\nThe error appears to have been in '/tmp/kubespray/roles/kubernetes/preinstall/tasks/0090-etchosts.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Hosts | populate inventory into hosts file\n  ^ here\n"}
fatal: [node4]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'address'\n\nThe error appears to have been in '/tmp/kubespray/roles/kubernetes/preinstall/tasks/0090-etchosts.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Hosts | populate inventory into hosts file\n  ^ here\n"}
fatal: [node5]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'address'\n\nThe error appears to have been in '/tmp/kubespray/roles/kubernetes/preinstall/tasks/0090-etchosts.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Hosts | populate inventory into hosts file\n  ^ here\n"}
fatal: [node1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'address'\n\nThe error appears to have been in '/tmp/kubespray/roles/kubernetes/preinstall/tasks/0090-etchosts.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Hosts | populate inventory into hosts file\n  ^ here\n"}

Anything else we need to know:

@gashev gashev added the kind/bug Categorizes issue or PR as related to a bug. label Sep 28, 2019
@paunmihai

Any update on this? I'm facing the same issue when trying to install on CentOS 7.5.1804 running in a VirtualBox VM.

@Docteur-RS

Duplicate of #4750, I guess...
A true fix would be great, if it's even doable...

@champtar
Contributor

https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubespray-defaults/defaults/main.yaml#L393

# Set 127.0.0.1 as fallback IP if we do not have host facts for host
fallback_ips_base: |
  ---
  {% for item in groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([])|unique %}
  {{ item }}: "{{ hostvars[item].get('ansible_default_ipv4', {'address': '127.0.0.1'})['address'] }}"
  {% endfor %}
fallback_ips: "{{ fallback_ips_base | from_yaml }}"

So in some cases, ansible_default_ipv4 is present but doesn't contain address, as shown by adding a debug task:

TASK [kubernetes/preinstall : debug] *********************************************************************************************************************************************************************************************************
Wednesday 22 January 2020  16:50:24 -0500 (0:00:01.189)       0:01:15.581 ***** 
ok: [etienne-ks141] => {
    "ansible_default_ipv4": {}
}
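The failure mode can be reproduced in plain Python (a minimal sketch mirroring the Jinja2 expression in fallback_ips_base above): the .get() default only applies when the key is entirely absent from hostvars, not when ansible_default_ipv4 is present but empty.

```python
# Sketch of the fallback_ips_base lookup, as plain Python dicts.
hostvars_missing = {}                           # no facts gathered at all
hostvars_empty = {"ansible_default_ipv4": {}}   # facts gathered, but no default route

def fallback_ip(hv):
    # Same shape as the Jinja2 expression:
    # hostvars[item].get('ansible_default_ipv4', {'address': '127.0.0.1'})['address']
    return hv.get("ansible_default_ipv4", {"address": "127.0.0.1"})["address"]

print(fallback_ip(hostvars_missing))  # '127.0.0.1' -- the fallback works here

try:
    fallback_ip(hostvars_empty)
except KeyError as e:
    # This is the "'dict object' has no attribute 'address'" case:
    # .get() returned the real-but-empty dict, then ['address'] failed.
    print("KeyError:", e)
```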

After some testing, it seems Ansible looks at the source address the host would use to reach 8.8.8.8.
To reproduce an empty ansible_default_ipv4, any of the following works:

# add a dummy interface that owns 8.8.8.8
ip link add dummy0 type dummy
ip addr add 8.8.8.8/32 dev dummy0

# or blackhole the route to 8.8.8.8
ip r add blackhole 8.8.8.8

# or simply have no default route ;)

Then gather facts:

$ ansible -i <ip>, -m setup <ip> | grep ansible_default_ipv4
        "ansible_default_ipv4": {},

As suggested in https://medium.com/opsops/ansible-default-ipv4-is-not-what-you-think-edb8ab154b10, a workaround for ansible_default_ipv4 could be:

ansible_default_ipv4.address|default(ansible_all_ipv4_addresses[0])

If ansible_default_ipv4 is always present, another (untested) fix is:

hostvars[item]['ansible_default_ipv4'].get('address', '127.0.0.1')
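The two proposed fixes behave differently when ansible_default_ipv4 is empty; a plain-Python sketch (the facts dict and the 10.2.0.3 address are made up for illustration):

```python
# Facts as Ansible might report them on a host with no default route.
facts = {
    "ansible_default_ipv4": {},
    "ansible_all_ipv4_addresses": ["10.2.0.3"],
}

# Fix 1 (the Medium workaround): fall back to the first address Ansible
# found on any interface when 'address' is missing.
addr1 = (facts["ansible_default_ipv4"].get("address")
         or facts["ansible_all_ipv4_addresses"][0])

# Fix 2: .get() on the *inner* dict, so a present-but-empty
# ansible_default_ipv4 no longer breaks, and 127.0.0.1 is the fallback.
addr2 = facts["ansible_default_ipv4"].get("address", "127.0.0.1")

print(addr1)  # 10.2.0.3
print(addr2)  # 127.0.0.1
```

The difference matters: fix 1 still yields a routable address for /etc/hosts, while fix 2 silently maps the host to loopback.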

@champtar
Contributor

And I just read #5394, which explains exactly what I wrote above...

@cferrera

Any update on this?

I have the same issue.

  • Azure
  • CentOS 7
  • ansible 2.9.6
  • KubeSpray 2.12.3
  • Weave
  • 3 masters, 3 nodes

@champtar
Contributor

#5394 is now merged. @gashev, is this fixed for you?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 12, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 11, 2020
@bmelbourne
Contributor

bmelbourne commented Aug 23, 2020

@gashev @cferrera
PR #5394 may have fixed this issue and will be included in v2.12.4+ releases.

Can you confirm?

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
