
Vagrant - playbook fails at TASK [download : download_file | Copy file back to ansible host file cache] #5990

Closed
der-ali opened this issue Apr 20, 2020 · 6 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


der-ali commented Apr 20, 2020

Environment:
Vagrant Version: 2.0.3
vagrant config:

$instance_name_prefix = "kub"
$vm_cpus = 1
$num_instances = 4
$os = "centos"
$subnet = "10.0.20"
$network_plugin = "flannel"
$inventory = "inventory/my_lab"
$etcd_instances = 1
$kube_master_instances = 1
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Ubuntu 18.04.4 LTS
  • Version of Ansible (ansible --version):
    ansible 2.9.6
  • Version of Python (python --version):
    Python 3.7.5

Kubespray version (commit) (git rev-parse --short HEAD):
Master branch 6e29a47

Network plugin used:
flannel

Output of ansible run:

TASK [download : download_file | Copy file back to ansible host file cache] ****
fatal: [kub-1]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -i /home/ali/.vagrant.d/insecure_private_key -o Port=2222 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L vagrant@127.0.0.1:/home/ali/kubespray_cache/kubeadm-v1.17.5-amd64 /home/ali/kubespray_cache/kubeadm-v1.17.5-amd64", "msg": "Warning: Permanently added '[127.0.0.1]:2222' (ED25519) to the list of known hosts.\r\nReceived disconnect from 127.0.0.1 port 2222:2: Too many authentication failures\r\nDisconnected from 127.0.0.1 port 2222\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]\n", "rc": 255}

I tried adding ansible_ssh_extra_args="-o IdentitiesOnly=yes" to the ansible inventory file, but after rerunning the playbook it failed at
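For reference, the effect of that option can be checked locally with ssh -G, which prints the effective client configuration without connecting. With IdentitiesOnly=yes the client offers only the explicitly supplied key instead of every identity held by the agent; offering too many keys is what trips the server's MaxAuthTries and produces "Too many authentication failures". A minimal sketch, using the key path and port from the log above:

```shell
# Print the effective ssh client config for the Vagrant-forwarded port.
# With IdentitiesOnly=yes, ssh offers only the key passed via -i rather
# than every key in the agent. No connection is made by -G.
ssh -G -p 2222 -o IdentitiesOnly=yes \
    -i "$HOME/.vagrant.d/insecure_private_key" vagrant@127.0.0.1 \
  | grep -E '^(identitiesonly|port|user) '
```

This only inspects the client-side configuration; it does not prove the guest will accept the key.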

TASK [kubernetes/preinstall : check if /etc/dhcp/dhclient.conf exists]
task path: /home/ali/Work/Arconsis/Exp/kubespray/roles/kubernetes/preinstall/tasks/0040-set_facts.yml:92
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
  File "/tmp/ansible_stat_payload_dAQdVo/ansible_stat_payload.zip/ansible/modules/files/stat.py", line 464, in main
fatal: [kub-1]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "checksum_algorithm": "sha1",
            "follow": false,
            "get_attributes": true,
            "get_checksum": true,
            "get_md5": false,
            "get_mime": true,
            "path": "/etc/dhcp/dhclient.conf"
        }
    },
    "msg": "Permission denied"
}
<127.0.0.1> (1, b'\n{"msg": "Permission denied", "failed": true, "exception": "WARNING: The below traceback may *not* be related to the actual failure.\\n  File \\"/tmp/ansible_stat_payload_AM27TS/ansible_stat_payload.zip/ansible/modules/files/stat.py\\", line 464, in main\\n", "invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/etc/dhcp/dhclient.conf", "get_md5": false, "get_mime": true, "get_attributes": true}}}\n', b'')
<127.0.0.1> Failed to connect to the host via ssh:

It also fails on the same task on the branch tag v2.12.5.

@der-ali der-ali added the kind/bug Categorizes issue or PR as related to a bug. label Apr 20, 2020

der-ali commented Apr 20, 2020

Adding $HOME/.vagrant.d/insecure_private_key to the ssh-agent seems to solve the "Too many authentication failures" issue.
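A minimal sketch of that workaround (the key path is Vagrant's default; adjust it if your VAGRANT_HOME differs):

```shell
# Start an agent if one is not already running, then load Vagrant's
# insecure key so the rsync/synchronize task can authenticate.
[ -n "$SSH_AUTH_SOCK" ] || eval "$(ssh-agent -s)" > /dev/null
ssh-add "$HOME/.vagrant.d/insecure_private_key"
ssh-add -l   # confirm the key is now listed by the agent
```

Note that the agent environment variables only apply to the shell where ssh-agent was started, so run ansible-playbook from that same shell.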


Miouge1 commented Apr 20, 2020

The last CI job for Vagrant is green, so I don't think this is something coming from the Vagrantfile.

CI uses Vagrant v2.2.7, so I recommend updating your Vagrant version.

@Miouge1 Miouge1 added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Apr 20, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 19, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 18, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
