Multus: error in invoke Delegate add - "flannel": open /run/flannel/subnet.env #49
I also observed the following things:
Thanks for pinging me -- Kural's right on the money, it sure looks like you don't have a Flannel daemonset running there. I have a reference Flannel daemonset that uses Multus+Flannel that I use often. It's a Jinja2 template that I use as part of a set of Ansible playbooks, so you'll need to parse it somewhat manually; notably, anything that's templated will need values filled in by hand. This is the one: https://github.com/redhat-nfvpe/kube-ansible/blob/master/roles/kube-template-cni/templates/multus.yaml.j2 (edit: I realize you're using my gist from last year, but you might want to try an updated one based on this linked sample, as it does get some regular use)
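For anyone hitting the same error, a quick way to confirm whether a Flannel daemonset is actually running and has written its subnet file is sketched below. The namespace and names are assumptions (Flannel is commonly deployed as kube-flannel-ds in kube-system), so adjust them to your own deployment.

```sh
# Check whether a Flannel daemonset exists and its pods are running
# (name/namespace below are common defaults, not necessarily yours).
kubectl get daemonset --all-namespaces | grep -i flannel
kubectl get pods -n kube-system -o wide | grep -i flannel

# The flannel CNI plugin fails with "open /run/flannel/subnet.env" when
# flanneld has not written this file on the node. Check it on the worker:
ls -l /run/flannel/subnet.env
cat /run/flannel/subnet.env
```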
Hello everyone, thanks for providing input. I will check my env and get back to you. However, I would like to let you know that I am able to configure 2 network cards using the following config file, which I took from the user YYGCui: {
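The config file itself is truncated above. Purely as a hedged illustration of the general shape of such a file, a legacy-style Multus CNI conf that delegates to flannel plus a macvlan interface often looks roughly like the sketch below; every name, subnet, and the master interface here is an assumption, not the poster's actual file.

```sh
# Hypothetical /etc/cni/net.d/05-multus.conf sketch (values are assumptions):
# flannel provides the cluster network, macvlan adds a second interface
# attached to a physical NIC (ens4 here) with a host-local address range.
cat <<'EOF' > /etc/cni/net.d/05-multus.conf
{
  "name": "multus-demo",
  "type": "multus",
  "delegates": [
    {
      "type": "macvlan",
      "master": "ens4",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.16.0/24",
        "rangeStart": "192.168.16.150",
        "rangeEnd": "192.168.16.200",
        "gateway": "192.168.16.1"
      }
    },
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}
EOF
```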
I believe that looks like it's going to use flannel, which will require a daemonset being up, or at least I assume so. When you get back in touch, can you post the whole process you used? E.g., did you need to both spin up the YAML from my reference with flannel+multus and then configure that, or is that packed inside the YAML? Thanks
Hi, I appreciate your help. Using your new conf file, I am able to configure 2 network cards in the container using the Multus plugin, though I have a few questions and am going to test the same:
Hi, though I am able to see 2 network cards in the container, I am unable to ping the MACVLAN-based IP from other pods or hosts. If I check the following on the worker node, I do not see any bridge allocated to the MACVLAN:
I gave the reference of ens4, but it seems that flannel and macvlan are on the same bridge, i.e. cni. I used the following file:
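On the "no bridge allocated to MACVLAN" observation: a macvlan interface attaches directly to its parent NIC (ens4 in this case) rather than to a Linux bridge, so it will never appear in bridge listings even when it is working. A rough way to confirm how the pod's second interface is wired (the pod name is a placeholder, and ip/brctl need to be available) is:

```sh
# Inside the pod (pod name is a placeholder): the second interface should show
# "macvlan mode bridge" in its details, with the node NIC (ens4 here) as its
# parent; a macvlan never attaches to a Linux bridge.
kubectl exec -it <pod-name> -- ip -d addr show

# On the worker node: brctl only lists real bridges (flannel's cni0, docker0,
# ...), so the macvlan not appearing here is expected and not itself a fault.
brctl show
```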
@psaini79 -- do you have any further information on this one? If not, could you please close that out?
I looked at it some time back and found the issue was in my env. The macvlan bridge did not work on a few VMs because some policies were enforced at the ARP level, which didn't allow me to pass packets outside the VM. Since the issue is in my env, I am closing it and will reopen if I test on some new machine. I will add my feedback then.
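For anyone else debugging the same symptom, one way to see whether the hypervisor or cloud network is filtering the macvlan's traffic (as described above) is to watch ARP and ICMP on the parent interface while pinging from another host; the interface name here is an assumption:

```sh
# On the worker node, watch ARP/ICMP on the macvlan's parent NIC (ens4 is
# assumed). If ARP requests for the pod's macvlan IP arrive but replies never
# reach the remote host, MAC/ARP anti-spoofing on the VM network is a likely
# cause, since the macvlan uses a MAC address the hypervisor doesn't know.
tcpdump -ni ens4 arp or icmp
```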
I am trying to configure 2 network cards in a pod for testing purposes. I am using the Multus plugin and executed the following steps:
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1+2.1.5.el7"}
I have no idea how to fix it. Also, I have the following physical network cards on my master and worker nodes:
ens3 : 10.0.20.xx
ens4 : 192.168.16.xx
Do I need to change the IP range in multus.yaml?
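Regarding the error in the title: the flannel CNI plugin reads /run/flannel/subnet.env, which the flanneld daemon writes on each node after it starts, so the file is missing whenever flanneld isn't running there. Its contents normally look roughly like the sketch below; the networks shown are assumptions, not this cluster's actual values.

```sh
# Written by flanneld on each node; the flannel CNI plugin fails with
# "open /run/flannel/subnet.env: no such file or directory" if it's absent.
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=10.244.0.0/16
# FLANNEL_SUBNET=10.244.1.1/24
# FLANNEL_MTU=1450
# FLANNEL_IPMASQ=true
```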