
Allow snabbnfv to pass all traffic on an Ethernet port to a single virtual machine #604

Closed
wants to merge 5 commits

Conversation

mwiget
Contributor

mwiget commented Aug 29, 2015

The port configuration for snabbnfv requires a valid mac_address, which is used to dispatch incoming packets to the correct virtual queue. I need all traffic on a given Ethernet port to reach a single virtual machine, e.g. a virtual router, in order to support all kinds of protocols (PPPoE, MPLS, etc.).

This patch checks whether a mac_address is set in the port definition of the config file and sets vmdq to false if it is omitted, and to true (matching the original behavior) if a mac_address is set.
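In rough terms the check amounts to this (a simplified sketch, not the literal diff; the surrounding port-parsing code is assumed):

-- simplified sketch: enable VMDq only when the port carries a mac_address
local function wants_vmdq (port)
  -- `port` is assumed to be one entry of the table returned by the config file
  return port.mac_address ~= nil
end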

The following port definition will accept all packets from the wire and pass them to the virtual machine:

# cat xe0.cfg
return {
  {
    port_id = "xe0",
  }
}

Does this make sense beyond the very specific application I have in mind, which basically puts an Ethernet port into trunk mode towards a single virtual machine?

Ideally I'd like to cover not only the use case of treating a port as a trunk interface, but also use cases where multiple virtual machines are served by using the VLAN tag alone instead of mac_address plus vlan_id.
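For example, a hypothetical VLAN-only config (not something this patch implements, and assuming a vlan field in the port definition) could look like:

# cat vlan-trunk.cfg
return {
  { port_id = "vm1", vlan = 100 },
  { port_id = "vm2", vlan = 200 },
}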

@lukego
Member

lukego commented Aug 29, 2015

This seems like a valuable feature to me. Likewise pure VLAN dispatch.

I wonder whether we should restrict snabbnfv config to only having one port in this setup? If we support multiple ports with a mix of VMDq/non-VMDq then we would have to document how this is expected to behave and keep that consistent across current/future NICs that we support. Could be simpler at least immediately to report an error if this feature is used in a config file that contains multiple ports.
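Something like this would probably be enough (sketch only, assuming ports is the table returned by the config file):

-- sketch: reject configs that mix a promiscuous (no mac_address) port with others
local promiscuous = false
for _, port in ipairs(ports) do
  if port.mac_address == nil then promiscuous = true end
end
assert(not (promiscuous and #ports > 1),
       "a port without mac_address must be the only port in the config")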

Side-thought: Looking ahead I think it is likely that we will end up replacing VMDq with a software solution instead. The hardware offloads always seem to become limiting and fragile at a certain point. Based on experiments with #603 I am even becoming hopeful that we could write a software traffic dispatcher for 100G Ethernet ports. However, VMDq is the solution we have today and it does work fine up to this point.

@javierguerragiraldez
Contributor

As a barely-related side-thought: one thing I'd like to do in the near future is the split-app driver we've talked about some time ago: one app to configure the NIC, and one or more extra apps pushing packets in and out of the chip's rings.

This would allow different CPU cores to handle different packet streams, not only for VMDq, but also for RSS. The nice thing is that it wouldn't require any communication between those processes, removing many obvious scalability limits. 100G could be doable with current JIT schemes.
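Very roughly, and with completely made-up app names (NICControl/NICQueue don't exist), each worker would build its own small app network around the ring pair it drives, while one app owns chip initialization:

-- purely illustrative sketch of the split-app idea
local config = require("core.config")
local c = config.new()
config.app(c, "ctrl",   NICControl, { pciaddr = "0000:01:00.0" })            -- initializes/configures the NIC
config.app(c, "queue0", NICQueue,   { pciaddr = "0000:01:00.0", queue = 0 }) -- drives one ring pair
config.app(c, "queue1", NICQueue,   { pciaddr = "0000:01:00.0", queue = 1 }) -- could run on another core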

@lukego
Member

lukego commented Aug 30, 2015

@javierguerragiraldez I am experimenting with the "split driver" approach on #561 (Intel I350/1G) now. I will resubmit that when the design is more mature. Meanwhile I added some comments this morning to explain the design a bit. Basic concept is to move decisions from happening automatically inside the app (e.g. assignment of TX/RX queues) to being explicitly configured in the app network. So when you create an intel1g app you tell it which TX queue to use (if any) and which RX queue to use (if any) and whether to initialize the NIC (or assume that somebody else does this). It seems promising to me and it should make basic I/O orthogonal to queue setup (VMDq / RSS / etc) as you say. If this design makes sense we could even add 82599 support to that driver?
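Concretely, and with parameter names that are only illustrative of the idea rather than the final API, declaring an instance might look like:

-- illustrative only: queue and init choices made explicit in the app network
local config = require("core.config")
local Intel1g = require("apps.intel.intel1g").Intel1g  -- module path/name assumed
local c = config.new()
config.app(c, "nic", Intel1g, {
  pciaddr = "0000:02:00.0",
  rxq = 0,           -- RX queue this instance services (nil for none)
  txq = 0,           -- TX queue this instance services (nil for none)
  initialize = true, -- whether this instance resets/initializes the NIC
})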

The downside though of hardware offloads like VMDq and RSS is that they are not universal. Marcel mentions being interested in more exotic protocols and we can't really depend on commodity NICs to handle dispatching on PPPoE, MPLS, GTP, and so on. So I like the idea of making the hardware capabilities available, and for some applications they will be exactly the right thing, but I am anticipating that people will prefer well-optimized and flexible software dispatching in the future. Juho Snellman made a similar remark in a great recent presentation: Mobile TCP Optimization - lessons learned in production. The product that Juho is describing is actually an ancestor of Snabb Switch: I worked on that with Juho immediately before starting Snabb Switch and that was the first time we ditched vendor libraries and wrote our own drivers.

lukego added a commit to lukego/snabb that referenced this pull request Sep 14, 2015
Added the requirement that only one port is defined when promiscuous
mode is being used. This is simply because the best semantics for
other cases are not immediately clear to me and I want to avoid
adopting a dodgy behavior that we will break later.
@eugeneia
Member

eugeneia commented Oct 5, 2015

Closing because this was merged together with #618.

eugeneia closed this Oct 5, 2015