Getting Synology DSM to work with OpenVSwitch + Bond + Multi-VLANs + Jumbo Packet + Virtualization
If you are here, you probably share a similar adventure with me:
- Sank a fortune into the almighty Synology NAS;
- Let's LACP bond this baby, make maximum use of all that power;
- Ooops, I have a dedicated management VLAN, what do I do?
- Googled around a bit, found some instructions, created additional `ifcfg-bond0.(tag)` files;
- It works! Although a bit of hacking was needed, not bad.
- Hmm, there is some VM Manager / Docker add-on, seems quite interesting;
- Heck, why not? This thing has a Xeon inside; better make full use of the power I paid for;
- Planned and architected the VMs / containers; the whole new world became quite attractive!
- Install VM Manager / Docker, which turned on OpenVSwitch, and the world instantly darkens...
- Your existing bonded multi-VLAN setup is blown into oblivion;
- And following the old method of re-creating ifcfg files couldn't bring anything back.
- Double, triple Googling around, finally found another guide;
- Apparently Synology has never considered the scenario of OpenVSwitch + Bond + Multi-VLANs;
- As a result, `/etc/rc.network` has to be modified with additional logic;
- Following the new guide, you are able to establish multiple bonded VLAN interfaces. Success!
- But as you continue deploying your new architecture, you realize that the hack was incomplete, and you are bugged by the following problems:
- Except the first bond interface, all other interfaces are shown as "disconnected";
- As a result, you are not able to use other interfaces from GUI, such as when you install MailPlus server;
- Unable to use jumbo packet on any of the bond interfaces when you start a VM;
- As soon as you start any VM, all bond interfaces' MTU reset to 1500.
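For context, the pre-OpenVSwitch workaround boiled down to one hand-written ifcfg file per VLAN tag, roughly along these lines (a hypothetical sketch; the device name, addresses, and the exact keys DSM parses vary by DSM version, so compare against an existing ifcfg file on your box before copying anything):

```
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-bond0.10
# (VLAN 10 on top of bond0; names and addresses are placeholders)
DEVICE=bond0.10
BOOTPROTO=static
IPADDR=192.168.10.2
NETMASK=255.255.255.0
ONBOOT=yes
```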
This modification extends the existing guide on OpenVSwitch + Bond + Multi-VLANs and tries to cover all the corner cases, enabling a workable OpenVSwitch + Bond + Multi-VLANs + Jumbo Packet + Virtualization solution.
- Disconnected interface problem:
- Through trial and error, I found that having a non-empty `SLAVE_LIST` containing interfaces in the up state makes DSM think the bond interface is connected;
- But of course I noticed some logic in `/etc/rc.network` that acts upon the value of `SLAVE_LIST`;
- So more logic is injected into `/etc/rc.network` to avoid triggering operations in the wrong scenarios.
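In other words, on top of the usual keys, each extra bond interface's ifcfg file carries a non-empty `SLAVE_LIST`, roughly like this (a hypothetical fragment; the file name, surrounding keys, and list separator are illustrative guesses, not verified values):

```
# Hypothetical ifcfg fragment for a secondary bonded VLAN interface
DEVICE=bond0.20
BOOTPROTO=static
IPADDR=192.168.20.2
NETMASK=255.255.255.0
ONBOOT=yes
# Non-empty list of up interfaces; makes DSM report "connected"
SLAVE_LIST="eth0 eth1"
```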
- Jumbo Packet problem:
- OpenVSwitch automatically reduces the MTU of all interfaces on a bridge to the minimum MTU across all attached interfaces;
- When you start a VM, a tap interface is attached to the same bridge that all your other network interfaces attach to, and VM Manager seems to only create taps with an MTU of 1500. As a result, all interfaces are forced to downgrade to non-jumbo mode;
- Of course you could manually raise each tap interface's MTU, and get the larger MTU back after adjusting all taps; but as soon as you create / start another VM, the problem is back again.
- To permanently solve the problem, you cannot let the VM manager attach taps to the same bridge.
- Luckily, OpenVSwitch already has a solution applicable to this scenario;
- Create another bridge, which is not backed by any physical interface, dedicated for VMs;
- Create a pair of peering "patch" interfaces, with each end on your existing bridge (with physical bond interface) and VM bridge;
- Let the VM manager automatically attach taps to the VM bridge instead;
- Note that, if you want your VMs to use jumbo packets, you still need to manually adjust the MTU of the tap interfaces. But at least your physical NAS will always respect your MTU configuration now.
- Again, more logic is injected into `/etc/rc.network` to gain the ability to create patch interfaces.
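The patch automates this inside `/etc/rc.network`, but the underlying OpenVSwitch operations can be sketched with plain `ovs-vsctl` commands (the bridge, port, and tap names below are illustrative, not the ones the patch actually uses):

```shell
# Create a VM-only bridge with no physical uplink (name illustrative)
ovs-vsctl add-br br-vm

# Link the physical bridge (assumed here to be ovs_bond0, the bridge
# carrying the bonded NICs) to the VM bridge with a patch-port pair
ovs-vsctl add-port ovs_bond0 patch-to-vm -- \
    set interface patch-to-vm type=patch options:peer=patch-to-phy
ovs-vsctl add-port br-vm patch-to-phy -- \
    set interface patch-to-phy type=patch options:peer=patch-to-vm

# After a VM starts, raise its tap MTU by hand if the guest should
# use jumbo frames (the tap name varies per VM)
ip link set tap0 mtu 9000
```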
- Refactored the code related to MTU and the DHCP client;
- Cleaned up some mixed use of local and non-local variables;
- Fixed a typo.
WARNING: Messing with the network RC script can prevent DSM from bringing up network interfaces, and hence "brick" the device. It is best to have a working alternative means of accessing the DSM command-line console (for example, have an adapter and figure out the pins to hook up to the serial console) before performing any of the operations below.
- Fetch the patch file from the directory matching your current DSM version, e.g. `6.2.2-24922`;
- Apply the patch file:

  ```shell
  cp /etc/rc.network /volume1/backup/
  patch /etc/rc.network rc.network.patch
  ```
- If you have already modified yours, don't worry, you can get back the original copy from `/etc.defaults/rc.network`;
- Refer to sample configurations in the examples directory, adjust for your needs.
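Once patched and rebooted, a quick sanity check from the command line might look like this (interface and bridge names are illustrative, so substitute your own):

```shell
# MTU should stay at your configured jumbo value even with VMs running
ip link show ovs_bond0

# Bridges, patch ports, and their peers should appear in the OVS layout
ovs-vsctl show
```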