
DS1815+ | Fast Reads, Slow Writes #10

Open
irishj opened this issue Mar 19, 2020 · 22 comments
Labels
performance Performance issue

Comments

@irishj

irishj commented Mar 19, 2020

Description of the problem

I'm trying this driver on a DS1815+ (Avoton). The driver works fine (it installs, a new connection is shown, and an IP is retrieved via DHCP), and I can access the host and ping it without issue.

Read speeds are great: 270 MB/s when copying a file via SMB from the DiskStation to another network host.

The issue I'm having is that writes are really slow: 60-72 MB/s max. Data is being written to an SHR volume (btrfs). Writes over the onboard NICs max out at 113 MB/s. I tried from two different network clients, both running 10G Ethernet.

Description of your products

DS1815+
Linux DS1815 3.10.105 #24922 SMP Wed Jul 3 16:37:24 CST 2019 x86_64 GNU/Linux synology_avoton_1815+
DSM 6.2.2-24922 Update 4
QNAP QNA-UC5G1T

Description of your environment

Connected to an unmanaged switch
- PC: QLogic BCM57810
- Switch: NetGear XS508M
- Cable: CAT7

Output of dmesg command

[73302.925741] aqc111 3-3:1.0 eth4: register 'aqc111' at usb-0000:04:00.0-3, QNAP QNA-UC5G1T USB to 5GbE Adapter, 24:5e:be:42:e1:8b
[73302.938767] usbcore: registered new interface driver aqc111
[73303.364455] IPv6: ADDRCONF(NETDEV_UP): eth4: link is not ready
[73310.877090] aqc111 3-3:1.0 eth4: Link Speed 5000, USB 3
[73310.891136] IPv6: ADDRCONF(NETDEV_CHANGE): eth4: link becomes ready
[73313.351874] init: winbindd main process (18161) killed by TERM signal
[73313.799412] init: nmbd main process (26033) killed by TERM signal
[73313.841300] init: iscsi_pluginserverd main process (25673) killed by TERM signal
[73313.859656] init: iscsi_pluginengined main process (25651) killed by TERM signal

Output of lsusb command

|__usb1          1d6b:0002:0310 09  2.00  480MBit/s 0mA 1IF  (ehci_hcd 0000:00:16.0) hub
  |__1-1         8087:07db:0002 09  2.00  480MBit/s 0mA 1IF  ( ffffffd1ffffffb2ffffffdbffffffad) hub
    |__1-1.1     f400:f400:0100 00  2.00  480MBit/s 200mA 1IF  (Synology DiskStation 6500653E076DAE11)
|__usb2          1d6b:0002:0310 09  2.00  480MBit/s 0mA 1IF  (Linux 3.10.105 etxhci_hcd-170202 Etron xHCI Host Controller 0000:04:00.0) hub
|__usb3          1d6b:0003:0310 09  3.00 5000MBit/s 0mA 1IF  (Linux 3.10.105 etxhci_hcd-170202 Etron xHCI Host Controller 0000:04:00.0) hub
  |__3-3         1c04:0015:0101 00  3.20 5000MBit/s 896mA 1IF  (QNAP QNAP QNA-UC5G1T USB to 5GbE Adapter 99I09132)

Output of ifconfig -a command

bond0     Link encap:Ethernet  HWaddr 00:11:32:XX:XX:XX 
          inet addr:192.168.0.16  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:216343250 errors:0 dropped:85 overruns:85 frame:0
          TX packets:384503303 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:98030316905 (91.2 GiB)  TX bytes:550134835990 (512.3 GiB)

bond1     Link encap:Ethernet  HWaddr 00:11:32:XX:XX:XX  
          inet addr:192.168.0.17  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:210932 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4748 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:25428193 (24.2 MiB)  TX bytes:435276 (425.0 KiB)

docker0   Link encap:Ethernet  HWaddr 02:42:C4:15:0D:92  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:c4ff:fe15:d92/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:82657022 errors:0 dropped:0 overruns:0 frame:0
          TX packets:45401025 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:119131995899 (110.9 GiB)  TX bytes:13470339171 (12.5 GiB)

docker020 Link encap:Ethernet  HWaddr 5E:6D:99:73:71:E0  
          inet6 addr: fe80::5c6d:99ff:fe73:71e0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:249 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:51380 (50.1 KiB)

docker36a Link encap:Ethernet  HWaddr 6E:0A:55:1F:0D:9E  
          inet6 addr: fe80::6c0a:55ff:fe1f:d9e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5319 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7567 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:304786 (297.6 KiB)  TX bytes:488410 (476.9 KiB)

docker5f9 Link encap:Ethernet  HWaddr BE:09:91:DE:93:69  
          inet6 addr: fe80::bc09:91ff:fede:9369/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:214765 errors:0 dropped:0 overruns:0 frame:0
          TX packets:243470 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:33839198 (32.2 MiB)  TX bytes:50721912 (48.3 MiB)

docker898 Link encap:Ethernet  HWaddr 22:0E:A2:FC:78:CF  
          inet6 addr: fe80::200e:a2ff:fefc:78cf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:188328 errors:0 dropped:0 overruns:0 frame:0
          TX packets:189303 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:27542844 (26.2 MiB)  TX bytes:55997208 (53.4 MiB)

docker9ae Link encap:Ethernet  HWaddr 7A:4D:4C:94:BF:26  
          inet6 addr: fe80::784d:4cff:fe94:bf26/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5417 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7902 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2193700 (2.0 MiB)  TX bytes:543419 (530.6 KiB)

dockera67 Link encap:Ethernet  HWaddr 9E:06:C1:F4:52:BC  
          inet6 addr: fe80::9c06:c1ff:fef4:52bc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:159249 errors:0 dropped:0 overruns:0 frame:0
          TX packets:234762 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:14930495 (14.2 MiB)  TX bytes:529522579 (504.9 MiB)

dockerc49 Link encap:Ethernet  HWaddr DE:4C:AD:04:71:47  
          inet6 addr: fe80::dc4c:adff:fe04:7147/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:32696359 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19130962 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:46605581747 (43.4 GiB)  TX bytes:4887145441 (4.5 GiB)

dockered3 Link encap:Ethernet  HWaddr 66:53:5D:4E:EE:52  
          inet6 addr: fe80::6453:5dff:fe4e:ee52/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2271931 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1670187 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2738586589 (2.5 GiB)  TX bytes:2050102685 (1.9 GiB)

eth0      Link encap:Ethernet  HWaddr 00:11:32:57:EA:85  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:28289049 errors:0 dropped:0 overruns:0 frame:0
          TX packets:278859999 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:37414663751 (34.8 GiB)  TX bytes:421379136339 (392.4 GiB)

eth1      Link encap:Ethernet  HWaddr 00:11:32:57:EA:85  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:188054200 errors:0 dropped:85 overruns:85 frame:0
          TX packets:105643298 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:60615653088 (56.4 GiB)  TX bytes:128755690567 (119.9 GiB)

eth2      Link encap:Ethernet  HWaddr 00:11:32:57:EA:87  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:132191 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4748 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:20037288 (19.1 MiB)  TX bytes:435276 (425.0 KiB)

eth3      Link encap:Ethernet  HWaddr 00:11:32:57:EA:87  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:78741 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:5390905 (5.1 MiB)  TX bytes:0 (0.0 B)

eth4      Link encap:Ethernet  HWaddr 24:5E:BE:XX:XX:XX  
          inet addr:192.168.0.42  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::265e:beff:fe42:e18b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:342 errors:1 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:51524 (50.3 KiB)  TX bytes:32952 (32.1 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:676820 errors:0 dropped:0 overruns:0 frame:0
          TX packets:676820 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:117180797 (111.7 MiB)  TX bytes:117180797 (111.7 MiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Any suggestions on possible causes would be appreciated. I'm available if you need any further information or testing.

Thanks !

@irishj irishj changed the title from "DS1815+ | Driver Becomes Unresponsive During iPerf3 or File Transfer" to "DS1815+ | Fast Reads, Slow Writes" Mar 19, 2020
@bb-qq
Owner

bb-qq commented Mar 21, 2020

eth4 Link encap:Ethernet HWaddr 24:5E:BE:XX:XX:XX
inet addr:192.168.0.42 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::265e:beff:fe42:e18b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Please try enabling jumbo frames. Jumbo frames may increase performance, especially with low-performance CPUs.

Also, have you encountered any network disconnections on the QNA-UC5G1T during the test? Some users have reported this disconnection issue with the DS1815+.

@irishj
Author

irishj commented Mar 21, 2020

I'd prefer not to enable JF, as I'd need to enable it on all my network devices.
Speed is fine on Windows 10: 370 MB/s max transfer from another host via SMB.
I tried the same thing on a DS412+ and have the same speed issue as on the DS1815+ (the transfer speeds are identical).

Disconnection issues: yes, I've experienced them. It works fine for iperf testing, even with multiple streams, but put some SMB traffic through and the transfer will freeze, then the connection is lost. The driver is still running after this occurs, and if you stop and restart the driver, the connection comes back.

@bb-qq
Owner

bb-qq commented Mar 22, 2020

I'd prefer not to enable JF, as I'd need to enable it on all my network devices.

Indeed, switching to jumbo frames is painful when there are many devices, but it can help isolate whether the problem is a CPU bottleneck or something else.

Generally, creating transactions with USB devices is heavy work for the CPU, and using jumbo frames helps reduce the number of transactions. In my environment (DS918+), enabling jumbo frames increased throughput and lowered CPU load.

Furthermore, if the disconnection issue did not occur during the iperf test, the issue might also be caused by CPU load, so I would appreciate it if you could enable jumbo frames temporarily among a limited set of devices to test.

Lastly, changing the SMB protocol level may affect throughput, because the encryption mode is determined by this configuration.
https://www.synology.com/en-us/knowledgebase/DSM/help/DSM/AdminCenter/file_winmacnfs_win
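
For a quick test without changing the whole LAN, the MTU can also be raised temporarily from an SSH session. This is only a rough sketch, assuming the adapter is eth4 as in the ifconfig output above and that the ip tool is available on DSM (ifconfig eth4 mtu 9000 should work as a fallback); the change is lost on reboot or when the driver package is restarted:

# on the DiskStation, over SSH (as root)
ip link set dev eth4 mtu 9000     # raise the MTU on the USB adapter only
ip link show dev eth4             # confirm the new MTU took effect
ip link set dev eth4 mtu 1500     # revert after testing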

@scyto

scyto commented Mar 24, 2020

Not sure if this is related: I just connected one of the TRENDnet adapters (using a 10 Gbps C-to-A adapter) to my DS1815+ and used the HTML5 speed test Docker container. This does the full 1 Gbps in both directions on the regular LAN.

With the adapter I get 2 Gbps down (which I consider good given the Avoton platform); however, up is only 500 Mbps, which is odd, as this isn't writing to disk, it's purely a memory operation.

What could cause this disparity?

@scyto

scyto commented Mar 24, 2020

Jumbo frames fixed this for me; I needed to set it on:

  1. my PC with one of these USB adapters
  2. the interface in DSM that also had one of these USB adapters connected to it
  3. every port on each of the 3 switches the traffic went through

What I don't understand is why this makes a difference on an HTML speed test!?
[screenshot]
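
One quick way to verify that jumbo frames are working end to end is a do-not-fragment ping. A sketch from the Linux/DSM side, assuming an iputils-style ping and using an example client address (8972 = 9000 minus 28 bytes of IP and ICMP headers):

ping -M do -s 8972 -c 3 192.168.0.20
# replies mean every hop passes 9000-byte frames;
# "Frag needed" errors or 100% loss mean something in the path is still at MTU 1500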

@bb-qq
Owner

bb-qq commented Mar 28, 2020

This difference is probably caused by the reason in my past comment, plus some other conditions (e.g., the write buffer size of the Ethernet adapter, the overhead of creating USB transactions, etc.):

Generally, creating transactions with USB devices is heavy work for the CPU, and using jumbo frames helps reduce the number of transactions. In my environment (DS918+), enabling jumbo frames increased throughput and lowered CPU load.
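
As a rough illustration of the transaction count (assuming roughly 2 Gbit/s of sustained traffic): at a 1500-byte MTU that is about 2,000,000,000 / (1500 * 8) ≈ 167,000 frames per second, while at a 9000-byte MTU it is about 2,000,000,000 / (9000 * 8) ≈ 28,000 frames per second, so roughly 6x fewer packets (and USB transfers) for the same throughput.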

@brimur

brimur commented Mar 31, 2020

Hi guys, I'm also using this driver on a DS1815+. Thanks very much @bb-qq for the driver. I thought a couple of the QNAP devices (QNA-UC5G1T) might breathe new life into it, since it has no PCIe slot. Anyway, when I connect the two dongles to Windows 10 machines I actually see 400 MB/s. When I connect one dongle to my DS1815+ and the other to one of the Win 10 machines and copy something from the NAS to the Win 10 machine, I am only seeing 45-50 MB/s. If I go over my LAN to the NAS I get 115 MB/s. I also enabled jumbo frames (9K) on both the DS1815+ and the Windows 10 machine, and the most I see is about 60 MB/s.

To be honest, that is not even the worst part for me. I got the QNA dongles to increase the bandwidth between my DS1815+ and my ESXi hosts for NFS datastores. I can connect to the DS1815+ from the ESXi host for about a minute before the DS1815+ shuts down the connection. The lights even go off on the dongle connected to the DS1815+, and DSM -> Network says the cable has been disconnected. The only way to fix this is to go to DSM > Packages and stop/run your package again. Is this the only way to reset the connection?

To verify the QNAP devices were working, I connected the Windows 10 machine to the ESXi host via the QNAP devices and they worked fine, so I am not sure why that is happening. @bb-qq, are there logs on the DS1815+ that I could look at to see if there were errors? dmesg just tells me things are being killed...

[12543.401148] usb 2-2: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
[12670.525781] init: dhcp-client (eth4) main process (32653) killed by TERM signal
[12670.866393] init: winbindd main process (1823) killed by TERM signal
[12676.848887] aqc111 3-4:1.0 eth4: Link Speed 5000, USB 3
[12678.316922] init: dhcp-client (eth4) main process (7532) killed by TERM signal
[12678.589789] init: winbindd main process (7775) killed by TERM signal
[12681.254427] init: winbindd main process (9207) killed by TERM signal
[12686.887692] init: iscsi_pluginserverd main process ended, respawning
[12688.180064] iSCSI:iscsi_target.c:612:iscsit_del_np CORE[0] - Removed Network Portal: 169.254.135.247:3260 on iSCSI/TCP
[12688.193378] iSCSI:iscsi_target.c:520:iscsit_add_np CORE[0] - Added Network Portal: 192.168.137.95:3260 on iSCSI/TCP
[12688.259687] init: iscsi_pluginserverd main process (10880) killed by TERM signal
[12690.230024] init: iscsi_pluginserverd main process (11133) killed by TERM signal
[12690.248030] init: iscsi_pluginengined main process (11126) killed by TERM signal
[12690.838935] init: iscsi_pluginserverd main process (11351) killed by TERM signal
[12690.855612] init: iscsi_pluginengined main process (11336) killed by TERM signal
[12694.017491] init: iscsi_pluginserverd main process (11434) killed by TERM signal
[12694.040528] init: iscsi_pluginengined main process (11419) killed by TERM signal
[12695.976354] init: iscsi_pluginserverd main process (11787) killed by TERM signal
[12695.989116] init: iscsi_pluginengined main process (11775) killed by TERM signal
[12698.887614] usb 2-2: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
[12739.856374] init: dhcp-client (eth4) main process (8621) killed by TERM signal
[12740.430920] init: winbindd main process (11356) killed by TERM signal
[12750.050630] iSCSI:iscsi_target.c:612:iscsit_del_np CORE[0] - Removed Network Portal: 192.168.137.95:3260 on iSCSI/TCP
[12750.062860] iSCSI:iscsi_target.c:612:iscsit_del_np CORE[0] - Removed Network Portal: [fe80::265e:beff:fe4d:a71e]:3260 on iSCSI/TCP
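
For reference, here is what I have been checking over SSH so far, plus how the package can be restarted without the DSM UI. This is only a sketch: the package name below is a placeholder, and I am assuming the usual synopkg list/stop/start subcommands apply to this driver package.

# kernel and system logs related to the adapter
dmesg | grep -iE 'aqc111|eth4'
grep eth4 /var/log/messages | tail -n 50
# per-interface error counters
cat /sys/class/net/eth4/statistics/rx_errors /sys/class/net/eth4/statistics/tx_errors
# restart the driver package from the shell (use the name reported by synopkg list)
synopkg stop <package-name> && synopkg start <package-name>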


@brimur

brimur commented Mar 31, 2020

Just wanted to add some iperf3 stats. When the DS1815+ is the server I get weird results; when the DS1815+ is the client, things seem better. Below are the results from the point of view of my Windows 10 machine. It was the server first, and then it was the client connecting to the DS1815+ (192.168.22.1).

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.22.1, port 51882
[  5] local 192.168.22.20 port 5201 connected to 192.168.22.1 port 51883
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   234 MBytes  1.96 Gbits/sec
[  5]   1.00-2.00   sec   242 MBytes  2.03 Gbits/sec
[  5]   2.00-3.00   sec   244 MBytes  2.05 Gbits/sec
[  5]   3.00-4.00   sec   245 MBytes  2.05 Gbits/sec
[  5]   4.00-5.00   sec   243 MBytes  2.04 Gbits/sec
[  5]   5.00-6.00   sec   244 MBytes  2.05 Gbits/sec
[  5]   6.00-7.00   sec   243 MBytes  2.04 Gbits/sec
[  5]   7.00-8.00   sec   243 MBytes  2.04 Gbits/sec
[  5]   8.00-9.00   sec   243 MBytes  2.04 Gbits/sec
[  5]   9.00-10.00  sec   245 MBytes  2.06 Gbits/sec
[  5]  10.00-10.04  sec  8.59 MBytes  2.01 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.04  sec  2.38 GBytes  2.04 Gbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
iperf3: interrupt - the server has terminated
PS C:\Users\Brimur\Downloads\iperf-3.1.3-win64\iperf-3.1.3-win64> .\iperf3.exe -c 192.168.22.1
Connecting to host 192.168.22.1, port 5201
[  4] local 192.168.22.20 port 51885 connected to 192.168.22.1 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   256 KBytes  2.09 Mbits/sec
[  4]   1.00-2.00   sec  0.00 Bytes  0.00 bits/sec
[  4]   2.00-3.00   sec  0.00 Bytes  0.00 bits/sec
[  4]   3.00-4.00   sec  0.00 Bytes  0.00 bits/sec
[  4]   4.00-5.00   sec   128 KBytes  1.05 Mbits/sec
[  4]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec
[  4]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec
[  4]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec
[  4]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec
[  4]   9.00-10.00  sec   128 KBytes  1.05 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   512 KBytes   419 Kbits/sec                  sender
[  4]   0.00-10.00  sec   268 KBytes   220 Kbits/sec                  receiver

iperf Done.

@scyto

scyto commented Mar 31, 2020

You should be using iperf 3.7 x64.

The 3.1.x Windows builds have known issues on high-speed networks (you will need to Google it; it is not on the regular iperf site for some reason).

Also, you did set jumbo frames / MTU 9000 on the two PCs, the Synology, and all intermediate switches, right?

PS: get 2 Gbps Win10 <> Win10 first (i.e. validate that the adapter, cables, and MTUs are all right).
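
A direction-by-direction iperf3 check is also worth doing, since the complaint here is fast reads but slow writes. A sketch, assuming iperf3 is installed on both ends and the NAS is at 192.168.22.1 as above:

# on the DiskStation (server)
iperf3 -s
# on the client: client -> NAS (exercises the NAS receive path, i.e. writes)
iperf3 -c 192.168.22.1 -t 30
# reversed: NAS -> client (exercises the NAS send path, i.e. reads)
iperf3 -c 192.168.22.1 -t 30 -R
# a few parallel streams help rule out single-stream TCP limits
iperf3 -c 192.168.22.1 -t 30 -P 4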

@brimur

brimur commented Apr 1, 2020

Thanks, I mentioned in my post above that I tested with 2 x Win10 machines; they had solid connections and I was able to transfer ~400 MB/s. When I connect one of those machines to my DS1815+ the issues start. The link on both sides says 5 Gbps and 9000 MTU, but speeds max out at 60 MB/s (115 MB/s over the normal 1 Gbps home network).

@brimur

brimur commented Apr 1, 2020

I installed the HTML5 speed test and ran some tests. The picture below compares my LAN connection to the DS1815+ with the AQC111 connection. The HTML5 speed tests show ~1 Gbps for the LAN connection, as expected, and 2-2.4 Gbps for the AQC111.
Moving to file transfers, I am seeing ~100 MB/s over the 1 Gbps LAN but only ~60-70 MB/s over the 2-2.5 Gbps link...

All of this is from the same machine, just using different routes.

[screenshot]

@scyto

scyto commented Apr 1, 2020

Was the file copy a single file or many smaller ones?
In the bond above, was it many small files? If it was, I am not sure whether SMB actually uses both sides of the LAG/bond these days.

@brimur

brimur commented Apr 2, 2020

It was a single 8 GB ISO file used for both file copies. The source Windows 10 machine only has a single 1 Gb NIC anyway, so it would not benefit from the bond on the NAS, but the bond wouldn't affect a single stream either. On the plus side, I installed NFS client services on the Windows 10 machine and was able to get the file copy speed over the AQC111 up to 120 MB/s, so NFS is at least working better than SMB/CIFS, but still far from 3 Gbps or even 2 Gbps.

@scyto

scyto commented Apr 2, 2020

I am pretty much out of ideas. Sounds like you are doing everything right. I will try to repro a file copy. One last thought: old versions of SMB are pretty chatty; have you tried forcing SMB3 on the Synology?

I never bothered with this test because I knew I would be limited by disk speed, i.e. there is no way to hit that upper limit reading from a single disk on a PC, not to mention the write speed on the DS. But I will give it a go at the weekend.
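
(For reference, the minimum and maximum SMB protocol can be set in DSM under Control Panel > File Services > SMB > Advanced Settings. If editing Samba directly over SSH, the equivalent would look roughly like the lines below; treat this as a sketch, since DSM may regenerate smb.conf and revert manual edits.)

# /etc/samba/smb.conf, [global] section
min protocol = SMB2
max protocol = SMB3
# then restart the SMB service, easiest by toggling SMB in the DSM File Services UI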

@brimur

brimur commented Apr 2, 2020

Thanks for all the suggestions. I had file services set to SMB3 and SMB2 with Large MTU. As mentioned, it's fine when I go over my local network; the 1 Gb link is maxed out. It's when I use the 5 Gb link that I see the slowdown. I even swapped the cables in case one was dodgy, but no difference.

@mervincm

I have had more luck on my 1815+ using the Realtek-based 2.5GbE NICs: over 200 MB/s read and write, so you might want to try those.
But honestly, multichannel SMB is more stable and even faster, so after testing I don't use it any more.

@alexatkinuk

alexatkinuk commented May 17, 2020

I've actually started having what may be the same issue recently on a Fedora 32 client. It was fine on Fedora 31, but now I'm getting random freezes with nothing in syslog. It seems to be worse on one USB 3.0 controller than the other (my motherboard has a second controller for two extra ports). Even when it works, it takes so long to come up at boot that my NFS mounts fail.

It's worth noting I'm already using jumbo frames at 6000, as that performs faster than 9000 in my testing. You also don't have to use jumbo frames across the whole LAN: I only set it on the NAS (which has 10Gbit) and this client, and it does not impact other clients (if it did, the Internet wouldn't ever work on a jumbo-frame LAN) thanks to path MTU discovery.

It's also worth noting I upgraded from a Realtek 2.5Gbit adapter, as that has its own issues, spamming syslog with up/down messages because the Linux drivers don't officially support it. I'm tempted to just go back to the onboard Gigabit at this point, as it seems the Linux drivers for these USB NICs suck.
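
(A workaround I am considering for the NFS mounts failing at boot, shown only as a sketch with a placeholder server path and mountpoint: let systemd automount them on first access instead of mounting at boot.)

# /etc/fstab on the Fedora client
nas.local:/volume1/share  /mnt/share  nfs  _netdev,x-systemd.automount,x-systemd.mount-timeout=30  0  0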

@mervincm

The internet has routers that do packet fragmentation; you don't have that within a LAN. Things get messy with inconsistent packet sizes on a LAN, and the common advice is to avoid it or you get hard-to-diagnose issues. On the other hand, a USB NIC on a low-powered Atom CPU is kind of the perfect place to use jumbo frames, if you can do it right.

@marcosscriven

@brimur - did you get anywhere with this?

I’m getting very similar issues: #43

@brimur

brimur commented May 16, 2021

@brimur - did you get anywhere with this?

I’m getting very similar issues: #43

No, I gave up on it. Yours is a newer Synology so you might have better luck, but the link on my DS1815+ just kept dying after a few minutes at 5Gb. If I set it to 2.5Gb it stays alive a bit longer, and at 1Gb it seems to be stable, but that's no use to me. Also, this driver cannot be installed on the new DSM 7.

@marcosscriven

Also this driver cannot be installed on the new DSM 7

Why is that? I read they were removing some drivers, but I'd be surprised if missing drivers can't be compiled and installed just like this one.

That said, could be a moot point as I can’t get this anywhere near working.

@brimur

brimur commented May 16, 2021

Also this driver cannot be installed on the new DSM 7

Why is that? I read they were removing some drivers, but surprised if missing drivers can’t be compiled and installed just like this one?

That said, could be a moot point as I can’t get this anywhere near working.

They are locking down third-party low-level access. You might still be able to install it via SSH, but it really was too unstable for me to put in any more effort.

@bb-qq bb-qq added the performance Performance issue label Aug 11, 2022