TCP performance better than MPTCP #307
Hi Monika,
Nice, 5 years!
Do you mind re-doing the tests with a more recent kernel installed on both the client and the server side? (e.g. 6.0, or ideally a kernel built from our development tree.) Note that on Ubuntu, packages are available here: https://kernel.ubuntu.com/~kernel-ppa/mainline/ (e.g. take the v6.0 version)
Why is it expected to have 1.7-1.8 Kbps on the WiFi side? That looks like a bad situation for MPTCP: it should handle it, but trying to use such a path will most probably have a cost in terms of buffer and CPU utilisation, and maybe the upstream kernel doesn't handle this "extreme" case well yet. Could you also redo some tests with MPTCP but only one path?
Of course not. The goal is to get the best out of all the different paths, but this is quite a complex job to do. If you still see a big difference with a recent kernel when switching from 1 to 2 paths with this very bad second path, and if you are not limited by buffers/CPU, then there is something to investigate. Cheers,
Addendum: could you also please specify the link speed? You mentioned fast Ethernet, but the measured plain-TCP bandwidth you reported is well above the fast Ethernet limit (100 Mbps). Are there any other flows running on the same link?
Firstly, thank you @matttbe & @pabeni for the immediate response. I have yet to try the new kernel version, but I did try tuning the TCP buffer parameters and using multiple parallel connections. This is indeed an extreme scenario, as Matt rightly said. The WiFi connection is shared by roughly 50 people in the office, so it is quite a poor link. On the other hand, the Ethernet connection is Cat6A 500 MHz, which is capable of supporting up to 10 Gbps. I was just curious how MPTCP would behave with one good and one bad subflow. It would be nice if it could choose the good subflow over the bad one instead of trying to use both, which degrades performance. Please ignore the numbers I shared previously; those were from Wireshark (Conversations > TCP > A->B (bps)).
I changed the TCP buffer parameters to the following:
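A typical tuning along these lines, assuming the 12 MB maximum mentioned later in this thread (the exact values and defaults below are assumptions):

```sh
# Raise the TCP buffer limits (min / default / max, in bytes).
# 12582912 = 12 MB, matching the figure discussed below.
sysctl -w net.ipv4.tcp_rmem="4096 131072 12582912"
sysctl -w net.ipv4.tcp_wmem="4096 16384 12582912"
# Also raise the core socket buffer caps accordingly.
sysctl -w net.core.rmem_max=12582912
sysctl -w net.core.wmem_max=12582912
```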
Next, when I test with a newer kernel version, I will also log the throughput per interface. I generally use nicstat; do you have any recommendation? Also, I would like to double-check whether the scheduler used is BLEST. Note: my intention with these tests is to find out how MPTCP reacts to different scenarios so that I can derive some use cases.
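A minimal nicstat invocation for such per-interface logging, assuming eth0 and wlan0 as the interface names:

```sh
# Sample eth0 and wlan0 throughput once per second.
nicstat -i eth0,wlan0 1
```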
Thank you for these first tests.
Which parameters do you use with iperf3? Is it an upload or a download? Regarding the throughput, do you look at the numbers from the receiver side?
I don't know what the latency is on the different networks you use, but you might need to go higher than 12 MB for the buffers.
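(As a rough illustration, with assumed numbers: the buffer should cover at least the bandwidth-delay product of the path, e.g. 170 Mbps x 100 ms RTT is roughly 2.1 MB. MPTCP additionally has to absorb reordering between paths, so with a very slow second path the effective requirement can be far larger than the plain BDP.)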
Mmh, strange. Then going higher than 12 MB should not change anything (and I guess you are not limited by a very small maximum buffer size).
Yes, it would be good not to investigate this on an old kernel.
It depends on what you want to see.
There is only one packet scheduler in this upstream kernel so far.
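For what it's worth, a recent iproute2 can show what the MPTCP stack is actually doing at runtime; a minimal sketch:

```sh
# List MPTCP sockets with per-subflow details.
ss -Mani
# Watch path-manager events (subflow creation, ADD_ADDR, ...) live.
ip mptcp monitor
```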
By chance, do you have any more input to share?
Hi @matttbe, firstly, sorry about the delayed response. I didn't proceed with this any further, for two reasons.
Now I have another issue, with the backup-flow case; I will discuss it in a new thread. Thanks for your time and support!
Of course, thank you!
Hi Matt & others,
I have been working with MPTCP since 2017, but so far I was only using the out-of-tree implementation. Now that the upstream implementation supports most of the basic MPTCP features, I tried using it.
I have two Ubuntu 22.04 systems (kernel 5.15) with a direct fast Ethernet connection and a busy WLAN connection between them. The Ethernet connection is kept as the primary flow and the WLAN as a subflow (I used `ip mptcp` signal/subflow endpoint configurations on the server & client).
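For context, a signal/subflow endpoint setup of this kind typically looks like the following sketch; the addresses and interface names are assumptions:

```sh
# On the server: advertise the WLAN address to the peer (ADD_ADDR).
ip mptcp endpoint add 192.168.1.10 dev wlan0 signal

# On the client: allow an additional subflow over the WLAN address,
# and raise the path-manager limits so it can actually be created.
ip mptcp endpoint add 192.168.1.20 dev wlan0 subflow
ip mptcp limits set subflow 2 add_addr_accepted 2
```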
Following are the results obtained using mptcpize iperf3 (a sketch of the commands is shown after the results):
MPTCP throughput: eth 60-70 Mbps, wlan0 1.7-1.8 Kbps (this was expected)
However, TCP-only communication (a single path over eth) gave better results:
TCP throughput: 160-170 Mbps
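For reference, a typical mptcpize iperf3 invocation for such a test; the server address and duration here are assumptions:

```sh
# Server side: run iperf3 with MPTCP forced on its sockets.
mptcpize run iperf3 -s

# Client side: connect to the (hypothetical) server address.
mptcpize run iperf3 -c 192.168.1.10 -t 30
```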
One of the main design goals of MPTCP is to ensure that it achieves at least the same throughput as TCP over its best path. Is this being violated?
I never experienced this with the out-of-tree implementation.
I kindly request your comments on this, Matt.
Cheers,
Monika