
Configurable max packet size #400

Merged: 1 commit merged into aiortc:main on Nov 6, 2023
Conversation

ihlar (Contributor) commented Aug 29, 2023

This PR adds the ability to configure the maximum QUIC packet size. Increasing the packet size to make full use of the path MTU improves performance, and it also simplifies a potential future implementation of Path MTU Discovery.
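As a rough usage sketch (hedged: the attribute name max_datagram_size reflects the rename discussed later in this thread, and the value below is only an example for an IPv6 path with a 1500-byte link MTU; the exact API may differ):

from aioquic.quic.configuration import QuicConfiguration

config = QuicConfiguration(is_client=True)
# 1500-byte MTU minus 40 bytes of IPv6 header and 8 bytes of UDP header
config.max_datagram_size = 1452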

Review thread on src/aioquic/quic/recovery.py (outdated, resolved)
@jlaine jlaine force-pushed the max-packet-size branch 2 times, most recently from 8361d46 to 7da12be on November 4, 2023
codecov bot commented Nov 4, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (b097478) 100.00% compared to head (bacc1e8) 100.00%.
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##              main      #400   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           22        22           
  Lines         4542      4552   +10     
=========================================
+ Hits          4542      4552   +10     
Files Coverage Δ
src/aioquic/asyncio/server.py 100.00% <100.00%> (ø)
src/aioquic/quic/configuration.py 100.00% <100.00%> (ø)
src/aioquic/quic/connection.py 100.00% <100.00%> (ø)
src/aioquic/quic/packet_builder.py 100.00% <100.00%> (ø)
src/aioquic/quic/recovery.py 100.00% <100.00%> (ø)


@jlaine jlaine changed the title Max packet size Configurable max packet size Nov 4, 2023
jlaine (Contributor) commented Nov 4, 2023

From reading the RFCs it looks as though we should be using max_datagram_size to be consistent with their terminology.

We should also lower the default value to 1200 in line with RFC 9000, as pointed out by @Arvind2222.
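For reference, RFC 9000 (Section 14) requires endpoints to support datagrams of at least 1200 bytes, so the configurable value should never drop below that. A minimal sketch of such a guard, with the constant and function names chosen here for illustration rather than taken from aioquic:

SMALLEST_MAX_DATAGRAM_SIZE = 1200  # minimum datagram size required by RFC 9000

def check_max_datagram_size(value: int) -> int:
    # Reject values below the protocol minimum; a smaller datagram could not
    # even carry a compliant Initial packet.
    if value < SMALLEST_MAX_DATAGRAM_SIZE:
        raise ValueError(
            "max_datagram_size must be at least %d bytes" % SMALLEST_MAX_DATAGRAM_SIZE
        )
    return value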

@jlaine jlaine force-pushed the max-packet-size branch 3 times, most recently from beaff3e to c926bcb on November 4, 2023
@@ -539,8 +545,11 @@ def datagrams_to_send(self, now: float) -> List[Tuple[bytes, NetworkAddress]]:
builder.max_flight_bytes = (
self._loss.congestion_window - self._loss.bytes_in_flight
)
if self._probe_pending and builder.max_flight_bytes < PACKET_MAX_SIZE:
builder.max_flight_bytes = PACKET_MAX_SIZE
if (
A contributor commented on this hunk:
I am not sure this is correct. I think this should be exactly 1200 bytes.
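Restating the reviewer's suggestion as a standalone sketch (names are illustrative, not the final code): the carve-out for a pending loss probe would use the fixed 1200-byte QUIC minimum rather than the configured maximum datagram size.

SMALLEST_MAX_DATAGRAM_SIZE = 1200  # QUIC minimum datagram size, RFC 9000

def flight_budget(congestion_window: int, bytes_in_flight: int, probe_pending: bool) -> int:
    # Bytes the packet builder may put in flight this round; a pending loss
    # probe is always allowed at least one minimum-size datagram.
    budget = congestion_window - bytes_in_flight
    if probe_pending and budget < SMALLEST_MAX_DATAGRAM_SIZE:
        budget = SMALLEST_MAX_DATAGRAM_SIZE
    return budget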

jlaine (Contributor) commented Nov 4, 2023

On the whole this PR is now in a good place. I'm still wondering about naming, though, since max_datagram_size:

  • Only applies to what we send.
  • Is very close to the max_udp_datagram_size QUIC setting, which controls what we are willing to receive. It is not currently possible to set this via QuicConfiguration.

=> Should our new setting reflect the fact it applies to egress? Like max_egress_datagram_size?
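A hypothetical sketch of the distinction being raised (the dataclass and field names are purely illustrative, not aioquic's API): the new setting bounds what we send, whereas the size we advertise as willing to receive corresponds to RFC 9000's max_udp_payload_size transport parameter, which defaults to 65527.

from dataclasses import dataclass

@dataclass
class DatagramSizeLimits:
    # Egress: largest UDP datagram this endpoint will build and send.
    max_datagram_size: int = 1200
    # Ingress: size advertised to the peer as acceptable to receive;
    # RFC 9000's max_udp_payload_size transport parameter defaults to 65527.
    max_receive_datagram_size: int = 65527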

@jlaine jlaine force-pushed the max-packet-size branch 3 times, most recently from b985ed0 to cc41779 on November 4, 2023
Also set the default maximum datagram size to 1200 bytes, in line with
RFC 9000.

Co-authored-by: Jeremy Lainé <jeremy.laine@m4x.org>
@jlaine jlaine merged commit 2ae1ad4 into aiortc:main Nov 6, 2023
25 checks passed
Arvind2222 commented:

@jlaine Was doing a retro; I think we should keep these configs in a good place, since the RFC is still new and might change in the future. Your call!
