
allow unique connect + read timeouts #1524

Closed
thehesiod opened this issue Jan 3, 2017 · 8 comments

@thehesiod
Contributor

Long story short

aiohttp should allow users to easily set independent read and connect timeouts

Actual behaviour

Currently you can set a connect timeout by overriding the TCPConnector.

Currently, to specify a read timeout you need to either pass a timeout parameter to ClientSession.request (this timeout is used for both the read_timeout [https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/client.py#L173] and the connector timeout [https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/client.py#L175]), potentially overriding the TCPConnector's timeout, or subclass the response_class and its init method...and pass a max of read_timeout + connect_timeout as ClientSession.request's timeout parameter.

So basically the ClientSession.request timeout is overloaded to clamp both the existing read_timeout and the existing conn_timeout, which makes it complicated for clients. Instead, I think clients would mostly want to set the read/conn timeouts on the Session and not worry about each request's timeout.
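
To make the overloading concrete, here is a minimal sketch of the status quo (aiohttp 1.x-era API as described above; the conn_timeout keyword on the connector and the per-request timeout are the two knobs in play):

    import aiohttp

    # conn_timeout caps only the connection phase (per the links above).
    connector = aiohttp.TCPConnector(conn_timeout=5)

    async def fetch(url):
        async with aiohttp.ClientSession(connector=connector) as session:
            # This single timeout is applied to BOTH the read phase and the
            # connect phase, so it can silently clamp the connector's
            # conn_timeout configured above.
            async with session.get(url, timeout=30) as resp:
                return await resp.text()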

Discussion for resolution

There are a variety of ways to potentially solve this:

  1. Have separate read_timeout + conn_timeout parameters passed to ClientSession.request (declaring the existing timeout to be either the read or the conn timeout).
  2. Keep the existing timeout and treat it as a real "max" timeout, but allow for a unique read_timeout. This means the connector timeout can be removed by checking to see if the request timeout is <= the connector timeout.
  3. Switch ClientSession.timeout to be None by default, and instead specify the TCPConnector + request_context's default timeouts.

I think ideally the ClientSession would have an init-able read_timeout, and the existing request timeout could be treated as a wrapping "max" timeout, as sketched below.
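
A hypothetical sketch of that clamping for idea 2 (the names here are illustrative, not aiohttp internals):

    # The per-request timeout acts as a true "max"; read/connect keep
    # their own independent values underneath it.
    def effective_timeouts(request_timeout, read_timeout, conn_timeout):
        if request_timeout is None:
            return read_timeout, conn_timeout
        # Clamp the read timeout to the overall max.
        read = min(read_timeout or request_timeout, request_timeout)
        # If the overall max fires no later than the connect timeout,
        # the connector timeout is redundant and can be dropped.
        conn = None if conn_timeout and request_timeout <= conn_timeout else conn_timeout
        return read, conn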

thoughts? btw this is discussed in part in #1180

initial attempt at fixing this issue via idea 2: #1523

@adamrothman

Thanks for this!

@thehesiod
Contributor Author

thoughts @asvetlov @jettify ?

@thehesiod
Contributor Author

status is waiting on someone to approve + merge my change. Unfortunately/fortunately I don't have perm ;)

@fafhrd91
Member

merged

@adamrothman

@thehesiod So if I understand #1523 correctly, passing None for the timeout will now result in no timeout being applied? Or do I have it wrong?

@thehesiod
Contributor Author

passing None for timeout will result in using the underlying TCPConnector's connect timeout (default unlimited) and the ClientSession's new read_timeout (default unlimited)
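
A sketch of the resulting usage (aiohttp 2.x-era parameter names; read_timeout on the session is what #1523 introduced):

    import aiohttp

    # The connect timeout lives on the connector, the read timeout on the session.
    connector = aiohttp.TCPConnector(conn_timeout=10)

    async def fetch(url):
        async with aiohttp.ClientSession(connector=connector,
                                         read_timeout=60) as session:
            # timeout=None no longer disables everything: the session's
            # read_timeout and the connector's conn_timeout still apply.
            async with session.get(url, timeout=None) as resp:
                return await resp.text()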

@fafhrd91 reopened this Feb 8, 2017
@fafhrd91
Member

fafhrd91 commented Feb 8, 2017

I refactored how the client handles timeouts.

  1. request timeout = max("session timeout", "connector timeout")
  2. all IO times are cumulative

request timeout == "time to open connection" + "time to send request" + "time to read all data from response"
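
A conceptual sketch of that cumulative accounting (illustrative only, not aiohttp's implementation):

    import time

    def run_with_cumulative_timeout(request_timeout, phases):
        # One shared deadline covers every phase: connect, send, read.
        deadline = time.monotonic() + request_timeout
        for phase in phases:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                raise TimeoutError("request timeout exceeded")
            phase(remaining)  # each phase gets only the time that is left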

@lock

lock bot commented Oct 29, 2019

This thread has been automatically locked since there has not been
any recent activity after it was closed. Please open a new issue for
related bugs.

If you feel there are important points made in this discussion,
please include those excerpts in the new issue.
