Handling very long connection timeouts #962
Comments
Sorry for the delay, I've finally got time for the issue.
P.S. Please note you should wrap every
P.P.S. My output for your snippet is:
It obviously shows: cancellation by timeout is performed on a per-task basis.
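The per-task behaviour described above can be modelled with stdlib `asyncio` alone (a sketch with made-up delays; `asyncio.sleep` stands in for the aiohttp request, and only the task that exceeds its own timeout is cancelled):

```python
import asyncio

async def fetch(delay):
    # Stand-in for an aiohttp request; sleep models network latency.
    await asyncio.sleep(delay)
    return delay

async def fetch_with_timeout(delay, timeout=0.2):
    # Wrap each request individually: only a task that exceeds its
    # own timeout is cancelled; the others run to completion.
    try:
        return await asyncio.wait_for(fetch(delay), timeout)
    except asyncio.TimeoutError:
        return "timed out"

async def main():
    return await asyncio.gather(
        fetch_with_timeout(0.01),
        fetch_with_timeout(1.0),   # exceeds the 0.2s timeout
        fetch_with_timeout(0.02),
    )

results = asyncio.run(main())
print(results)  # the slow task times out; the fast ones still return
```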
@asvetlov I believe your output (with all those TimeoutError at the end) is showing a bug, which I have also been encountering. As soon as one URL hits a timeout (whether you set it using Timeout() or not), future get attempts time out immediately. I'm not sure I'm doing things right, and had to look for a get-multiple-pages example (having one of these in the docs would be nice), but I've used basically the same procedure as dev169, and ended up with a similar problem as you both appear to have had, so I don't think it can just be me doing something wrong. Using 1.1.6.
I have the same problem under 1.2.0. Definitely still a bug. I don't even know if @asvetlov is still monitoring this, since he closed it in July. I'll check back in a while before opening a dupe.
Long story short
I have a big list of hosts to connect to; some of them time out and `aiohttp` waits eternally. What is the proper way to handle this case?

Expected behaviour
`aiohttp` should have some kind of connection timeout.

Actual behaviour
`aiohttp` hangs.

Steps to reproduce
An extreme example:
What is the proper way to handle this? Example code:
The above will wait many minutes until it times out.
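The hang being reported can be modelled with stdlib `asyncio` (a sketch, not aiohttp itself; the hypothetical `fetch` stands in for `session.get`, and a "dead" host is one that never answers):

```python
import asyncio

async def fetch(host, dead=False):
    # Stand-in for session.get(host): a dead host simply never answers.
    if dead:
        await asyncio.Event().wait()  # blocks forever, like a silent TCP peer
    return host

async def main():
    tasks = [
        asyncio.ensure_future(fetch("host-a")),
        asyncio.ensure_future(fetch("host-b")),
        asyncio.ensure_future(fetch("dead-host", dead=True)),
    ]
    # With no timeout anywhere, gathering these would block until every
    # task finished, i.e. forever.  Bound the wait here only to observe it.
    done, pending = await asyncio.wait(tasks, timeout=0.1)
    for task in pending:
        task.cancel()
    return len(done), len(pending)

done_count, pending_count = asyncio.run(main())
print(done_count, pending_count)  # the dead host's task never completes
```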
Using `aiohttp.Timeout` seemed like a good candidate to take care of this, but I've found that once the timeout is reached, all other futures throw a `concurrent.futures._base.TimeoutError`, so it appears to work "globally". So if we modify `fetch` like so:
We now get:
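The "global" behaviour being described, one expiry taking everything down, can be modelled with stdlib `asyncio` (a sketch, not the original snippet): a single `wait_for` around the whole batch cancels every still-running task when it fires, and the results of the fast tasks are lost too.

```python
import asyncio

async def fetch(delay):
    # Stand-in for an aiohttp request.
    await asyncio.sleep(delay)
    return delay

async def main():
    # One timeout around the whole batch: when it fires, every
    # still-running task is cancelled, not just the slow one.
    try:
        return await asyncio.wait_for(
            asyncio.gather(fetch(0.01), fetch(5.0), fetch(0.02)),
            timeout=0.2,
        )
    except asyncio.TimeoutError:
        return "whole batch timed out"

result = asyncio.run(main())
print(result)
```

Contrast this with wrapping each request in its own `wait_for`, which cancels only the offending task.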
I've also tried setting `conn_timeout` in `aiohttp.TCPConnector`, but since I assume it is reused, once the timeout comes every future after the cutoff throws.
The obvious alternative is `concurrent.futures.ThreadPoolExecutor` and `requests`, which I've used and which works fine, but I'm sure there must be something wrong with my approach regarding `asyncio` and `aiohttp`. Any suggestions?
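For comparison, the thread-pool route mentioned above can be sketched with a stand-in for `requests.get` (hypothetical `fetch` and delays; the point is that `Future.result()` takes its own per-call timeout):

```python
import concurrent.futures
import time

def fetch(delay):
    # Stand-in for requests.get(url, timeout=...); sleep models the request.
    time.sleep(delay)
    return delay

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(fetch, d) for d in (0.01, 0.02, 0.03)]
    # Each result() call has its own timeout, so one slow host does not
    # make the other futures raise.
    results = [f.result(timeout=1.0) for f in futures]

print(results)
```

Note one caveat of this approach: unlike asyncio cancellation, a timed-out `result()` does not stop the worker thread; with real `requests` it is the `timeout=` argument to `requests.get` that actually aborts a hung connection.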
Your environment
Python 3.5.1
Ubuntu 14.04