Conversation
Hi! We could increase the read size or use aiohttp.StreamReader.readchunk(). But I would like to understand the problem first.
I can understand this, but 512 bytes does not sound that small to me. The chunk write is performed per loop cycle, but I guess the target directory is fast tmpfs, so yes, the loop will run pretty fast.
This is the part I do not understand. Are you sure that the file system you are writing to is large enough? When does this happen? How much of the bundle is written until the timeout is triggered? What bundle file size are we talking about? Could you write the bundle to a destination not backed by RAM? What happens then? Regards,
Bonus question: could you provide the full timeout traceback?
Thanks for the fast reply. :)
Agreed, it's a bit strange. :)
Yes, we have enough space.
Not immediately (on the first chunks); it seems to happen after about 40-50 MB.
Currently, ~70MB.
Same outcome.
I'll try to follow up with it tomorrow. Right now we don't have a full traceback (we only see the timeout log, and the update is retried after 5 minutes).
Hi, some follow up. :)
How to fix this? I'm not sure; the timeout is AFAIK 0.1 s in aiohttp... Catching this exception and just retrying might be an option, but not an ideal one (right?).

Example that crashes
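Roughly, the download loop in question looks like this (a sketch adapted from the aiohttp client quickstart, not the exact code from the report; URL and destination path are placeholders):

```python
import asyncio

import aiohttp

CHUNK_SIZE = 512  # the small read size suspected of causing the busy loop


async def download(url: str, dest: str) -> None:
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            with open(dest, "wb") as f:
                while True:
                    # With 512-byte reads, the data is almost always already
                    # buffered, so this await returns immediately and the
                    # loop spins at 100% CPU on a slow single core.
                    chunk = await resp.content.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    f.write(chunk)

# asyncio.run(download("http://example.com/bundle.raucb", "/tmp/bundle.raucb"))
```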
Stacktrace
TCP ZeroWindows
Hi, thanks for the follow-up. I tried to reproduce this using your example, but no luck: I successfully downloaded huge files from local and remote HTTP servers. I noticed that the CPU usage is pretty high, but as you said, your example is pretty much copied from the aiohttp Client API. Maybe it is worth opening an issue with your example code at https://github.com/aio-libs/aiohttp? I think it's actually an aiohttp client issue. If your bundle download works with..
..I guess we should merge that.
Oh, you already opened an issue there, nice!
Looking at the aiohttp issue, it seems we are running into aiohttp.ClientSession's 5-minute default timeout. I think we want to create the aiohttp.ClientSession with something like:
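A sketch of what that session setup might look like (the 30 s values are illustrative, not taken from this discussion; `ClientTimeout` requires a recent aiohttp):

```python
import aiohttp

# Disable the default 5-minute total timeout and instead bound only the
# individual socket operations, so a genuinely stalled connection still
# fails fast while a slow-but-progressing download is allowed to finish.
timeout = aiohttp.ClientTimeout(total=None, sock_connect=30, sock_read=30)


async def fetch(url: str) -> bytes:
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get(url) as resp:
            return await resp.read()
```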
@livioso Would you like to make a PR for this and test your use case with the new code?
Let's do this: (a) set the timeout properly (e.g. remove the context manager
Replaces read(chunk_size) with readchunk(), which reads as much as it can. This is a better approach than trying to choose a chunk_size that works well under all circumstances (HTTP, HTTPS). Also sets the timeouts properly: before, the context helper didn't do anything, since the requests would time out after 5 minutes anyway. Now the timeouts are respected as expected. Signed-off-by: Livio Bieri <livio@livio.li>
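After this change, the download loop looks roughly like the following (a sketch; `resp` is an `aiohttp.ClientResponse`, and the function name is illustrative, not from the actual patch):

```python
async def write_bundle(resp, path: str) -> None:
    """Stream the response body to `path` using readchunk()."""
    with open(path, "wb") as f:
        while True:
            # readchunk() returns whatever data is currently buffered (up
            # to the stream's limit) instead of waiting to fill a fixed
            # buffer, so the coroutine awaits on the socket rather than
            # spinning through many tiny buffered reads.
            chunk, _end_of_http_chunk = await resp.content.readchunk()
            if not chunk:
                break
            f.write(chunk)
```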
Support for ClientTimeout was added in aiohttp 3.3.1. Signed-off-by: Livio Bieri <livio@livio.li>
👋 Let me know if you have any inputs / concerns; this works for us. :)
@BastianStender Did you have time to go over it yet? :) |
@livioso Sorry for keeping you waiting. Unfortunately I can't dive into this until (hopefully) next week. |
Successfully tested on our Demo hardware.
Wow, this is amazingly faster than the previous approach!
Hi again o/
I think this is a bug, or at least in our case (^1) it caused an aiohttp timeout while reading chunks. When the `chunk_size` is that small (512 bytes), the loop in which we `read` is executed so fast that our CPU usage goes up to 100%, since we practically don't `await` anymore. This caused a hanging state, ultimately resulting in a timeout (after 60 s). Increasing the `chunk_size` to 512 KiB instead of 512 bytes seemed reasonable (or was this chosen to be sooo small on purpose?).

^1: We use an ARM Cortex A7, single core (imx6ul)