Mem limiter #368
Conversation
I didn't get very far, so some comments may be uninformed. I'll continue tomorrow...
Codecov Report
Additional details and impacted files

@@            Coverage Diff             @@
##             main     #368      +/-   ##
==========================================
+ Coverage   88.45%   88.55%   +0.09%
==========================================
  Files          19       20       +1
  Lines        5120     5348     +228
==========================================
+ Hits         4529     4736     +207
- Misses        591      612      +21
source/s3_client.c
Outdated
aws_s3_client_lock_synced_data(client);
client->synced_data.trim_buffer_pool_task_scheduled = false;
aws_s3_client_unlock_synced_data(client);
we could move `trim_buffer_pool_task_scheduled` from `synced_data` to `threaded_data` and not bother with locks here, right? We only ever schedule this and run it from the client-work-thread.
Co-authored-by: Michael Graeb <graebm@amazon.com>
fix & ship
source/s3_buffer_pool.c
Outdated
struct aws_s3_buffer_pool_ticket *aws_s3_buffer_pool_reserve(struct aws_s3_buffer_pool *buffer_pool, size_t size) {
    AWS_PRECONDITION(buffer_pool);

    if (buffer_pool->has_reservation_hold) {
        return NULL;
    }

    if (size == 0) {
        AWS_LOGF_ERROR(AWS_LS_S3_CLIENT, "Could not reserve from buffer pool. 0 is not a valid size.");
        aws_raise_error(AWS_ERROR_INVALID_ARGUMENT);
        return NULL;
Real code never checks the error-code from `aws_s3_buffer_pool_reserve()`. Real code assumes it failed because the memory limit was temporarily exceeded.
- so if size was 0, the meta-request would get stuck perpetually trying to acquire a ticket
- similarly, if `size > mem_limit`, the meta-request would be stuck perpetually trying to acquire a ticket. I realize you adjusted the client's `max_part_size` to make this impossible, but it might be worth an ASSERT or FATAL_ASSERT here in case client code changes in the future and something slips through

Anyway, we should probably do one of these things:
- real code calling `aws_s3_buffer_pool_reserve()` continues to assume that it can only fail when the mem limit is temporarily exceeded, and the `reserve()` function here FATAL_ASSERTs that the size is legal (not 0, not exceeding mem_limit)
- OR real code calling `aws_s3_buffer_pool_reserve()` checks the error to differentiate the expected "waiting for memory" case from other errors that terminate the meta-request
yeah, good point. size == 0 or size > mem_limit is really an indication that something went wrong earlier, so for simplicity's sake, i think a fatal assert makes more sense.
… limit (#429)

**Issue:** `aws_s3_meta_request_write()` must write to a buffer immediately if the data is less than part-size. Currently, it uses a buffer hooked up to the default allocator ([code here](https://github.com/awslabs/aws-c-s3/blob/cc06c419448b40417caa7b587f61bb4d8b4c08c1/source/s3_meta_request.c#L2260-L2262)). We'd like to get these buffers from the [buffer-pool](#368), to reduce memory fragmentation and resident set size (RSS). The problem is: the buffer-pool maintains strict memory limits, and won't allow further allocations when that limit is hit. But `aws_s3_meta_request_write()` MUST get a buffer immediately, or else the system can deadlock (see description in PR #418).

**Description of changes:** Add `aws_s3_buffer_pool_acquire_forced_buffer()`. These buffers can be created even if they exceed the memory limit.

**Future work:** Modify `aws_s3_meta_request_write()` to use this new function.

**Additional thoughts:**
- Anyone using `aws_s3_meta_request_write()` should limit the total number of uploads like: `max-uploads = memory-limit / part-size`. That was the case even before this PR.
Issue #, if available:
aws/aws-sdk-java-v2#4034
Description of changes:
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.