[s3_endpoint]: avoid problems arising from keeping endpoint reference alive in hash table #205
This pull request fixes #202 and arose from comments in #203.
The client keeps references to `s3_endpoint`s alive in a hash table. This requires complicated logic that is hard to get right: for example, the release path must avoid locking the `synced_data` mutex twice. The conclusion is that the small gains in performance from hashing a frequently used endpoint do not justify the greatly increased complexity, which has caused race conditions, segmentation faults, and deadlocks, all of which are difficult to debug and to get right.
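To illustrate why, here is a minimal, hypothetical sketch of the kind of logic a cached-and-refcounted endpoint requires; all names here (`endpoint_acquire`, `endpoint_release`, `table_lock`) are invented for this example and are not the aws-c-s3 API:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct endpoint {
    atomic_int ref_count;
    const char *host;
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct endpoint *table_entry; /* stand-in for the hash table */

static struct endpoint *endpoint_acquire(const char *host) {
    pthread_mutex_lock(&table_lock);
    struct endpoint *ep = table_entry;
    if (ep == NULL) {
        ep = calloc(1, sizeof(*ep));
        atomic_init(&ep->ref_count, 0);
        ep->host = host;
        table_entry = ep;
    }
    /* Hazard: this increment can "resurrect" an endpoint whose count
     * just hit zero in another thread, which is about to destroy it. */
    atomic_fetch_add(&ep->ref_count, 1);
    pthread_mutex_unlock(&table_lock);
    return ep;
}

static void endpoint_release(struct endpoint *ep) {
    if (atomic_fetch_sub(&ep->ref_count, 1) == 1) {
        /* Zero refs: the table lock must now be taken a second time to
         * remove the entry, and the count re-checked in case another
         * thread re-acquired the endpoint in the meantime.  Every step
         * of this dance is an opportunity for a race, a use-after-free,
         * or a deadlock (e.g. re-locking a mutex already held). */
        pthread_mutex_lock(&table_lock);
        if (table_entry == ep && atomic_load(&ep->ref_count) == 0) {
            table_entry = NULL;
            pthread_mutex_unlock(&table_lock);
            free(ep);
        } else {
            pthread_mutex_unlock(&table_lock);
        }
    }
}

int main(void) {
    struct endpoint *ep = endpoint_acquire("bucket.s3.us-east-1.amazonaws.com");
    endpoint_release(ep);
    return 0;
}
```

Even this toy version needs a resurrection re-check; the real code additionally interacts with the client's `synced_data` mutex, multiplying the ways to get it wrong.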
Hence remove the hash table and use only one `endpoint_release` function.
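A rough sketch of the simplified lifetime model, again with invented names: acquire and release become a plain reference count with a single destruction path, and there is no table to keep consistent:

```c
#include <stdatomic.h>
#include <stdlib.h>

struct endpoint {
    atomic_int ref_count;
};

static struct endpoint *endpoint_new(void) {
    struct endpoint *ep = calloc(1, sizeof(*ep));
    atomic_init(&ep->ref_count, 1);
    return ep;
}

static struct endpoint *endpoint_acquire(struct endpoint *ep) {
    atomic_fetch_add(&ep->ref_count, 1);
    return ep;
}

/* The single release path: when the count reaches zero the endpoint is
 * destroyed immediately.  No lookup, no second lock, no window in which
 * a dying endpoint can be handed out again. */
static void endpoint_release(struct endpoint *ep) {
    if (ep != NULL && atomic_fetch_sub(&ep->ref_count, 1) == 1) {
        free(ep);
    }
}

int main(void) {
    struct endpoint *ep = endpoint_new();
    endpoint_acquire(ep);
    endpoint_release(ep);
    endpoint_release(ep); /* count hits zero; endpoint destroyed here */
    return 0;
}
```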
Also fix the argument when calling `s_s3_endpoint_http_connection_manager_shutdown_callback` in `s_s3_endpoint_ref_count_zero` (the function requires an `s3_endpoint`).
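For clarity, a hedged sketch of the shape of that fix; the struct and signatures here are simplified stand-ins, not the real aws-c-s3 declarations:

```c
#include <stdio.h>

struct s3_endpoint {
    const char *host;
};

/* The shutdown callback expects the endpoint itself as its argument. */
static void s_s3_endpoint_http_connection_manager_shutdown_callback(void *user_data) {
    struct s3_endpoint *endpoint = user_data;
    printf("shutting down endpoint for %s\n", endpoint->host);
}

static void s_s3_endpoint_ref_count_zero(struct s3_endpoint *endpoint) {
    /* The fix: pass the endpoint, which is what the callback requires,
     * rather than a different pointer it would misinterpret. */
    s_s3_endpoint_http_connection_manager_shutdown_callback(endpoint);
}

int main(void) {
    struct s3_endpoint endpoint = { .host = "bucket.s3.amazonaws.com" };
    s_s3_endpoint_ref_count_zero(&endpoint);
    return 0;
}
```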
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.