Do a blocking flush every 100 calls to put_copy_data #474
We had a blocking flush in pg-1.3.x at every call to `put_copy_data`. This made sure that all data was sent before the next `put_copy_data`. In #462 (and pg-1.4.0 to .2) the behaviour was changed to rely on the non-blocking flushes libpq does internally. This brought a decent performance improvement, especially on Windows. Unfortunately #473 proved that memory bloat can happen when sending the data is slower than the calls to `put_copy_data`. As a trade-off, this PR proposes to do a blocking flush only every 100 calls.
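The trade-off can be sketched in plain Ruby. This is only an illustration of the counting heuristic, not the actual C implementation in the extension; `StubConnection`, `CopySender`, and `FLUSH_INTERVAL` are made-up names standing in for `PG::Connection` and the internal counter.

```ruby
# Illustrative sketch of "blocking flush every N calls".
# StubConnection stands in for PG::Connection; in the real gem the
# counting happens inside the C extension, not in Ruby code.
FLUSH_INTERVAL = 100

class StubConnection
  attr_reader :flushes
  def initialize; @flushes = 0; end
  def put_copy_data(_buf); true; end  # pretend libpq buffered the row
  def flush; @flushes += 1; end       # pretend a blocking flush drained it
end

class CopySender
  def initialize(conn, interval: FLUSH_INTERVAL)
    @conn, @interval, @calls = conn, interval, 0
  end

  def put_copy_data(buf)
    @conn.put_copy_data(buf)
    @calls += 1
    # Every @interval calls, block until the send buffer is drained,
    # so buffered memory can't grow without bound.
    @conn.flush if (@calls % @interval).zero?
  end
end

conn = StubConnection.new
sender = CopySender.new(conn)
1_000.times { |i| sender.put_copy_data("row #{i}\n") }
# 1000 calls with an interval of 100 => 10 blocking flushes
```

With the old pg-1.3.x behaviour the stub would flush 1000 times; with no flushing at all, the buffer growth is unbounded when the receiver is slow. Flushing every 100 calls caps the buffered data at roughly 100 rows' worth.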
If libpq is running in blocking mode (`PG::Connection.async_api = false`), `put_copy_data` does a blocking flush every time new memory is allocated. Unfortunately we don't have that kind of information, since we don't have access to libpq's `PGconn` struct, and the return codes give no indication of when this happens. So doing a flush after every fixed number of calls is a very simple heuristic.

Fixes #473