
Investigate single-threaded, async HTTP.request performance #1032

Closed

quinnj opened this issue Mar 30, 2023 · 4 comments

Comments

@quinnj
Member

quinnj commented Mar 30, 2023

Stemming from this discourse discussion

@agoodm

agoodm commented Apr 2, 2023

Thanks for mentioning this discussion here. After lots of digging, I believe we have managed to narrow down MbedTLS as the culprit, since running my example with `socket_type_tls=OpenSSL.SSLStream`, as well as with the same URLs without HTTPS, resulted in a dramatic speedup.
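A minimal sketch of the switch being described (the endpoint, request count, and task count are placeholders, not the original benchmark):

```julia
using HTTP, OpenSSL

urls = fill("https://example.com", 50)  # placeholder endpoint, not the original benchmark URLs

# Default TLS backend (MbedTLS)
@time asyncmap(u -> HTTP.get(u), urls; ntasks=10)

# Same requests, but asking HTTP.jl to use OpenSSL for the TLS sockets
@time asyncmap(u -> HTTP.get(u; socket_type_tls=OpenSSL.SSLStream), urls; ntasks=10)
```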

I know that at this point other limitations arising from using MbedTLS have been discussed both here and elsewhere, but maybe it would be a good idea to change the default to OpenSSL instead of MbedTLS?

@PallHaraldsson
Contributor

PallHaraldsson commented Apr 4, 2023

HTTP.jl already depends on both MbedTLS.jl and OpenSSL.jl; I'm confused why. Presumably it's so you can choose either option, but since the latter is faster, should it default to OpenSSL (as in the solution in the discourse thread) and the MbedTLS dependency simply be dropped? [That dependency is strictly not the same as the MbedTLS bundled with Julia itself, and the package shouldn't rely on that either.]

[Going forward, a package like OpenSSL.jl would need an auto-update mechanism, or even better, provide nothing and just refer to the OS, which likely provides it for you.]

@jebej
Contributor

jebej commented Apr 5, 2023

For what it's worth, mbedTLS fixed a performance regression in v2.28.3 (see the comment towards the bottom of the release notes); it might be worth checking whether that affects the timings.
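One way to check which upstream mbedtls release an environment actually resolves to (a sketch; the `MbedTLS_jll` version number tracks the upstream release):

```julia
import Pkg

# Print the resolved MbedTLS_jll version in the active environment; a version
# of at least 2.28.3 would include the upstream performance fix mentioned above.
for (uuid, dep) in Pkg.dependencies()
    dep.name == "MbedTLS_jll" && println(dep.name, " => ", dep.version)
end
```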

quinnj added a commit that referenced this issue Apr 20, 2023
quinnj added a commit that referenced this issue Apr 27, 2023
* Summary of changes in this PR:
  * Finally achieving my dream of moving the connection pool out of HTTP; it's going to live in the [ConcurrentUtilities.jl](JuliaServices/ConcurrentUtilities.jl#18)
    package instead. In short, it had no unit tests, scant documentation, and was generally a pain to deal with in HTTP. We also discovered at least 2 major issues with the current
    implementation during a deep dive into performance and issue diagnosis, including:
      * If a ConnectError was thrown while creating a connection, a "permit" to the pool was permanently lost; get enough of them and the entire connection pool grinds to a halt :grimace:
      * Being a threadsafe structure, the pool had the poor choice of holding the pool lock for the duration of trying to get a permit _and making new connections_. This meant that in the
        case of a connection taking a long time to be made, we were essentially holding the rest of the pool hostage. This is totally unnecessary and can cause big performance issues in
        really heavy workloads where there's lots of contention on the pool for managing requests.
    The new implementation in ConcurrentUtilities.jl solves both these problems while being about 1/4th the LOC of the previous implementation. And it has unit tests! yay!
    All in all, I think #1033 and #1032 should both be mostly resolved by these changes/updates.
  * Relatedly, we're adjusting the API for connection pools to allow the user to pass in their _own_ connection pool to be used for that request (to check for a connection to reuse and to
    return the connection to after completion). A pool can be constructed like `HTTP.Pool(; max::Int)` and passed to any of the `HTTP.request` methods like `HTTP.get(...; pool=pool)` (see the sketch after this commit summary).
    HTTP has its own global default pool `HTTP.Connections.POOL` that it uses by default to manage connection reuse. The `HTTP.set_default_connection_limit!` will still work as long as it
    is called before any requests are made. Calling it _after_ requests have been made will be a no-op. The `connection_limit` keyword arg is now formally deprecated and will issue a warning
    if passed. I'm comfortable with a full deprecation here because it turns out it wasn't even really working before anyway (unless it was passed/used on _every_ request and never changed).
    So instead of "changing" things, we're really just doing a proper implementation that now actually works, has better behavior, and is actually controllable by the user.
  * Added a try-finally in `keepalive!` around our global IO lock usage, just for good housekeeping
  * Refactored `try_with_timeout` to use a `Channel` instead of the non-threaded `@async`; it's much simpler and seems cleaner
  * I refactored a few of the stream IO functions so that we always know the number of bytes downloaded, whether in memory or written to
    an IO, so we can log them and use them in verbose logging to give bit-rate calculations
  * Added a new `logerrors::Bool=false` keyword arg that allows doing `@error` logs on errors that may otherwise be "swallowed" when doing retries;
    it can be helpful to sometimes be able to at least see what kinds of errors are happening; also cleaned up our error handling in general so we don't lose backtraces which fixes #1003.
  * Added lots of metrics around various time spent in various layers, read vs. write durations, etc. These can be enabled, and stored in the request context, by passing `observelayers=true`
    This mostly resolves #1025 and #1019.
  * Fixed some missing keyword args that either weren't correct in the inline docs or weren't included in the client.md docs
  * Removed a few client-side layers (BasicAuth, DebugRequest, Canonicalize, etc.) since their implementations were _really_ trivial and moved their functionality into a single HeadersRequest
    layer where we now do all the header-related work during the request. This has the effect of _drastically_ reducing the exploding backtraces we now have when doing requests because of
    our "functional style" client-side layering.
  * Fixed #612 by ensuring `keepalive` is included in our `connectionkey` computation so only connections that specified `keepalive` will be reused if it's passed the same value on subsequent requests

* Update based on new Pool changes

* Updates

* cleanup

* Put back in exception unwrapping

* Address PR review
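To make the new client surface above concrete, here is a sketch pieced together from the commit summary (the endpoint and limits are placeholders; the `HTTP.Pool(; max::Int)` form is quoted from the summary):

```julia
using HTTP

# Resizing the global default pool only takes effect if done before the first request.
HTTP.set_default_connection_limit!(20)

# A dedicated pool, following the constructor form given in the summary; `max`
# caps how many connections this pool will hand out.
pool = HTTP.Pool(; max=10)

# Reuse that pool instead of the global HTTP.Connections.POOL, surface errors that
# retries would otherwise swallow (`logerrors`), and record per-layer timing
# metrics in the request context (`observelayers`).
resp = HTTP.get("https://example.com"; pool=pool, logerrors=true, observelayers=true)
```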
@quinnj
Member Author

quinnj commented Apr 27, 2023

Resolved in #1034. Folks will still have to manually switch to OpenSSL for now, but I think we're getting close to switching that on by default.

@quinnj quinnj closed this as completed Apr 27, 2023