Alter HTTP requests to stop using chunked transfer encoding #1077
Conversation
Force-pushed b8399a7 to 629a937
Force-pushed 629a937 to d4ba501
@tomlongridge @twometresteve I've tagged this for a preliminary review - I'm planning on doing some more detailed performance testing on the changeset, but it'd be good to resolve any code changes before that gets too far along.
Changes LGTM - we just need to conclude discussions based on performance.
Also LGTM.
Edit: The struck-through point has now been implemented in #1081.
See 1 comment above.
Force-pushed d4ba501 to 52d6a30
Force-pushed 52d6a30 to d0b3b50
Updated PR description with details of perf testing - this is now ready for full review.
[full ci]
[full ci]
[quick ci]
[full ci]
[full ci]
Use MazeRunner v4
LGTM
[quick ci]
Various updates for flaking scenarios
Goal
Alters HTTP requests so that they no longer use chunked transfer encoding, and instead builds the request body up front. This fixes two problems:
Changeset
- Altered the `Content-Type` header to be included in the error/session API `DeliveryParams` object, for consistency with other headers
- Serialized the payload to a `ByteArray`, and generated the SHA-1 digest from this `ByteArray`, which circumvents any possible change in the payload
- Wrote the `ByteArray` to the HTTP request directly, rather than streaming it with chunked transfer encoding
- Used `setFixedLengthStreamingMode` to avoid creating an unnecessary buffer within `HttpUrlConnection`. Testing with unusual unicode characters revealed that this needed to be a `ByteArray` rather than a `String` to get the correct body length
- Caught `OutOfMemoryError` and attempted to persist the payload to disk, as this operation takes a smaller amount of memory and is likely to succeed if memory is constrained for large payloads
Testing
Relied on existing unit and E2E test coverage.
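For context, the byte-level delivery flow described in the changeset can be sketched in plain Java. The endpoint URL and JSON payload below are placeholder assumptions, not Bugsnag's actual code; only the `MessageDigest`, `getBytes`, and `setFixedLengthStreamingMode` usage mirrors the description:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class FixedLengthDelivery {
    public static void main(String[] args) throws Exception {
        // Non-ASCII characters make the UTF-8 byte length differ from the char
        // count, which is why the body length must come from the byte array,
        // not from String.length().
        String json = "{\"message\":\"caf\u00e9 \u00fcber \u2603\"}"; // placeholder payload
        byte[] body = json.getBytes(StandardCharsets.UTF_8);
        System.out.println("chars=" + json.length() + " bytes=" + body.length);

        // Generate the SHA-1 digest from the same bytes that will be sent,
        // so the digest cannot drift from the delivered payload.
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(body);
        System.out.println("digestBytes=" + digest.length); // SHA-1 is 20 bytes

        // Fixed-length streaming: the connection sends a Content-Length header
        // instead of chunked transfer encoding, and skips its internal buffer.
        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://example.com/notify").openConnection();
        conn.setDoOutput(true);
        conn.setFixedLengthStreamingMode(body.length);
        conn.setRequestProperty("Content-Type", "application/json");
        // Actually connecting and writing the body is omitted so this runs offline:
        // try (java.io.OutputStream os = conn.getOutputStream()) { os.write(body); }
    }
}
```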
Performance
For the default payload sent by the example app (around 17Kb), there is no discernible difference in performance for this changeset. Both v5.5.0 and the current changeset use around 0.2Mb to deliver one payload.
Datadog has confirmed that the average payload size received by the pipeline is 16-24Kb, so for the majority of use cases this change will have no performance impact.
Additional load testing
To perform stress testing, v5.5.0 and the current changeset were load tested using the following snippet which generates a 900Kb payload containing a very large amount of metadata:
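The original snippet was not captured in this page. The following is a hypothetical reconstruction of that kind of load generator - the key names and value sizes are illustrative assumptions, and in the real test the map would presumably be attached to an event as metadata before delivery:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LargeMetadataPayload {
    /** Builds roughly 900KB of string metadata; keys and sizes are assumptions. */
    public static Map<String, String> buildMetadata() {
        String filler = "0123456789".repeat(100); // 1,000 characters per value
        Map<String, String> metadata = new LinkedHashMap<>();
        for (int i = 0; i < 900; i++) {
            metadata.put("custom_key_" + i, filler); // 900 x 1,000 chars ~= 900KB
        }
        return metadata;
    }

    public static void main(String[] args) {
        int totalChars = buildMetadata().values().stream()
                .mapToInt(String::length).sum();
        System.out.println("metadata chars: " + totalChars); // prints 900000
    }
}
```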
The additional JVM memory required to deliver one payload was as follows:
Before changeset: 1.3Mb memory usage
With changeset: 2.9Mb memory usage
Further stress testing generated 20 of these payloads simultaneously on 5 different threads (an unrealistic scenario):
Before changeset: 12.5Mb memory usage
With changeset: 18.2Mb memory usage
The majority of allocations for both changesets were for `java/lang/String` and `java/util/concurrent/ConcurrentHashMap$Node`, so it seems that JSON serialization is the dominating factor for memory usage rather than buffering the payload. It's worth noting that manually triggering GC freed up short-lived references created during JSON serialization, so the JVM would trigger GC calls if the app was running low on memory when attempting to allocate a buffer.

It's also worth noting that the example app's payload size is around 17Kb, which is effectively the same size as the default buffer (16kb) in the previous implementation that uses streams. This suggests that for regular use without large amounts of metadata there should be no performance impact.