SegmentPool improvements #352
core/jvm/src/SegmentPool.kt (Outdated)
private val HASH_BUCKET_COUNT_L2 = (HASH_BUCKET_COUNT / 2).coerceAtLeast(1)

private val SECOND_LEVEL_POOL_TOTAL_SIZE =
    System.getProperty("kotlinx.io.pool.size.bytes", "0").toInt().coerceAtLeast(0)
Discussed with @e5l that it makes sense to bump the default size up to several megabytes (probably 4 MB).
Not sure if it should be done on Android.
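A minimal sketch of how such a platform-dependent default could look; the Android check via the java.vm.name property is an assumption for illustration, not necessarily how the PR detects it:

// Sketch only: detecting Android by the VM name is an assumed mechanism.
private val isAndroid: Boolean =
    System.getProperty("java.vm.name")?.contains("Dalvik") == true

private val DEFAULT_SECOND_LEVEL_POOL_SIZE: String = when {
    isAndroid -> "0"  // keep the second-level pool disabled on Android
    else -> "4194304" // 4 MB on regular JVMs
}

private val SECOND_LEVEL_POOL_TOTAL_SIZE =
    System.getProperty("kotlinx.io.pool.size.bytes", DEFAULT_SECOND_LEVEL_POOL_SIZE)
        .toInt().coerceAtLeast(0)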
What really puzzles me here is how it affects end users.
It seems like the characteristics of the IO (which underlies a lot of other functionality) will depend on whether or not Ktor is used in the project.
I don't mind it as a palliative intermediate solution and as ground for further benchmarking, but definitely not as something to keep as is.
I checked how BufferReadWriteByteArray performs depending on the presence of the second-level pool and the working set size.
If the WSS fits within the L1 pool, there's no difference.
If the WSS doesn't fit into the L1 pool, it might be beneficial not to use the L2 pool (i.e. it's faster without it) up to some threshold size, but beyond that the L2 pool boosts performance.
It's hard to judge by looking at synthetic benchmarks only, though, as they provide little to no evidence regarding GC-related effects on overall performance.
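A hypothetical JMH-style sketch of such a working-set sweep (this is not the repository's BufferReadWriteByteArray benchmark; the Buffer write/readByteArray calls are assumed from the public kotlinx.io API):

import kotlinx.io.Buffer
import kotlinx.io.readByteArray
import org.openjdk.jmh.annotations.*

@State(Scope.Benchmark)
open class WorkingSetSweepBenchmark {
    // Working set size in bytes; values chosen to land below, around,
    // and above the pool capacities.
    @Param("65536", "1048576", "8388608")
    var workingSetSize: Int = 0

    private lateinit var data: ByteArray

    @Setup
    fun setup() {
        data = ByteArray(workingSetSize)
    }

    @Benchmark
    fun writeThenRead(): ByteArray {
        val buffer = Buffer()
        buffer.write(data)            // takes segments from the pool
        return buffer.readByteArray() // drains the buffer, recycling segments
    }
}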
We can always revert it and also provide some alternative mechanism to adjust the pool size, so those who really demand it (Ktor) could tune it when needed.
Round of cosmetics to speed up the convergence, still digging into the segment pool details
Co-authored-by: Vsevolod Tolstopyatov <qwwdfsad@gmail.com>
else -> "4194304" // 4MB | ||
} | ||
|
||
private val SECOND_LEVEL_POOL_TOTAL_SIZE = |
When is the first usage of class SegmentPool?
Is there a high chance of programmatically changing this value before its first reading?
when is the first usage of class SegmentPool?
On the first write into a buffer.
Is there a high chance of programmatically changing this value before its first reading?
I think it's achievable by updating the system property before SegmentPool is initialized.
I think it's achievable by updating the system property before SegmentPool is initialized.
if this is acceptable, then fine
I don't see any severe consequences of that. That's the only kind of dynamic configuration (#352 (comment)) we can provide for now. :)
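A minimal sketch of that kind of tuning, assuming the property only needs to be set before the first write into any Buffer (the 8 MB value is just an example):

import kotlinx.io.Buffer

fun main() {
    // Must run before SegmentPool is initialized, i.e. before the first write
    // into any Buffer, because the property is read once during class initialization.
    System.setProperty("kotlinx.io.pool.size.bytes", (8 * 1024 * 1024).toString())

    val buffer = Buffer()
    buffer.writeByte(1) // first write: SegmentPool is loaded and the property is read
}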
Currently, the SegmentPool implementation on JVM works more like a cache than a pool: it has a relatively small fixed size, segments are taken from the pool on a best-effort basis, and it's easy to make a segment non-recyclable (meaning that it will never be returned to the pool).
This behavior is totally fine for many applications, but it's not for Ktor.
This PR addresses several issues:
- Sharing a segment no longer makes it permanently non-recyclable: once all copies are released via Segment.recycle, it'll be returned to the pool.
- An optional second-level pool whose total size is controlled by the kotlinx.io.pool.size.bytes system property and is 0 by default. All failed attempts to take or recycle segments in the first tier use the second tier as a fallback. Unlike the first tier, if the second tier's bucket is empty, take will scan all other buckets. The scan is also performed on recycle to a full bucket.

All these changes reduce the allocation count by combining more precise work with the pool and an optionally extensible pool size.
The two-level design helps avoid the performance penalties induced by the scan technique employed by the second tier when that tier is not used and, at the same time, allows using the same sharding technique as in the first tier, enhanced with bucket stealing.
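A simplified sketch of the take path under such a two-tier design (names, sizes, and the thread-based sharding are illustrative assumptions, not the actual SegmentPool code): the first tier only checks its own bucket, while the second tier falls back to scanning, i.e. stealing from, the remaining buckets.

import java.util.concurrent.atomic.AtomicReferenceArray

// Illustrative stand-in for the pooled segment type.
class Segment { var next: Segment? = null }

private const val L1_BUCKET_COUNT = 8
private const val L2_BUCKET_COUNT = 4

// Each bucket is the head of a simple linked stack of free segments.
private val firstTier = AtomicReferenceArray<Segment?>(L1_BUCKET_COUNT)
private val secondTier = AtomicReferenceArray<Segment?>(L2_BUCKET_COUNT)

// Buckets are sharded by thread to reduce contention.
private fun l1Index() = (Thread.currentThread().id % L1_BUCKET_COUNT).toInt()
private fun l2Index() = (Thread.currentThread().id % L2_BUCKET_COUNT).toInt()

fun take(): Segment {
    // First tier: look only at this thread's bucket.
    popFrom(firstTier, l1Index())?.let { return it }

    // Second tier: start at this thread's bucket, then steal from the others.
    val start = l2Index()
    for (offset in 0 until L2_BUCKET_COUNT) {
        popFrom(secondTier, (start + offset) % L2_BUCKET_COUNT)?.let { return it }
    }
    return Segment() // both tiers exhausted: allocate a fresh segment
}

private fun popFrom(tier: AtomicReferenceArray<Segment?>, index: Int): Segment? {
    while (true) {
        val head = tier.get(index) ?: return null
        if (tier.compareAndSet(index, head, head.next)) {
            head.next = null
            return head
        }
    }
}

Presumably the real pool also tracks per-bucket byte limits and guards against races such as ABA on the bucket heads; the sketch ignores that bookkeeping for brevity.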