
Perf: Use fewer slices in while loop #2

Merged — 1 commit into dignifiedquire:master on Sep 12, 2017

Conversation

@Beanow Beanow commented Sep 11, 2017

Should boost performance significantly for larger incoming buffers.
This fixes the issue for me at ipfs-inactive/js-ipfs-unixfs-engine#187

The way I implemented my test, a single buffer of 57621555 bytes comes in and gets split into ~220 blocks. By not slicing the remainder to a new buffer 219 times it saves a lot of buffer allocation churn to process a buffer as large as this.
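The optimisation can be sketched as follows. This is a hypothetical reconstruction, not the module's actual internals (the function and variable names are illustrative): the queued buffers are concatenated a single time per write, and the while loop then walks the result with an offset instead of re-slicing the remainder into a fresh buffer on every iteration.

```javascript
// Illustrative sketch: split a queue of buffers into fixed-size
// blocks, concatenating the queue only once instead of reassigning
// b = b.slice(size) on every pass through the loop.
function drainBlocks (buffered, bufferedBytes, size) {
  const blocks = []
  // Concatenate the queued buffers a single time.
  const b = Buffer.concat(buffered)
  let offset = 0
  // Advance an offset over the one big buffer; each slice here is a
  // cheap view, and the large remainder is never copied per iteration.
  while (bufferedBytes >= size) {
    blocks.push(b.slice(offset, offset + size))
    offset += size
    bufferedBytes -= size
  }
  // Whatever is left goes back into the queue as a single view.
  const remainder = bufferedBytes > 0 ? [b.slice(offset)] : []
  return { blocks, remainder, bufferedBytes }
}
```

For example, draining a 250-byte queue with a block size of 100 yields two 100-byte blocks and a 50-byte remainder, with only one `Buffer.concat` call regardless of how many blocks come out.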

@@ -30,13 +30,15 @@ module.exports = function block (size, opts) {
bufferedBytes += data.length
buffered.push(data)

var b = Buffer.concat(buffered)
dignifiedquire (Owner) commented:
we can avoid this allocation as well, I think, by just queuing new buffers directly from the array

dignifiedquire (Owner) commented:

Actually we can't, never mind. The overhead of slicing into the buffers probably wouldn't be worth it.

Beanow (Contributor, Author) commented:

Hm. It's probably only worth looking at with a specific test case that suffers from it.

In general I assume the new worst-case scenario is when, on average, you need 2 buffers from your source to produce one output buffer, and you're receiving a high volume of buffers. For example, 130 B buffers in and 250 B buffers out. For every pair it would concat once, then slice twice.

The only improvement I can see is to track the buffer offsets as well and keep a little more data in the cache than strictly necessary: never using slice or concat at all, but allocating the full output buffer at once and doing (partial) copies into it.
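The approach described above could look roughly like this. It is a hypothetical sketch only, using illustrative names (`fillBlock`, `queue`, `headOffset`) rather than anything from the module: the incoming buffers stay in a queue with a read offset into the head buffer, and each output block is built by copying directly into a freshly allocated buffer, with no `slice` or `concat` of the queue at all.

```javascript
// Illustrative sketch: build one fixed-size output block by copying
// partial source buffers into it. The queue is consumed in place;
// headOffset tracks how far into queue[0] we have already read.
function fillBlock (queue, headOffset, size) {
  const out = Buffer.alloc(size)
  let written = 0
  while (written < size && queue.length > 0) {
    const head = queue[0]
    const available = head.length - headOffset
    const toCopy = Math.min(available, size - written)
    // Copy a partial (or whole) source buffer into the output block.
    head.copy(out, written, headOffset, headOffset + toCopy)
    written += toCopy
    headOffset += toCopy
    if (headOffset === head.length) { // head fully consumed
      queue.shift()
      headOffset = 0
    }
  }
  return { block: out, headOffset, written }
}
```

In the 130 B-in / 250 B-out example from the comment, two queued buffers would yield one full 250-byte block, with the remaining 10 bytes of the second buffer left in the queue (tracked by the returned `headOffset`) rather than sliced into a new buffer.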

@dignifiedquire dignifiedquire merged commit a002ea7 into dignifiedquire:master Sep 12, 2017
@dignifiedquire (Owner)

Thanks! I will do a release tomorrow.

@daviddias (Collaborator)

@Beanow did you see perf improvements?

@Beanow (Contributor, Author) commented Oct 14, 2017

@diasdavid night and day difference. As per ipfs-inactive/js-ipfs-unixfs-engine#187

@Beanow Beanow deleted the perf-fix branch October 14, 2017 07:56