Convert readable stream pull() to promise-returning #272
Conversation
Force-pushed from 3b8ea56 to 63306b6
<li> Call-with-rethrow Call(<var>stream</var>@[[error]], <b>undefined</b>, «<var>pullResult</var>.[[value]]»).
<li> Return <var>pullResult</var>.
<li> Upon fulfillment of <var>stream</var>@[[pullingPromise]], call-with-rethrow
CallReadableStreamPull(<var>stream</var>).
I suggest a design like WritableStreamAdvanceQueue, i.e. set only one fulfillment callback on creation of pullingPromise, and in the callback, check whether CallReadableStreamPull was ever called since the last time we invoked underlyingSource.pull.
CallReadableStreamPull is invoked on any event that may cause shouldApplyBackpressure to be flipped. We shouldn't set a new fulfillment callback every time CallReadableStreamPull is called.
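Roughly, the suggested shape in plain JavaScript rather than spec prose (a sketch only: the property names stand in for the internal slots, shouldApplyBackpressure and errorReadableStream are stand-ins for the corresponding spec abstract operations, and the pull() call signature is assumed):

```js
// Illustrative sketch, not the spec's algorithm.
function callReadableStreamPull(stream) {
  // Invoked on any event that may flip shouldApplyBackpressure.
  if (stream.pullingPromise !== undefined) {
    // A pull is already in flight: just note that another attempt was
    // requested, rather than chaining a new fulfillment callback every time.
    stream.pullScheduled = true;
    return;
  }

  if (shouldApplyBackpressure(stream)) return; // stand-in for the spec's check

  stream.pullingPromise = Promise.resolve(
    stream.underlyingSource.pull(stream.enqueue, stream.close, stream.error)
  );

  // The single fulfillment callback, attached once when pullingPromise is created.
  stream.pullingPromise.then(
    () => {
      stream.pullingPromise = undefined;
      if (stream.pullScheduled) {
        stream.pullScheduled = false;
        callReadableStreamPull(stream); // re-evaluates backpressure, may pull again
      }
    },
    e => errorReadableStream(stream, e)
  );
}
```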
Force-pushed from 63306b6 to 8b2b726
Per @tyoshino's comment at #272 (comment)
@tyoshino updated to be more conservative about calling pull() per your feedback. PTAL.
@@ -344,7 +351,7 @@ Instances of <code>ReadableStream</code> are created with the internal slots des
<li> Set <b>this</b>@[[readyPromise]] and <b>this</b>@[[closedPromise]] to new promises.
<li> Set <b>this</b>@[[queue]] to a new empty List.
<li> Set <b>this</b>@[[state]] to <code>"waiting"</code>.
<li> Set <b>this</b>@[[started]], <b>this</b>@[[draining]], and <b>this</b>@[[pulling]] to <b>false</b>.
<li> Set <b>this</b>@[[started]], <b>this</b>@[[draining]], and <b>this</b>@[[pulLScheduled]] to <b>false</b>.
L -> l
5803279 looks good. Please think of this scenario:
Even if the …
Yeah. I tried to address this with the commit message. If the underlying source has more chunks available synchronously, it should just enqueue them. Even if the backpressure signal says the queue is full, it should use that to stop any flow, but still enqueue any chunks it already has. Whereas, if the underlying source can't get any more chunks without an asynchronous delay, there will be no problem anyway. Make sense? Going to merge but happy to make any updates if e.g. I am misunderstanding #272 (comment)
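A sketch of that guidance with a hypothetical push-style source (upstream, pause(), and resume() are invented names, and it is assumed, per the spec at this point, that enqueue() returns false once backpressure should apply):

```js
const stream = new ReadableStream({
  start(enqueue, close, error) {
    upstream.ondata = chunk => {
      // Enqueue whatever is already on hand, even if the queue is "full"...
      const wantsMore = enqueue(chunk);
      // ...and use the backpressure signal only to stop producing more.
      if (!wantsMore) upstream.pause();
    };
    upstream.onend = close;
    upstream.onerror = error;
  },
  pull() {
    // Called when the consumer drains the queue; resume the flow paused above.
    upstream.resume();
  }
});
```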
Per @tyoshino's comment at #272 (comment)
Force-pushed from 8b2b726 to f4e2492
Looks good. Thanks!
OK
See discussion in #185 for background.

This allows more convenient integration with promise-returning underlying source functions, removing the need for .catch(error) as seen in the modified example.

It also rationalizes the slightly-strange way in which pull() is mutexed based on whether enqueue() has been called: instead, it is now mutexed so as to not be called again until the previous iteration's promise has fulfilled. This gives more direct control to the underlying source, and allows it to deal more easily with cases where there are just no chunks to enqueue.

Although superficially it seems this may reduce performance for cases where underlying sources can enqueue many chunks synchronously, this is not actually the case, as can be seen from the modified test illustrating such a scenario. If an underlying source can enqueue multiple chunks synchronously, then it should just do so! It shouldn't wait to be pull()ed, as the previous test was doing.
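As a rough illustration of that contract — pull() returns a promise, a rejection errors the stream without any explicit .catch(error), and pull() is not called again until the previous promise has fulfilled — here is a hypothetical underlying source (getNextChunk() is an invented helper, and the exact pull() signature is assumed from the spec at this point):

```js
// Invented helper: resolves with the next chunk, or null when exhausted.
const chunks = ['a', 'b', 'c'];
function getNextChunk() {
  return Promise.resolve(chunks.length ? chunks.shift() : null);
}

const stream = new ReadableStream({
  pull(enqueue, close) {
    // No .catch(error) needed: if getNextChunk() rejects, the returned
    // promise rejects and the stream is errored automatically.
    return getNextChunk().then(chunk => {
      if (chunk === null) {
        close();
      } else {
        enqueue(chunk);
      }
    });
  }
});
```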