fix(incremental): do not ignore errors from filtered streams #4142

Closed · wants to merge 1 commit

Conversation

Contributor

@yaacovCR yaacovCR commented Jul 9, 2024

depends on #4141

Early execution may result in non-completed streams after publishing is completed -- these streams must be closed using their return methods. When this occurs, we should pass through any error that occurs in the clean-up function instead of swallowing errors.

Swallowing errors is a bad practice that could lead to memory leaks. The argument in favor of swallowing the error might be that because the stream was "executed early" and the error does not impact any of the returned data, there is no "place" to forward the error.

But there is a way to forward the error: on the next() call that returns { value: undefined, done: true } to end the stream. We will therefore send all the data appropriately and still be able to pass through an error. Servers processing our stream should be made aware of this behavior and put error-handling procedures in place that forward the data to clients when they see a payload with { hasNext: false } and then shield clients from any further errors. Servers could also postpone sending { hasNext: false } until the clean-up has been performed, and then optionally send information about the error in the extensions key, if that made sense. It would be up to the server!
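As a rough illustration of this server-side handling, here is a minimal sketch in TypeScript, assuming an async generator of payloads that carry a hasNext field; forwardIncrementalPayloads, sendToClient, and IncrementalPayload are illustrative names, not part of the graphql-js API:

```ts
// Sketch: forward incremental payloads to the client; once a payload with
// { hasNext: false } has been delivered, any error surfaced afterwards
// (e.g. from clean-up of a filtered stream) is kept server-side rather than
// forwarded, since the client already has all of its data.
interface IncrementalPayload {
  hasNext: boolean;
  // ...other incremental fields omitted for brevity
}

async function forwardIncrementalPayloads(
  subsequentResults: AsyncGenerator<IncrementalPayload, void, void>,
  sendToClient: (payload: IncrementalPayload) => Promise<void>,
): Promise<void> {
  let sawFinalPayload = false;
  try {
    for await (const payload of subsequentResults) {
      await sendToClient(payload);
      if (!payload.hasNext) {
        sawFinalPayload = true;
      }
    }
  } catch (error) {
    if (sawFinalPayload) {
      // Clean-up error delivered after { hasNext: false }: log it here.
      console.error('stream clean-up failed:', error);
    } else {
      // The stream failed before completing; let the error propagate.
      throw error;
    }
  }
}
```

A server preferring the second option could instead hold back the { hasNext: false } payload until clean-up completes and attach any error under the extensions key before sending it.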

@yaacovCR yaacovCR requested a review from a team as a code owner July 9, 2024 12:01
@yaacovCR yaacovCR requested review from robrichard and removed request for a team July 9, 2024 12:01
@yaacovCR yaacovCR added the PR: bug fix 🐞 requires increase of "patch" version number label Jul 9, 2024

netlify bot commented Jul 9, 2024

Deploy Preview for compassionate-pike-271cb3 ready!

🔨 Latest commit: d6377eb
🔍 Latest deploy log: https://app.netlify.com/sites/compassionate-pike-271cb3/deploys/668ec947943e6100087872d0
😎 Deploy Preview: https://deploy-preview-4142--compassionate-pike-271cb3.netlify.app

github-actions bot commented Jul 9, 2024

Hi @yaacovCR, I'm the @github-actions bot, happy to help you with this PR 👋

Supported commands

Please post these commands in separate comments, only one per comment:

  • @github-actions run-benchmark - Run benchmark comparing base and merge commits for this PR
  • @github-actions publish-pr-on-npm - Build package from this PR and publish it on NPM

@yaacovCR yaacovCR force-pushed the do-not-ignore-errors branch 4 times, most recently from ed9caee to c4c07f7 on July 9, 2024 at 18:22
Contributor Author

yaacovCR commented Jul 11, 2024

There are several cases (two top-level scenarios, six in all) in which we might want to call .return() on an asyncIterator being used to iterate through a list.

  1. If a non-nullable list item errors and we abort iteration (see the sketch after this comment for the basic pattern):
    1. without the use of the @stream directive, or with erroring indices prior to hitting initialCount
    2. without the use of the @stream directive, or with erroring indices prior to hitting initialCount, even when the entire list is not included in the response anyway because of null bubbling from somewhere else...
    3. with the use of the @stream directive, with errors within streamed items, i.e. for nonreconcilable stream items
    4. when streams are "filtered", i.e. when early execution results in asyncIterators that are encountered but not actually sent to the client
  2. When using incremental delivery and .return() is called explicitly on the subsequentResults iterator, we should forward that call to the underlying stream asyncIterator:
    1. for streams that would have been sent to the client
    2. for streams that would have been "filtered"

In each of these scenarios, we would have to think about how to forward errors from .return() to the client if we would like to do so. Right now, we are not tackling how to return errors from .return() at all with respect to (1), and that's probably how it should stay until users of graphql-js can weigh in on what this API should look like.
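For concreteness, here is a minimal sketch of the general pattern behind case 1, using a hypothetical helper that is independent of graphql-js internals (collectNonNullableItems and its error message are illustrative only):

```ts
// Iterate a source asyncIterator of list items and abort when a null item is
// produced for a non-nullable position, closing the source via .return().
async function collectNonNullableItems<T>(
  source: AsyncIterator<T | null>,
): Promise<Array<T>> {
  const items: Array<T> = [];
  while (true) {
    const { value, done } = await source.next();
    if (done) {
      return items;
    }
    if (value === null) {
      // Close the source iterator so it can run its clean-up. If .return()
      // itself rejects, that rejection is exactly the kind of error whose
      // destination is still an open question; here it simply propagates.
      await source.return?.();
      throw new Error('Cannot return null for a non-nullable list item.');
    }
    items.push(value);
  }
}
```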

@yaacovCR yaacovCR closed this Jul 11, 2024
@yaacovCR yaacovCR deleted the do-not-ignore-errors branch July 11, 2024 13:39