
Auto batch enhancer #2621

Closed · wants to merge 9 commits
Conversation

@phryneas (Member) commented Aug 21, 2022

Mostly for discussion:

This would be my alternative approach to #2599

Downside:

  • it's an enhancer - we would need to add that to the docs and probably have to add it to defaultEnhancers
  • it would be extra code (~470 bytes) to be shipped

Upsides:

  • it could be used by everyone outside of RTKQ
  • this batches a lot more things
  • it could even be made to batch with a larger "batching window"
  • it does not change the shape of existing actions (the alternative approach needs new actions, and I'm a bit hesitant about that, as it will break api.endpoint.foo.matchRejected etc.)
  • we could remove all references to react-redux batch with a similar result - or all batch-related code in general - for RTKQ users it would probably even mean less overall bundle size, not more.

@phryneas phryneas changed the base branch from master to v1.9-integration August 21, 2022 09:27

const subscribe: typeof store.subscribe = (listener) => {
  const wrappedListener: typeof listener = () => notifying && listener()
  const unsubscribe = store.subscribe(wrappedListener)
@phryneas (Member, Author) commented:

This still calls through to store.subscribe so all the sanity checks there still apply. We only keep a Set copy of the subscribers that were added successfully to also be able to notify them on our own.

@codesandbox-ci bot commented Aug 21, 2022

This pull request is automatically built and testable in CodeSandbox.

To see build info of the built libraries, click here or the icon next to each commit SHA.

Latest deployment of this branch, based on commit 81e6c1b:

  • Vanilla
  • Vanilla Typescript
  • rsk-github-issues-example
  • @examples-query-react/basic
  • @examples-query-react/advanced
  • @examples-action-listener/counter

@netlify bot commented Aug 21, 2022

Deploy Preview for redux-starter-kit-docs ready!

  • 🔨 Latest commit: 7384e3d
  • 🔍 Latest deploy log: https://app.netlify.com/sites/redux-starter-kit-docs/deploys/6301fa93a0aeec0008230b70
  • 😎 Deploy Preview: https://deploy-preview-2621--redux-starter-kit-docs.netlify.app

@markerikson (Collaborator) commented:

Hmm. Initial thoughts:

  • I know it's a draft, but a bunch of unit tests are failing, including the one I just added for "minimizes dispatches when components ask for the same data"
  • Conceptually, there's a bunch of ways to "batch" updates with Redux. I wrote a post about this a couple years ago: https://blog.isquaredsoftware.com/2020/01/blogged-answers-redux-batching-techniques/ .
  • This one seems most similar to redux-batched-subscribe, -ish, although I guess what it's actually doing is something like: "when we see an action with the special batching meta tag, stop doing any notifications this event loop tick and do them on a timeout"? Erm... maybe not quite that. I'm finding that logic hard to follow, tbh.
  • As best as I understand this, I'm not sure this is necessarily the best approach we'd want to build in for "batching" in general. Also uneasy about having to add this to the store by default.
  • I don't think the other PR really made "breaking" changes to matchRejected. The only case the middleware is looking for and batching together is specifically the rejected case where it's indicating it's due to a request entry already existing in the store. If a user actually calls endpoint.matchRejected in their own code, I would assume that they're looking for cases where the API call actually failed, which would still be dispatched and processed as usual. (Plus, to me it's an implementation detail that we happen to be leveraging the rejected action for adding another subscription entry, rather than part of the public API.)

We can talk more about this when you've got time, but for 1.9 my inclination is to focus on addressing the known perf issue with RTKQ loading large lists with the middleware approach I added, and maybe consider a larger discussion about batching for 2.0.

@phryneas (Member, Author) commented Aug 21, 2022

Hmm. Initial thoughts:

  • I know it's a draft, but a bunch of unit tests are failing, including the one I just added for "minimizes dispatches when components ask for the same data"

From how I've interpreted the tests (I've played around with a few of them), there are now two categories of tests that are failing:

  • tests that now do even fewer renders
  • tests that wait for a "loading" state in the UI, while the UI skips the "loading" state and immediately renders the finished state (essentially: the loading state would have been shown for such a short time that it was skipped)

Yes, it's probably a "selective" version of redux-batched-subscribe - actions happen immediately, but in some cases a subscriber notification may be throttled.

although I guess what it's actually doing is something like: "when we see an action with the special batching meta tag, stop doing any notifications this event loop tick and do them on a timeout"? Erm... maybe not quite that. I'm finding that logic hard to follow, tbh.

Not "stop doing any" - an action without the "batching" meta tag will always immediately notify subscribers.
Lemme try to explain.

The idea is quite similar to a "non-branching suspense": we have low-priority updates and immediately-flushing updates. (In the next sentences, you can exchange "rerender" with "notify subscribers"; it just gets weird to type.)
Low-priority updates are actions that are marked as "batching". When one is encountered, changes are made to the store (the dispatch happens), but the UI does not rerender immediately.
A timer is started to do that rerender soon. (Unfortunately, queueMicrotask is not cancellable.)
Other "batching" actions in the meantime will not start a new timer - that future update is already scheduled.
Normal actions are "high priority". No update without the tag will ever be delayed or skipped in any way. They will always render immediately, using the normal store subscription mechanism.
If a delayed update was scheduled, it is cancelled again - after all, those changes have already been applied to the store and will be drawn together with the current update.

  • As best as I understand this, I'm not sure this is necessarily the best approach we'd want to build in for "batching" in general. Also uneasy about having to add this to the store by default.

I'm really open to other "generic" approaches - but I'd love something that is transparent on the reducer/devtools/middleware side.

  • I don't think the other PR really made "breaking" changes to matchRejected. The only case the middleware is looking for and batching together is specifically the rejected case where it's indicating it's due to a request entry already existing in the store. If a user actually calls endpoint.matchRejected in their own code, I would assume that they're looking for cases where the API call actually failed, which would still be dispatched and processed as usual. (Plus, to me it's an implementation detail that we happen to be leveraging the rejected action for adding another subscription entry, rather than part of the public API.)

So far the implementation of matchRejected is matchRejected: isAllOf(isRejected(thunk), matchesEndpoint(endpointName)). Before the change, those actions matched and all of those reducers and listeners were executed; after it, they won't be. So yes, it's breaking. I personally can't imagine someone having a good use for the actions that are now being batched away, and most people probably filtered them out - but I'm still reluctant to just have them "disappear".
My biggest problem with the approach, though: it won't scale. We will definitely not be able to use the same array-action-batching approach for the "pending" actions, which make up a similar chunk of dispatches as the "rejected" actions did. We would have the same problem we already kind of have here: the matchPending matcher would suddenly stop working.
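The scaling concern can be illustrated with a self-contained sketch (the action types and shapes are made up for illustration; this is not RTKQ source):

```typescript
type Action = { type: string; payload?: unknown }

// A matcher written against individual pending actions.
const matchPending = (action: Action) =>
  action.type === "api/executeQuery/pending"

// One action per request: the matcher fires for each of them.
const singleActions: Action[] = [
  { type: "api/executeQuery/pending" },
  { type: "api/executeQuery/pending" },
]

// The same two requests folded into one hypothetical batched action:
// the matcher never fires, even though the same work happened.
const batchedAction: Action = {
  type: "api/executeQuery/pendingBatch", // hypothetical batched type
  payload: [{ requestId: "1" }, { requestId: "2" }],
}
```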

We can talk more about this when you've got time, but for 1.9 my inclination is to focus on addressing the known perf issue with RTKQ loading large lists with the middleware approach I added, and maybe consider a larger discussion about batching for 2.0.

I want to avoid a situation where, in the next version, we have to go back and add a second approach that maybe wouldn't have broken current behaviour in the first place - which would leave us with the choice of un-breaking the rejections by adding yet another mini-breaking change.

Another option could be to ship something like this (it definitely doesn't have to be this exact approach, but my gut feeling tells me it's not possible at the reducer level) as an optional enhancer - we have only gotten very few performance complaints in the first place. Maybe it would be okay to ship something that is not on by default and tell people "you can also activate this enhancer".
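If it shipped as an opt-in enhancer, usage could look roughly like this (a configuration sketch, not runnable as-is: autoBatchEnhancer is a hypothetical name for this PR's enhancer, and ./services/api is a placeholder path):

```typescript
import { configureStore } from "@reduxjs/toolkit"
import { autoBatchEnhancer } from "@reduxjs/toolkit" // hypothetical export
import { api } from "./services/api" // placeholder path

const store = configureStore({
  reducer: { [api.reducerPath]: api.reducer },
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware().concat(api.middleware),
  // Users who hit the perf issue opt in explicitly:
  enhancers: (defaultEnhancers) => defaultEnhancers.concat(autoBatchEnhancer()),
})
```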

@markerikson (Collaborator) commented Aug 22, 2022

Ok, the low/high-pri thing sorta makes sense.

Maybe going off into the weeds here, but could the setTimeout be replaced with queueMicrotask + an if (shouldActuallyDoTheWork) check?

I think I missed that this PR was trying to apply the batching behavior to pending checks also. Lemme take a look at that.

That said, I'm still not convinced that the change to those rejected actions really qualifies as "breaking". I don't think we have that particular aspect of our behavior formally documented anywhere, and I don't see it as meant to be part of our public API, other than the fact that actions show up in the DevTools. It's the difference between "this behavior is an observable part of the implementation" vs "this is how the user is intended to interact with the API". (I know I've commented in the past about not liking those actions showing up in the first place, or actions along those lines - can't remember if it was these "cache entry exists, subscribe anyway" actions I was thinking of at the time.)

I'll have to think about this one some more.

Still not happy with the options of "add this by default and everyone pays for the bundle size" vs "don't add it by default, and RTKQ users need to know to add this to get better perf".

@phryneas (Member, Author) commented:

Ironically, my proof of concept had been applied to almost all actions - except for the condition rejection, as that one did not accept a meta.
I have hacked it in for now so we can experiment around a bit with this.

I've also experimented with the test you added for this case - the comparison numbers are comparisons to "nothing before", not to your variant. As you see, the number of subscription triggers goes down massively this way as well, probably with a similar performance impact.

If you still have those measuring scenarios around that you did with the other solution - could you maybe play around with it a bit when you have time?

@@ -85,6 +86,12 @@ function updateMutationSubstateIfExists(
}

const initialState = {} as any
const prepareAutoBatched =
@phryneas (Member, Author) commented:

Just a thought: We could actually export this helper.
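For reference, such a helper could be as small as this sketch (the meta flag name is an assumption for illustration): a prepare-callback factory that tags every action created by a slice reducer with the batching meta flag.

```typescript
// Assumed meta flag name, for illustration only.
const SHOULD_AUTOBATCH = "RTK_autoBatch"

// Returns a `prepare` callback that passes the payload through and
// attaches the batching flag to the action's meta.
const prepareAutoBatched =
  <T>() =>
  (payload: T) => ({
    payload,
    meta: { [SHOULD_AUTOBATCH]: true },
  })

// Usage with createSlice's prepare notation (shape only):
// updateSubscription: {
//   reducer(state, action: PayloadAction<Subscription>) { /* ... */ },
//   prepare: prepareAutoBatched<Subscription>(),
// },
```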

@markerikson (Collaborator) commented:

Superseded by #2846 .
