Support batch requests in data layer #26024
Conversation
Size Change: +306 B (0%) Total Size: 1.19 MB
How should we handle it when the batch response returns a top-level error, i.e. not one where each request has a separate error? This wouldn't be sourced from the batch controller itself, but could be generated by other code.

For me it seems like the actual batching shouldn't happen at the

Right now the state of that is just in a global variable, but I think it might be able to actually live in the store so we could inspect the state of the batching specifically. I took a stab at that here: master...TimothyBJacobs:try/data-batch-requests

It is working for me against
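To illustrate the distinction, here are two made-up response shapes; neither comes from the batch controller, they are only meant to show what I mean by a per-request error versus a top-level one.

```js
// Hypothetical batch response where the batch itself succeeded, but one
// sub-request inside it failed: the error sits in that sub-response's body.
const perRequestError = {
	responses: [
		{ status: 200, body: { id: 1 } },
		{ status: 400, body: { code: 'rest_invalid_param', message: 'Invalid title.' } },
	],
};

// Hypothetical top-level failure: there is no per-request breakdown at all,
// for example because other code rejected the request before the batch ran.
const topLevelError = {
	code: 'rest_cookie_invalid_nonce',
	message: 'Cookie check failed',
};
```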
Hey 👋! I took a look at #25096 (@TimothyBJacobs) and I wanted to know a bit more about the reasoning behind this. If I understand correctly, you created an endpoint to validate the requests, and if the input is okay, then you have to make another request to commit the changes (e.g. update something)? If that's the case, shouldn't the validation be in the respective endpoint (e.g. update something)? Also, can you point me to where such validation happens? Does it call

I comment on this PR because I'd like to understand how this is connected to Navigation and what the other use cases for this are (both data layer and endpoint). Thanks!
No, it is implemented on the server side as one request to the batch endpoint. If
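For context, this is roughly what one request to the batch endpoint looks like from the client side; the path and field names here are my reading of #25096 and may not match the final endpoint exactly.

```js
import apiFetch from '@wordpress/api-fetch';

// A single HTTP request that carries several sub-requests; the server runs
// them together and returns one sub-response per sub-request.
apiFetch( {
	path: '/batch/v1',
	method: 'POST',
	data: {
		requests: [
			{ path: '/wp/v2/template-parts/1', method: 'PUT', body: { title: 'Header' } },
			{ path: '/wp/v2/template-parts/2', method: 'PUT', body: { title: 'Footer' } },
		],
	},
} ).then( ( batchResponse ) => console.log( batchResponse.responses ) );
```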
@TimothyBJacobs I think it's fair to assume the entire operation failed and so the global error could be used as a failed response for each partial request.
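In code, that could be as simple as the sketch below; `failWholeBatch` and the `{ reject }` shape are hypothetical, they just assume the batch keeps track of the pending promise callbacks for each pooled request.

```js
// Hypothetical helper: when the whole batch fails, reuse the top-level error
// as the rejection value for every request that was pooled into the batch.
function failWholeBatch( pendingRequests, topLevelError ) {
	pendingRequests.forEach( ( { reject } ) => reject( topLevelError ) );
}
```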
This is an interesting idea. A clear advantage is that it would prevent mixing batches from multiple registries. A disadvantage is that we'd be batching requests as they are before running all the middlewares. I wonder if there is some way to get the best of both worlds; I like the overall concept of keeping it in the state. (Side note: I noticed that in your branch the list of batches is a global variable, the same as in this PR.)
I don't quite agree here. While some things are easier to reason about when done explicitly, I am also a fan of sensible defaults. For example, having to list
Nice! Thank you for taking it for a spin! I have some ideas for how to connect both approaches; I'll think about it some more and hopefully propose a third PR tomorrow.
I think this doesn't affect us for the most part. Looking at https://github.com/WordPress/gutenberg/tree/master/packages/api-fetch/src/middlewares, the only one I think we need to account for is the
Yeah they are right now, but since it lives in the
That's true, but I think there are only two spots where this would happen in

If we are still worried, we could add it as a parameter to an
I think I'm probably not entirely understanding what the

Or are you saying they would call

If not, then how does the API caller get informed about the status of an entire batch?
I got caught up in #25859 today. I think I have an idea of how to fit it all together; I'll propose a PR tomorrow to show you what I mean, @TimothyBJacobs.
@TimothyBJacobs I explored state-based processing in #26164 (WIP). I thought that batch processing is a pretty generic concept, so there could be generic state machinery for handling it. Supporting batch API requests would then only be a matter of integrating that with either the entity store or the API_FETCH control. What do you think about this direction? Also cc @youknowriad.
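To give a rough idea of what I mean by generic state machinery, here is a sketch of a reducer and selector that track queued items per batch; this is only meant to convey the concept and is not the code in #26164.

```js
// Sketch of generic batch-processing state: items are queued under a batch id
// and a selector exposes whether a given batch is still being processed.
const DEFAULT_STATE = {};

function batches( state = DEFAULT_STATE, action ) {
	switch ( action.type ) {
		case 'ENQUEUE_ITEM':
			return {
				...state,
				[ action.batchId ]: {
					queue: [ ...( state[ action.batchId ]?.queue ?? [] ), action.item ],
					isProcessing: state[ action.batchId ]?.isProcessing ?? false,
				},
			};
		case 'START_PROCESSING':
			return {
				...state,
				[ action.batchId ]: { ...state[ action.batchId ], isProcessing: true },
			};
		case 'FINISH_PROCESSING':
			return {
				...state,
				[ action.batchId ]: { queue: [], isProcessing: false },
			};
	}
	return state;
}

function isBatchProcessing( state, batchId ) {
	return state[ batchId ]?.isProcessing ?? false;
}
```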
That looks like a great approach to me! I'm still curious about this bit to get an idea of how this will integrate with
I'm mainly interested in the impact this will have on callers, a
The REST API won't support GET batch requests in v1, so that's something to keep in mind, but I don't think we necessarily need to solve it in the first go at this.
Just to be strict, that's a

Also, I'm closing this PR in favor of #26164, which is based on a Redux store.
🚧 UNDER CONSTRUCTION 🚧
Data layer support for #25096
The idea is simple: apiFetch pools related requests, sends them all as one batch, and then resolves the promises using the response of the collective batch request.
To demonstrate the idea, I just added ``batchAs: `${ kind }-${ name }` `` to `actions.js`. In reality, this should come from the caller to distinguish between related and unrelated requests. Also, the `template-parts` controller doesn't support batch requests at the moment, so applying this PR will result in errors until it does. This PR is more of a launchpad for discussion than mergeable code.
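To make the pooling idea concrete, here is a rough sketch of how it could work. It is illustrative only: `enqueueBatchedFetch` is a hypothetical helper rather than this PR's actual code, and the batch response shape is an assumption rather than the final endpoint contract.

```js
import apiFetch from '@wordpress/api-fetch';

// Illustrative sketch: pool requests that share a `batchAs` key, flush the
// pool on the next microtask as a single request to the batch endpoint, and
// settle each caller's promise from the matching sub-response.
const pools = new Map();

function enqueueBatchedFetch( { batchAs, ...request } ) {
	if ( ! pools.has( batchAs ) ) {
		pools.set( batchAs, [] );
		// Let the rest of the current tick enqueue all related requests first.
		Promise.resolve().then( () => flush( batchAs ) );
	}
	return new Promise( ( resolve, reject ) => {
		pools.get( batchAs ).push( { request, resolve, reject } );
	} );
}

async function flush( batchAs ) {
	const pool = pools.get( batchAs );
	pools.delete( batchAs );

	let batchResponse;
	try {
		batchResponse = await apiFetch( {
			path: '/batch/v1',
			method: 'POST',
			data: { requests: pool.map( ( { request } ) => request ) },
		} );
	} catch ( error ) {
		// A top-level failure fails every pooled request.
		pool.forEach( ( { reject } ) => reject( error ) );
		return;
	}

	// Assumed response shape: one entry per sub-request, in the same order.
	batchResponse.responses.forEach( ( { status, body }, index ) => {
		const { resolve, reject } = pool[ index ];
		if ( status >= 200 && status < 300 ) {
			resolve( body );
		} else {
			reject( body );
		}
	} );
}
```

A caller would then pass its usual request options plus a `batchAs` key to `enqueueBatchedFetch()` and get back a promise that behaves like a normal `apiFetch` call.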