Dancing between state and effects - a real-world use case #15240
What do you think of this approach (codesandbox), based on your "Option 2" implementation? It has the following characteristics:

The main reason that this is possible is that the onSave callback is defined as:

let onSave = useCallback(
item => {
setItems(items.map(it => (it.id === editingId ? item : it)));
// start any async stuff here
},
[items, editingId]
);

The main problem with this (that you probably noticed) is that the callback depends on items and editingId, so it is recreated whenever either of them changes. There are two ways to fix this:

The second approach is what I took. The idea is straightforward: the only state update happens synchronously inside the callback, and any async logic can go after it. It's possible that this example is missing some of the complexity that prevents you from using this approach (or a similar approach). In that case, I think it would be helpful to figure out what those details are. They should give a better idea of the "right" path forward for supporting these cases.
(This will be a quick reply, I gotta run soon.) I had thought of that but forgot to mention it in my post. It is an option if all you need is the latest state.
@Nathan-Fenner
With option two you can actually save the ref of the callback instead of the individual states, so the callback always refers to all the latest state and the ergonomics are slightly nicer. Here I introduced useStableCallback: https://codesandbox.io/s/k0zqrkrojo. It still suffers from the problem of losing the first save if you call it twice, but I reckon you could combine this and useReducer.
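A minimal sketch of what such a hook could look like (my reconstruction, not necessarily the sandbox's exact code):

import { useRef, useEffect, useCallback } from 'react';

function useStableCallback(callback) {
  const ref = useRef(callback);
  // the ref only catches up during the commit phase, after each render
  useEffect(() => {
    ref.current = callback;
  });
  // stable identity; always delegates to the latest callback
  return useCallback((...args) => ref.current(...args), []);
}

Updating the ref in an effect is also exactly why two quick calls can lose the first save: the ref doesn't catch up until the next commit.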
Are there any drawbacks to functional updates for state?

let onSave = useCallback(
item => {
setItems(currentItems => currentItems.map(it => (it.id === item.id ? item : it)));
},
[]
);
That's true, I think I remember seeing this now. It's just as valid a solution as option 2. The only difference I can think of is the behavior when you are doing async work in the callback. Later in my post I mentioned that my real callback looks something like this:

let onSave = useCallback(async item => {
setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
// save to server
let changes = await post('/save-item', { item })
setItems(applyChanges(latestItems.current, changes))
}, []);

I need to act on the result from the server and apply changes to local state from it. Think of adding a new item - a common pattern is to assign it a temporary id; the server rewrites the id to a real one and returns it to you, which you need to update locally. My code will reference the latest items, whereas a callback that simply closed over items would see stale data. However, you can solve this by using functional updates:

let onSave = useStableCallback(async item => {
setItems(items => items.map(it => (it.id === item.id ? item : it)));
let changes = await post(...)
setItems(items => applyChanges(changes, items))
});

This also has the benefit of never overwriting updates, even if they fired off sequentially. You get the same behavior as dispatching multiple actions at once with a reducer. I'm curious if the React team (or anybody) sees anything wrong with the above solution. If not, it seems like something like useStableCallback would be worth providing.
The problem isn't updating state; the options are trying to solve this problem: "We need our side effect to run in the commit phase." We need to run side effects in addition to updating the state and make sure they stay in sync. As mentioned above, functional updates do help with not losing updates, but they don't help with effects.
If you're already using reducers, the answer is relatively straight-forward with the same technique, and you don't need refs at all:

let onSave = useCallback(async item => {
dispatch({ type: 'save-item', item })
let changes = await post('/save-item', { item })
dispatch({ type: 'apply-updates', changes })
}, []);

For my use case I think driving side effects via callbacks works well. Not sure if there is another strong use case for driving them via state changes themselves (meaning the ability to emit effects from reducers).
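For completeness, a reducer that would pair with those two actions might look like this (a sketch; applyChanges is the helper mentioned elsewhere in this thread):

function reducer(items, action) {
  switch (action.type) {
    case 'save-item':
      // optimistic local update
      return items.map(it => (it.id === action.item.id ? action.item : it));
    case 'apply-updates':
      // merge the changes the server sent back
      return applyChanges(action.changes, items);
    default:
      return items;
  }
}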
Actually, there is a problem (thinking out loud here, sorry for the multiple posts). Just remembered that in my real app, saving is actually like this:

async function onSave(item) {
setItems(items.map(it => (it.id === item.id ? item : it)));
let diff = getDiff(items.find(it => it.id === item.id), item)
let changes = await post('/save-item', { diff })
setItems(applyChanges(changes, items))
}

It diffs the new item against the existing one and only sends the changed fields off to the server. So it needs access to the current items - the functional updates aren't enough. Option 2 or useStableCallback would still work, but what could happen is that a second save fires before the ref catches up, so the diff is computed against stale items. If reducers had a way to emit events, the advantage is you'd always be in sync with the existing state and have full access to it, knowing that it's always up-to-date. Not sure if that's a good route to go though.
Also see the section "Another small use case for triggering effects". I need access to the current state to derive a new value and then fire an effect. It's a clearer use case for needing to wait for a state change to happen before doing something.
Just on the point of maintaining callback identity for synchronous state modifiers, I think my library use-methods handles this nicely:

const methods = state => ({
saveItem(item) {
Object.assign(state.items.find(it => it.id === item.id), item);
},
editItem(id) {
state.editingId = id;
}
});
const initialState = {
items: itemData,
editingId: null
};
let [state, { saveItem, editItem }] = useMethods(methods, initialState);

Note how saveItem and editItem keep a stable identity across renders. As for the best way to handle the async/sync interplay aspect, it's a great question and I'm glad you're asking it!
I've been working on a library called tiny-atom for managing state/effects in React. Its implementation and usage became nicer with hooks. I wanted to share it here since I think it does solve your specific problem, and maybe it can be useful when thinking about other, possibly "closer to the metal" solutions. Also, I'm just looking for feedback and for people to poke holes.

The goal with tiny-atom was always collocating state/effect logic, the way you're describing it in Option 2. Normally, tiny-atom is all about global state management (ala redux), but I've recently been thinking about adding a local, per-component flavor of it.

Example: https://codesandbox.io/s/71lrr5rwq

const [state, { onEdit, onSave }] = useLocalAtom(() => ({
state: {
items: itemData,
editingId: null
},
onEdit({ get, set }, editingId) {
set({ editingId });
},
async onSave({ get, set }, item) {
set({
items: get().items.map(it => (it.id === item.id ? item : it)),
editingId: null
});
await new Promise(resolve => setTimeout(resolve, 1000));
item = { ...item, name: item.name + " simulating server update" };
set({ items: get().items.map(it => (it.id === item.id ? item : it)) });
}
}));

It's a bit like using useReducer, except the actions can perform async work and set state as they go. The implementation of the library itself is fairly small. I do think there are potentially some issues with concurrent mode in the way my hooks are implemented, but there isn't much information about concurrent mode, so I haven't been worrying too much about it.
This isn't really true, as you can get the items by passing a function to setItems. You could trigger side effects from there as well, and it has the benefit of not relying on mutation.

let onSave = useCallback(
item => {
setItems(items => items.map(it => (it.id === item.id ? item : it)));
},
[]
);
@ntkoopman You cannot perform side effects in that callback, it must be a pure function that returns new state. The thing we are missing is the second callback that this.setState takes in class components.
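For reference, that class API - this.setState accepts a second callback that runs only after the update has been committed:

this.setState(
  state => ({ items: updateItems(state.items) }), // must stay pure
  () => {
    // runs after the state update is applied; hooks have no direct equivalent
  }
);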
I think this is actually a quite old (and yet not fully solved) problem. The gist is:

Now, before continuing, here is one important thing to realize:

Ok, let's say we are willing to push the state up. Therefore we need something more powerful, with access to the latest state:

const [state, setState] = React.useState(initialState);
const stateRef = React.useRef(state);
stateRef.current = state;
const dispatch = React.useCallback(fn => {
ReactDOM.unstable_batchedUpdates(() => {
fn({ state: stateRef.current, setState, xhr, dispatch });
});
}, [])
// Usage
dispatch(({ state, setState, dispatch }) => { /* shoot yourself in the foot */ });

But in all these cases the usage is unidiomatic, tricky and often dangerous (less dangerous in redux, because the state is synchronous and isolated). For example:
dispatch(({ state, setState, dispatch }) => {
// Extract required state here
const {
saving,
type,
} = state;
// Bail out based on current state
if (saving) return;
// Update state
setState(state => ({
...state,
saving: true,
}));
// "state" is obsolete from this point
// Trigger effect
xhr.post('/api')
.send({
type, // use current state
})
.then(() => {
// Chain another dispatch (it has access to fresh state)
dispatch(action)
})
})

It's quite hard to come up with an API which has the same power, is safe, and is easy to use at the same time. I recommend this blogpost, there are a ton of interesting references in it.
@urugator why not use a functional update (setState(state => ...))?
@KidkArolis Well yes, it allows you to obtain fresh state in between the async calls without the need to force an update and await it.
Just to clarify. I don't suggest it as a solution. I am just confirming that the problem exists, the use cases are real (and common, actually), that it's not a hook-specific problem, and that there isn't a simple bulletproof solution (yet).
Why? The documentation does not mention this. |
setState(state => {
if (state.something) {
xhr.post()
.then(() => {
// we are in future
setState(evenNewerState => {})
})
}
// back to past
return newState;
})
It becomes idiomatic if you use it enough. I'm not particularly defending this, but it doesn't seem very difficult to understand to me.

async function commitChanges(previous, item) {
let diff = getDiff(previous, item);
let changes = await post('/save-item', diff);
setItems(items => applyChanges(changes, items));
}
function onSave(item) {
setItems(items => {
let previous = items.find(it => it.id === item.id);
commitChanges(previous, item);
return items.map(it => (it == previous ? item : it));
})
}

The second one is not really a problem. If you are calling setState synchronously from setState, you can refactor the code. Since setState is local to your component, this should not be too hard.
I don't think it will. The updater function is defined as a synchronous function which takes state and returns a new one, which makes it easy to reason about, compose, test, etc.

If you move state up, you also have to move all these functions up, so that they have access to setState.
@jlongster, you wrote:

You also mentioned that the changes to the saved item could come from the server. Here it sounds like the posts to the API must be done sequentially, i.e. a second post should not be allowed until after the first post has completed. If this is the case, then maybe some other method of effect coordination is required instead of the component lifecycle?
@wmaurer In my scenario, the server is guaranteed to reply in the order that requests come in, so it's totally fine for a second request to fire off before the first one completes. It's safe to always fire a save whenever the state of a field changes. (And even if it weren't, this is still a good use case in my opinion, as it's a simpler scenario which others will want to handle.) Changes from the server only propagate some internal fields. It keeps the local data fully consistent with the server, which is only important if the user does some advanced stuff after editing specific fields. It's fine for this to update whenever it comes back from the server, regardless of what the user is doing.
@jlongster thanks for your explanation. Also, many thanks for this issue and your detailed explanation of the challenges involved with this problem. I've been 'following' React for a while but never did anything with it until a few weeks ago, and I started with hooks straight away. I find this problem intriguing because pretty much every web app I write has at least a few components with a similar level of complexity in the 'dancing between state and effects'.

I think I've managed an implementation that satisfies the requirements you've documented. Whether one finds it palatable depends on whether one likes RxJS or not ;-)

Version 1:

Version 2, with a sprinkling of @pelotom's use-methods:

I'm still very much interested to see an elegant implementation using 'standard' React hooks.
@urugator for the problem of reading the state after an action has been dispatched, you could define a useNext hook that returns a promise resolving with the next value:

const nextState = useNext(state);
useEffect(() => {
fetchStuff(state);
}, []);
async function fetchStuff(state) {
dispatch({ type: 'START_LOADING' });
let data = await xhr.post('/api/data');
dispatch({ type: 'RECEIVE_DATA', data });
state = await nextState();
if (!state.needsMoreData) return;
data = await xhr.post('/api/more-data');
dispatch({ type: 'RECEIVE_MORE_DATA', data });
}
// ...
function useNext(value) {
const valueRef = useRef(value);
const resolvesRef = useRef([]);
useEffect(() => {
if (valueRef.current !== value) {
for (const resolve of resolvesRef.current) {
resolve(value);
}
resolvesRef.current = [];
}
valueRef.current = value;
}, [value]);
return () =>
new Promise(resolve => {
resolvesRef.current.push(resolve);
});
}
Actually you always have to force component update and await state before reading it (even the initial state), otherwise there is no guarantee that it's actually up to date. Consider:
It's obvious here, but once you start decoupling these calls from components and one another, it becomes hard to reason about. I think it's basically a hook variant of:

function nextState() {
return new Promise(resolve => this.setState(null, () => resolve(this.state)));
}

I tried this approach (before hooks) and to me it wasn't generally usable. My conclusion basically is that you really want to keep everything synchronous until the last possible moment.
I have no idea how this fails with Suspense, but how about something like this: https://codesandbox.io/embed/ry40mp49rq, which builds off of https://twitter.com/DavidKPiano/status/1121700680291553281 to merge effects.
If you use the hook's setState method like this to access the current state, will it be OK or problematic? Thanks!
@jlongster a bit late but that was an interesting read! thanks for sharing! I'd like to share some personal thoughts.

Add item use case

I think this code may have some issues:

let onSave = useCallback((fields, { add }) => {
// In my real app, applying the changes to the current item is a bit more complicated than this,
// so it's not an option to separate on an `onAdd` function that duplicates this logic
setNewItem({ ...latestNewItem.current, ...fields });
// This action also should add, mark the effect to be run
if(add) {
shouldAdd.current = true;
}
}, [])

If multiple onSave calls are batched together, state updates will run in a single phase, and the same happens for effects. I think the issue reflects the way batching undermines the assumption that every state update will get its corresponding effects scheduled right after it.
We've posted an RFC that is relevant to (at least some of) this discussion. Would appreciate your input: reactjs/rfcs#220
I started this as a gist but Dan mentioned that this would be a good discussion issue, so here goes. I've been writing with hooks and refactoring code into them for a while now. For 95% of code, they are great and very straight-forward once you get the basic idea. There are still a few more complex cases where I struggle with the right answer, though. This is an attempt to explain them.
The use case
This is a real-world use case from an app I'm building: interacting with a list of items. I've simplified the examples into codesandboxes to illustrate the basic idea.
Here's the first one: https://codesandbox.io/s/lx55q0v3qz. It renders a list of items, and if you click on any of them, an editable input will appear to edit it (it doesn't save yet). The colored box on the right will change whenever an item rerenders.
If you click around in the items, you can see that when changing the edited item, all items rerender. But the Row component is wrapped with React.memo! They all rerender because the onEdit callback is new each time the app renders, causing all items to rerender.

Maintaining callback identity
We want onEdit to be the same function for all future renders. In this case, it's easy because it doesn't depend on anything: we can simply wrap it in useCallback with an empty dependencies array, as sketched below.

Now, you can see clicking around only rerenders the necessary items (only those colors change): https://codesandbox.io/s/k33klz68yr
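A sketch of the memoized handler (assuming the editingId state used throughout these examples):

let onEdit = useCallback(id => {
  setEditingId(id); // depends on nothing from render scope
}, []);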
Implementing saving
We're missing a crucial feature: after editing an item, on blur it should save the value. In my app the way the "save" event gets triggered is different, but doing it on blur is fine here.
To do this, we create an onSave callback in the app and pass it down to each item, which calls it on blur with the new value. onSave takes a new item, updates the items array with it, and sets the items state.

Here it is running: https://codesandbox.io/s/yvl79qj5vj
You'll notice that all items are rerendering again when saving. The list rerenders twice when you click another item: first when you click down and the input loses focus, and then again to edit a different item. So all the colors change once, and then only the editing row's color changes again.
The reason all of them are rerendering is that onSave is now a new callback on every render. But we can't fix it with the same technique as onEdit, because it depends on items - we have to create a new callback which closes over items, otherwise you'd lose previous edits. This is the "callbacks are recreated too many times" problem with hooks.

One solution is to switch to useReducer. Here's that implementation: https://codesandbox.io/s/nrq5y77kj0
Note that I still wrap the reducer up into onEdit and onSave callbacks that are passed down to the row. I find passing callbacks to be clearer in most cases, and it works with any components in the ecosystem that already expect callbacks. We can simply use useCallback with no dependencies, though, since dispatch is always the same.

Note how even when saving an item, only the necessary rows rerender.
The difference between event handlers and dispatch
There's a problem though. This works with a simple demo, but in my real app onSave both optimistically updates local state and saves it off to the server. It does a side effect. It's something like this:
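Roughly (condensed from the fuller snippets in the comments):

let onSave = useCallback(async item => {
  // optimistic local update
  setItems(items.map(it => (it.id === item.id ? item : it)));
  // the side effect: persist to the server
  await post('/save-item', { item });
}, [items]);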
There's a big difference between the phases in which an event handler and a dispatch run. Event handlers run whenever they are triggered (naturally), but dispatching an action (running the reducer) happens while rendering. The reducer must be pure because of this.

Here's the reducer from https://codesandbox.io/s/nrq5y77kj0:
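Approximately like this (reconstructed from the behavior described; the sandbox's action names may differ):

function reducer(state, action) {
  switch (action.type) {
    case 'edit-item':
      return { ...state, editingId: action.id };
    case 'save-item':
      return {
        ...state,
        editingId: null,
        items: state.items.map(it => (it.id === action.item.id ? action.item : it))
      };
    default:
      return state;
  }
}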
How is save-item
also supposed to trigger a side effect? First, item's important to understand these 3 phases:Events are run in the first phase, which causes a render (when dispatches happen), and when everything is finally ready to be flushed to the DOM it does it in a "commit" phase, which is when all effects are run (more or less).
We need our side effect to run in the commit phase.
Option 1
One option is to use a ref to "mark" the saving effect to be run. Here's the code: https://codesandbox.io/s/m5xrrm4ym8
Basically you create a flag as a ref. Luckily, we've already wrapped the save dispatch into an event handler: inside onSave we mark this flag as true (we can't do it inside of the reducer, because it must be pure). Finally, we define an effect that always runs after rendering, checks the flag, and resets it. All three pieces, sketched:
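(My naming; the sandbox may differ.)

let shouldSave = useRef(false);

let onSave = useCallback(item => {
  dispatch({ type: 'save-item', item });
  shouldSave.current = true; // mark the effect to run after commit
}, []);

useEffect(() => {
  if (shouldSave.current) {
    shouldSave.current = false;
    // perform the save side effect here... but what exactly do we save?
  }
});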
I thought this option was going to work, but just ran into this issue. We don't know what to save anymore. We certainly don't want to send the entire items array to the server! I suppose we could store the item in the ref, but what happens if multiple events are fired before the effect runs? I suppose you could store an array, but... do we really need that?
Option 2
Note: I just noticed this option is documented in "How to read an often-changing value from useCallback?", but I disagree with the tone used. I think this is a fine pattern, and better in many cases than dispatch, even if it's not quite as robust - especially since dispatch is not as powerful as callbacks (see the end of this section).

Keeping around all of the data we need to do the effect might work in some cases, but it feels a little clunky. If we could "queue up" an effect from the reducer, that would work, but we can't do that. Instead, another option is to embrace callbacks.
Going back to the version which used a naive onSave which forced all items to rerender (https://codesandbox.io/s/yvl79qj5vj), the core problem is that onSave depends on items: we have to recreate onSave because it closes over items. But what if it didn't close over it? Instead, let's create a ref, plus an effect which keeps it up-to-date with items. Now, the onSave callback can reference the ref to always get the up-to-date items, which means we can memoize it with useCallback (see the sketch below).

We are intentionally opting to always reference the latest items. The biggest change with hooks, in my opinion, is that they are safe by default: an async function will always reference the exact same state that existed at the time it was called. Classes operate the other way: you access state from this.state, which can be mutated between async work. Sometimes you want that, though, so you can maintain callback identity.

Here is the running sandbox for it: https://codesandbox.io/s/0129jop840. Notice how you can edit items and only the necessary rows rerender, even though it updates items. Now, we can do anything we want in our callback, like posting to a server.

Basically, if all you need is the latest data since the last commit, callbacks can be memoized just as well as reducers. The drawback is that you need to put each piece of data you need in a ref. If you have lots of pieces of data and only a few simple effects, reducers would be better, but in my case (and I suspect in many others) it's easier to use callbacks with refs.
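A sketch of the whole pattern (condensed; the sandbox's actual code may differ slightly):

let latestItems = useRef(items);

// after every commit, sync the ref with the freshly-committed items
useEffect(() => {
  latestItems.current = items;
});

let onSave = useCallback(item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
  // side effects are fine here, e.g. post('/save-item', { item })
}, []);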
It's nice too because in my real app the save process is more complicated. It needs to get changes back from the server and apply them locally as well, so it looks more like this:
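Something like this (the same shape as the snippet quoted in the comments above):

let onSave = useCallback(async item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
  let changes = await post('/save-item', { item });
  // apply whatever the server sent back (e.g. real ids) to the latest items
  setItems(applyChanges(latestItems.current, changes));
}, []);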
Maintainability-wise, it's really nice to see this whole flow in one place. Breaking this up to try to manually queue up effects and do a dance with useReducer feels much more convoluted.

Option 3
I suppose another option would be to try to "mark" the effect to be run in state itself. That way you could do it in useReducer as well, and it would look something like the sketch below: an effect would check the itemsToSave state and save those items off. The problem is resetting that state - the effect would have to change state, causing a useless rerender, and it's not deterministic to make sure that the effect does not run multiple times before itemsToSave gets reset.

In my experience, mixing effects into state and causing extra renders makes things a lot more difficult to maintain and debug.
Is there a crucial difference between 1 and 2? Yes, but I'd argue it's not a big deal if you can accept it. Remember these three phases:
The big difference is option 1 is doing the side effect in the commit phase, and option 2 is doing it in the event handler phase. Why does this matter?
If, for some reason, an item called onSave multiple times before the next commit phase happened, option 1 is more robust. A reducer will "queue up" the actions and run them in order, updating state in between them. So if you did something like the snippet below - which runs the callback twice immediately - the reducer would process the first save and update the items, then process the second save passing in the already-updated state.
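e.g. (with two hypothetical items):

onSave(firstItem);
onSave(secondItem); // both fire before the next commit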
However, with option 2, when processing the second save the commit phase hasn't run yet, so the latestItems ref hasn't been updated: the first save will be lost.

However, the ergonomics of option 2 are much better for many use cases, and I think it's fine to weigh these benefits if you'll never need the ability to handle such quick successive updates. Although concurrent mode might introduce some interesting arguments against that.
Another small use case for triggering effects
In case this wasn't already long enough, there's a similar use case I'll describe quickly. You can also add new items to the list by editing data in an empty row, and the state of this "new item" is tracked separately. "Saving" this item doesn't touch the backend, but simply updates the local state, and a separate explicit "add" action is needed to add it to the list.
The hard part is that there is a keybinding for adding the item to the list while editing - something like alt+enter. The problem is I want to coordinate with the state change: first I want to save the existing field and then add to the list. The saving process is complicated, so it needs to run through that first (I can't just duplicate it all in onAdd).

This isn't a problem specific to hooks, it's just about coordinating with state changes. When I was working with reducers, I had thought that something like this would be neat. Basically, when the new item detects that you want to save + add, it first fires an action like
{ type: 'save-item', fields: { name: 'Foo' }, shouldAdd: true }
where shouldAdd is a flag that is checked in the commit phase and saves the item off to the server. This isn't possible though.

Another option would be for the item to call onAdd instead of onSave when saving + adding, and you could manually call the reducer yourself to process the changes, as sketched below. This is kind of a neat trick: you are manually running the reducer to get the updated state, and React will run the reducer again whenever it wants.
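A hypothetical sketch of that trick (the reducer, stateRef, and action shapes here are all my inventions):

function onAddItem(fields) {
  // run the reducer by hand to see the state as it will be after the save
  let nextState = reducer(stateRef.current, { type: 'save-item', fields });
  dispatch({ type: 'save-item', fields }); // React runs the reducer again itself
  dispatch({ type: 'add-item', item: nextState.newItem });
}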
Since I ended up liking callbacks for my original problems, I went with a similar approach where I have a ref flag that I just set in onSave:
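Along the lines of the snippet shared in the comments above (latestNewItem is a ref kept in sync the same way as latestItems):

let shouldAdd = useRef(false);

let onSave = useCallback((fields, { add } = {}) => {
  setNewItem({ ...latestNewItem.current, ...fields });
  if (add) {
    shouldAdd.current = true; // checked (and reset) by a commit-phase effect
  }
}, []);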
Conclusions
Sorry for the length of this. I figure I'd be over-detailed rather than under-detailed, and I've been brewing these thoughts since hooks came out. I'll try to conclude my thoughts here:
Effects are very nice. It feels like we have easy access to the "commit" phase of React, whereas previously it was all in componentDidUpdate and not composable at all. Now it's super easy to attach code to the commit phase, which makes coordinating things with state easier.

Reducers have interesting properties, and I can see how they are fully robust in a concurrent world, but for many cases they are too limited. The ergonomics of implementing many effect-ful workflows with them requires an awkward dance, kind of like when you try to track effect states in local state and split up workflows. Keeping a linear workflow in a callback is not only nice, but necessary in many cases for maintainability.
Callbacks can be made memoizable without much work. In many cases I think it's easier to use the ref trick than reducers, but the question is: just how dangerous is it? Right now it's not that dangerous, but maybe concurrent mode really is going to break it.
If that's the case, we should figure out a better way to weave together effects and state changes.
I hope all of this made sense. Let me know if something is unclear and I'll try to fix it.