Best practice when a mutation deletes an object #5
Based on some of the comments at the end of the original issue, it seems like a good approach is to flag deleted records and then omit them in your UI. Another clever approach from @maierson involved marking queries for refetch rather than refetching them immediately. Is either of these approaches something that would be considered for inclusion in Apollo itself?
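A minimal sketch of the "flag and omit" idea, assuming a hypothetical `deleted` field on the schema and today's `@apollo/client` hooks (none of these names come from the comment above):

```ts
import { gql, useQuery } from '@apollo/client';

// Hypothetical query; assumes the schema exposes a soft-delete flag.
const MATCHES = gql`
  query Matches {
    matches {
      id
      name
      deleted
    }
  }
`;

// Keep soft-deleted records in the cache, but hide them from the UI.
function useVisibleMatches() {
  const { data } = useQuery(MATCHES);
  return (data?.matches ?? []).filter((m: { deleted: boolean }) => !m.deleted);
}
```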
I am having trouble understanding why this problem has to be solved by the library's users reengineering their code to respect a deletion flag. Can someone explain what I'm missing here?
I have a ... Right now, the only way I can use the ... I also have a ... Any tips?
I must say, coming from Redux, that this is the most painful part so far. I guess this will be fixed in 3.0.
I've read the entire discussion in apollographql/apollo-client#899. The recommended approaches are not acceptable for a number of reasons:
- The recommended invalidation methods are kludges that don't fit into the layer they're required for. Invalidation is a low-level concern, but queries and mutations are UI concerns, yet the code has to be commingled.
- The cache invalidation problem points to a disconnect in the Apollo universe: Apollo's caching system is aware of object identity, but has no notion of object relationships.
The real problem is that Apollo Cache has no cache eviction. Is that still the case? The lack of it is effectively a memory leak that needs to be handled periodically by invalidating the entire cache and refetching.
That is also true. I don't think that's the "real problem", but it is a problem.
The reason I say it's the "real problem" is that if Apollo had cache eviction then this would be automated inside Apollo Cache. I'm thinking something along the lines of: if you remove a user from the cache, every reference to that user is cleaned up (or the queries holding it are marked for refetch) automatically. These are hard problems (having worked on an immutable caching solution myself) and require a significant amount of tracking metadata attached to the entities inside the cache, not to mention batching updates, etc., but it's doable. I feel it's a must if Apollo Cache wants to reach full maturity for large-scale (enterprise-grade) applications.

Apollo Cache already has some pretty gnarly performance issues (due to normalization) that we had to overcome by keeping our hierarchical entities de-normalized in Redux after fetching and reading them from there during reconciliation.
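A rough sketch of the kind of eviction API described above; this is entirely hypothetical and does not exist in Apollo Client 2.x:

```ts
// Hypothetical interface only, to illustrate the idea of automated cleanup.
interface EvictingCache {
  // Remove the normalized entity and every reference to it held by
  // other cached entities or query results.
  evict(id: string): void;
  // Alternatively, mark the queries that selected the entity for refetch
  // instead of scrubbing their references in place.
  invalidateQueriesReferencing(id: string): void;
}
```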
I am just encountering this issue and am wondering if the following solution will work for me: have the delete mutation return a response type (a `removeUserFromGroupResponse` in my case) and use it to update the cache. However, I'm relatively new to GraphQL and Apollo, so I'd appreciate feedback from anyone who has thought this through or tried something similar.

I can see that in the case where I delete a user that I've queried by id (or some other specific parameters) this might break. Also, the `removeUserFromGroupResponse` might need a `group` field, e.g. if I am deleting (or adding) from a group component. Has anyone tried a similar approach and can offer some guidance? How might this scale?
@mchong-teal As I understand it, this only works for deletions of objects that are referenced as child objects by other objects, not for top-level objects. For example, let's say groups have users, i.e. a `Group` has a `users` field; removing a user can then be expressed as an update to its parent group. But let's say your request is a mutation called `deleteGroup` that removes a top-level group: there is no parent object whose fields can be updated to reflect the deletion.

There may be a workaround. I believe you could define a "dummy" singleton object that has an ID and always has all groups as its children. Normally a query to find groups would be something like:

```graphql
query {
  groups { id, name }
}
```

Instead of this, you'd do something like:

```graphql
query {
  allGroups {
    id
    groups { id, name, ... }
  }
}
```

Now a `deleteGroup` mutation can return the updated `allGroups` object, and because it is a normalized parent with a stable ID, the cache picks up the new group list automatically. The downside is that every time you delete a group, you have to return the entire group list to the client. For larger datasets, this may not be practical. The dummy singleton object also feels pretty unnatural.
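A sketch of how that mutation might look on the client; the exact operation and field names are assumptions, not taken from the comment above:

```ts
import { gql } from '@apollo/client';

// Because allGroups is a singleton with a stable id, returning it in full from
// the mutation lets the normalized cache overwrite its `groups` field, so every
// query that selected allGroups sees the deletion without a manual update().
const DELETE_GROUP = gql`
  mutation DeleteGroup($id: ID!) {
    deleteGroup(id: $id) {
      allGroups {
        id
        groups {
          id
          name
        }
      }
    }
  }
`;
```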
@atombender, great, thanks for that. I can see now how this would be an issue for larger datasets, and the singleton object does seem a bit strange. I think for now I'm not going to bump up against scaling issues, but will keep following the developments on this thread. Thanks again.
Hey guys, just joining this discussion after recently diving into Apollo and GraphQL. Is there an update for this feature request elsewhere, or a better solution to handling delete mutations with cache updates, other than the three options @atombender mentioned above?
@hwillson is there a solution for this? I would expect you could just write `null` to the cache for the deleted object.
@lifeiscontent The Apollo Client 3 `InMemoryCache` gives you new tools for this, such as `cache.evict` and `cache.modify`.

This will all be better documented shortly.
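For example, a minimal sketch of evicting a deleted object with the released Apollo Client 3 API; the `Match` type and helper name here are made up, not from this thread:

```ts
import { InMemoryCache } from '@apollo/client';

// `cache` is the InMemoryCache instance, e.g. inside a mutation's update callback.
export function removeMatchFromCache(cache: InMemoryCache, matchId: string) {
  const id = cache.identify({ __typename: 'Match', id: matchId });
  if (id) {
    cache.evict({ id }); // drop the normalized Match entity
    cache.gc();          // collect anything no longer reachable from a root
  }
}
```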
@hwillson I spent time building out a real-world example (a Medium.com clone) using Apollo: https://github.com/lifeiscontent/realworld. There are still a couple of issues in the repo, specifically when favoriting and unfavoriting an article. In the case of unfavoriting (from the favorites list) it was pretty straightforward, but in cases where favorites are being added on a page that doesn't manage the list, I don't see a clear way of updating the cache. Do you have a solution for this kind of problem?
I'm playing with the Apollo Client 3 beta cache APIs, and after removing an object from the cache I get:

```
Invariant Violation: Dangling reference to missing User:3 object
```
The same happens with `cache.modify`:

```ts
const removedMatchId = mutationData.removeMatch.id;
const cacheId = cache.identify({
  __typename: 'Match',
  id: removedMatchId,
});
if (!cacheId) {
  throw new Error('Cannot generate cacheId');
}
// Beta API at the time: modify(dataId, modifier) with the DELETE sentinel.
cache.modify(cacheId, (_, {DELETE}) => DELETE);
```
Just to be clear, the following works very well and doesn't require refetching the query:

```ts
update(cache, {data: mutationData}) {
  if (mutationData) {
    const removedMatchId = mutationData.removeMatch;
    cache.modify('ROOT_QUERY', {
      matches: (matches: Reference[], helpers) => {
        const removedMatchRef = helpers.toReference({
          __typename: 'Match',
          id: removedMatchId,
        });
        return matches.filter(({__ref}) => __ref !== removedMatchRef.__ref);
      },
    });
  }
},
```

This is a big improvement compared to the old readQuery/writeQuery approach. The problem with this approach is that it won't refetch the other affected queries until you use them again, so if any of those is already in an active view the user will miss the update.
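For the active-view case, one fallback (not suggested in the comment above) is the mutation's `refetchQueries` option; the `Matches` operation name in this sketch is hypothetical:

```ts
import { gql, useMutation } from '@apollo/client';

const REMOVE_MATCH = gql`
  mutation RemoveMatch($id: ID!) {
    removeMatch(id: $id)
  }
`;

// Refetch the known-active list query after the delete, in addition to
// (or instead of) editing the cached list by hand in update().
function useRemoveMatch() {
  return useMutation(REMOVE_MATCH, {
    refetchQueries: ['Matches'], // operation name of the list query, assumed to exist
  });
}
```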
I had the same issue with Redux, and the way I combated the "developers forgetting" part was to evict the item itself. Then every reference to the deleted item, from any query, would just return undefined. This created memory issues though, since my cache/store always had references to items that no longer existed in the cache. I don't know if it's possible to follow a similar path here, but in order to combat what you said, the only way is for Apollo to provide a way to either clean up the stale references automatically or mark the affected queries for refetch.
I don't understand why a refetch would be needed. You just need to clean up the stale references. Am I missing something here?
The "delete-all-references" approach would probably work in 99% of the cases, but I would still make it optional for the following reason: let's say that you made a forum application and a user asks to be deleted, but you don't want to delete all of his posts. You clearly cannot delete all posts with the author field pointing to the deleted user. Then what should it point to? What if our backend decides that it should point to a default user? Or maybe it simply returns null? The client cannot know and thus has to refetch. On the contrary, if you're sure that deleting an item forces the deletion of all the related entities, then you can safely remove every associated reference. |
Hmm, I think that this scenario is a bit advanced and would warrant either some custom logic or even a page refresh. It's a bit complicated to handle through code without losing track of what to update. I understand what you mean though, and I do agree that what I proposed should be optional. Essentially, what I'm proposing is an opt-in way to automatically remove every reference to an evicted item.
I also think that we need an official API for removing or modifying items in the cache directly.
It's not documented, but you already have access to that in the `InMemoryCache` type declarations:

```ts
export declare class InMemoryCache extends ApolloCache<NormalizedCacheObject> {
  // ...
  modify(dataId: string, modifiers: Modifier<any> | Modifiers, optimistic?: boolean): boolean;
  // ...
}
```

I guess you can use that instead of the workarounds above.
@hwillson if we evict an object that an active query still references, what happens to that query?
AFAIK the query gets invalidated and refetched.
To everyone subscribed: this has probably gone unnoticed, but one of the best features of Apollo 3 just got removed: apollographql/apollo-client#6289
Does apollographql/apollo-client#6350 help you out?
and if the item is in a list, you still have to remove that reference from the list field yourself.
I can't seem to get a singleton object evicted when deleting or adding a new entry. The use case is as follows:

```graphql
{
  viewer {
    productsConnection(filters: ..., paginationOptions: ...) {
      totalCount
      edges { ... }
    }
  }
}
```

When I add or remove a product, I want to invalidate the stored `productsConnection` field so that `totalCount` and the edges get refetched. I've tried evicting it, and I've also tried modifying it, without success. Maybe this is because of the arguments that are always present for the field? Do I need to do something special to tell the cache that I want to access the field no matter what arguments it has been executed with?
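Assuming Apollo Client 3's released API, one thing that may help is a field-level evict with no `args`, which drops every stored argument variant of the field; a sketch, where the normalized `viewer` object is an assumption:

```ts
import { InMemoryCache } from '@apollo/client';

// Assumes the viewer object is normalized (i.e. cache.identify can resolve it).
function invalidateProductsConnection(
  cache: InMemoryCache,
  viewer: { __typename: string; id: string },
) {
  const viewerId = cache.identify(viewer);
  if (!viewerId) return; // viewer not normalized; would need a keyFields policy
  cache.evict({
    id: viewerId,
    fieldName: 'productsConnection',
    // No `args` given, so every stored variant of the field is evicted,
    // whatever filters/paginationOptions it was cached under.
  });
  cache.gc();
}
```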
I'm trying to use `evict({ id })`, but it's not clear to me what the id should be. Can anyone point me to the documentation on that?
Thanks @lorensr. I'm not sure how the comment you linked to relates to my question though. My issue is that I need to evict a specific object, and I don't know what its cache id should be. From what I understand, the comment you linked only explains how to evict fields of the ROOT_QUERY object.
@bkoltai I see, you could try a nested evict.
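For reference, the id that `evict` expects is the normalized cache id (usually `__typename:id`), which `cache.identify` can compute for you; a minimal sketch using a made-up `Product` type:

```ts
import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache();

// identify() builds the normalized id ("Product:123") from __typename plus the
// configured key fields, so the string format never has to be hard-coded.
const id = cache.identify({ __typename: 'Product', id: '123' });
if (id) {
  cache.evict({ id });
  cache.gc(); // remove anything left unreachable after the eviction
}
```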
I found an approach that is concise and suits my needs: creating a special type that the delete mutation returns.

schema.graphql

resolvers.ts

Using this "special type" allows me to react to the incoming deletion on the client:

client.graphql

client.ts

This is basically the same as doing the usual cache update, but it avoids having to repeat the update logic for every delete mutation.

Overall, this approach is efficient, straightforward, and helps maintain a consistent state in the cache after an item is deleted. It provides a clear separation of concerns between the mutation resolver and the cache update logic.
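A rough sketch of what this pattern might look like; the `DeletedEntity` shape and helper name are guesses at the idea, not the actual code from this comment:

```ts
import { InMemoryCache } from '@apollo/client';

// Hypothetical shape every delete mutation returns in this pattern:
//   type DeletedEntity { id: ID!  typename: String! }
interface DeletedEntity {
  id: string;
  typename: string;
}

// One shared update handler instead of bespoke cache surgery per mutation.
export function evictDeletedEntity(cache: InMemoryCache, deleted: DeletedEntity) {
  const cacheId = cache.identify({ __typename: deleted.typename, id: deleted.id });
  if (cacheId) {
    cache.evict({ id: cacheId });
    cache.gc();
  }
}
```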
Migrated from: apollographql/apollo-client#899