queryCache module-wide variables leaking memory on the server side #754
Hi @fabiosantoscode. Does your testing app recreate containers on each request using …?
Hello! Yes it does. Since the app is wrapped in react-router and …
Hi, I'm facing the same behavior in our app. I could try changing how react-router-relay creates the routers if that makes more sense, but it doesn't seem to be a simple change after a quick look.
@rodrigopr Does your app also redefine all the containers in every HTTP request (by calling …)? We would consider adding some way to reset these caches, but I suspect that doing so in this case would mask a larger performance problem in your app.
I'm afraid react-router-relay and isomorphic-relay create an instance of …
@rodrigopr Hmm. Those packages may be somehow redefining containers, but there isn't anything immediately obvious in the sections you linked to. I would recommend adding a breakpoint in …
@josephsavona thanks for the quick feedback. I'll debug it further on that line, but if I understood correctly the …

I'll also investigate how we can avoid creating a QueryAggregator instance per server-side render, but that will probably require a big change in react-router-relay.
We ran into this problem after putting a server-rendering Relay app into production. Our app would crash every few hours. I forked Relay to disable these queryCache and componentCache maps, and it appears to fix our problem. Deployed at 10am, the memory appears to have stabilized:
Thanks for the feedback. Ideally we'd address the root cause - not repeatedly creating things that look like containers but aren't - but we're open to a PR that addresses this in Relay for now. This could be a global, public function …
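For illustration only, here is a minimal sketch of what such a global opt-out could look like: a module-level flag consulted before the cache is read or written. The names (`disableQueryCaching`, `getCachedQuery`) are hypothetical and not Relay's actual API.

```js
// Hypothetical sketch of a module-level opt-out flag consulted by the cache lookups.
let queryCachingEnabled = true;

function disableQueryCaching() {
  queryCachingEnabled = false;
}

function getCachedQuery(cache, key, buildQuery) {
  if (!queryCachingEnabled) {
    // Bypass the cache entirely, e.g. during server-side rendering.
    return buildQuery();
  }
  if (!cache.has(key)) {
    cache.set(key, buildQuery());
  }
  return cache.get(key);
}

module.exports = {disableQueryCaching, getCachedQuery};
```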
@tomconroy Are you using …? Ideally there should be a way for …
Indeed, moving this over to denvned/isomorphic-relay-router#40.
Yes! This seems like the ideal solution to this issue.
@josephsavona I've been playing with this and I'm going to comment over on denvned/isomorphic-relay-router#40, but can you confirm for me that …
Yup, it expects a class.
@josephsavona Thanks! Any chance of letting it accept either; i.e., check if Component is a function, and if not, use Component.constructor as the key? The reason I ask is that I don't think caching any sort of query config aggregator is going to work for SSR; otherwise you're bound to have concurrency problems. And any sort of query config aggregator is likely going to have to fake the …

I'd be a little concerned with an "accept an instance or a class" solution that other internal portions of Relay rely on the Relay.Renderer Container prop being a class and not an instance. On the other hand, routing and nested routes seem pretty important, and having lots of nested Relay.Renderers sounds bad to me (but maybe I'm wrong on that). So a query aggregator is pretty much a given.
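A minimal sketch of the key resolution being suggested here, purely illustrative (the helper name is hypothetical, not part of Relay):

```js
// Hypothetical helper: derive the cache key from either a container class or an instance.
function resolveCacheKey(Component) {
  // Classes (and plain functions) are used directly; instances fall back to their constructor.
  return typeof Component === 'function' ? Component : Component.constructor;
}
```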
I don't think this solves the problem, since the constructor would presumably be shared amongst all …

I believe that the issue is that a new QueryAggregator is created for each new request. Instead, it should be possible to create one aggregator per unique route combination. For example, if …

I'd really recommend submitting a PR to allow manually disabling the cache. If you are using …
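As an illustration of the "one aggregator per unique route combination" idea, a sketch assuming a caller-supplied aggregator factory and a key derived from route names; none of this is actual react-router-relay code.

```js
// Hypothetical memoization of aggregators keyed by route combination rather than by request.
const aggregatorsByRoutes = new Map();

function getAggregatorForRoutes(routes, createAggregator) {
  // Repeated requests to the same route combination reuse one aggregator,
  // so the number of aggregators is bounded by the number of route combinations.
  const key = routes.map(route => route.name || route.path || '').join(' > ');
  if (!aggregatorsByRoutes.has(key)) {
    aggregatorsByRoutes.set(key, createAggregator(routes));
  }
  return aggregatorsByRoutes.get(key);
}
```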
Sorry to drag this convo out here, but I think it's relevant to anyone dealing with query aggregation and how to properly implement it with Relay, and not just isomorphic-relay, react-router-relay, etc.
This brings up a concern with aggregation generally. From your perspective, is this problematic with how Relay does things internally?
My concern is whether anything async is happening between the time one request initiates use of the …

Or, as you suggest, just enable turning off the cache on the server. Definitely the simplest solution. I'm perfectly happy to work on it and submit a PR. However, based on your reply, I'd like to verify that there isn't a problem with how …
@josephsavona The problem was that react-router-relay, on which isomorphic-relay-router is based, is designed in such a way that there is supposed to be a unique instance of …

But I believe I finally managed to solve the issue by refactoring the container responsibility of …
I think @denvned's solution fixes the issue with routing and query aggregation, and in a quick test, the cache seems to be building as intended by Relay on both the client and the server.

That being said, I think we're still going to need a way to manually disable Relay's caches on the server, as @josephsavona suggested. This is completely independent of routing. Relay caches use the params in generating cache keys as well, which would quickly grow out of hand on the server (as @josephsavona stated, the cache was never intended for server use). Just imagine how many cache keys would be generated for a component if something like a text search string were a param, or even just the ids of nodes for large applications.
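To illustrate why param values in the cache key are a problem for a long-running server, here is a toy sketch; the key format is made up for illustration and is not Relay's real key scheme.

```js
// Toy illustration: cache keys that embed param values never repeat for a search box,
// so the map gains one entry per distinct query string and never shrinks.
const queryCache = new Map();

function cacheKey(containerName, params) {
  return containerName + ':' + JSON.stringify(params);
}

['relay', 'relay cache', 'relay cache leak'].forEach(search => {
  const key = cacheKey('SearchResults', {search});
  if (!queryCache.has(key)) {
    queryCache.set(key, {/* built query would be stored here */});
  }
});

console.log(queryCache.size); // 3 entries, one per distinct search term
```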
Submitted a very simple PR that addresses the above issue and allows for disabling the query caches with …
Alternatively to disabling the cache altogether, this could be solved by incorporating an LRU cache. That way we would not introduce a new public API, and the cache might still provide some performance benefit on the server.
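A minimal sketch of the LRU idea, using a Map's insertion order to evict the least recently used entry once a cap is reached. This is a simplified stand-in for a real LRU library, not a proposal for Relay's actual internals.

```js
// Simplified LRU cache: bounded size, evicts the least recently used key.
class LRUCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) {
      return undefined;
    }
    // Re-insert to mark the key as most recently used.
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.maxSize) {
      // Evict the oldest (least recently used) entry.
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, value);
  }
}
```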
I think an LRU would be a far more elegant solution for sure. It would also fix the issue on the client side; even though the likelihood of encountering the problem during normal usage is low, it still bothers me that it's possible.
Unless this server-side LRU cache can be turned off, I can personally say it is still an issue for me. Server-side users of Relay may not always want an in-process cache. In fact, it can be harmful to reactive architectures in situations where constructing a fresh version of the HTML is crucial.
@fabiosantoscode I don't quite understand what you mean. I think a combination of a container and a query configuration, which is used as the cache key, always yields the same set of Relay queries, and thus should be safe for caching (as long as the cache doesn't grow unboundedly).
Thanks for the clarification. I seem to have misread and thought an in-process cache for query responses was being considered. I will keep quiet now :)
@josephsavona Any thoughts on using an LRU cache for these two caches? Would you be open to a PR that implemented something like a …

If so, I'm willing to take a stab at it in the nearish future (I'd have to make time), or maybe @denvned would be willing (he's cleverer than I am anyway).
Summary: This allows for disabling the query cache in `createRelayQuery` and `buildRQL` by calling `Relay.disableQueryCache()`. Submitted in response to #754 (comment).

Because keys used to store queries include params, the query caches can grow to an unbounded size in certain circumstances (like text search input as a param). This allows the cache to be disabled manually in circumstances where that is problematic, such as server-side rendering. This PR does not disable the `fragmentCache` in `buildRQL`, which I believe only uses fragments and initial variables, and so has a bounded upper limit in terms of size.

Closes #1302
Reviewed By: josephsavona
Differential Revision: D3792224
Pulled By: wincent
fbshipit-source-id: 9efea99b5422cc2f9ba798f140911d90e1c6842e
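A hedged usage sketch, based only on the function name given in the commit summary above; the import path and call site are assumptions, and the guard handles versions that don't export the function.

```js
// Server entry point (sketch): opt out of query caching before rendering anything,
// assuming the Relay.disableQueryCache() call described in the commit summary above.
const Relay = require('react-relay');

if (typeof Relay.disableQueryCache === 'function') {
  // Prevent the module-wide caches from accumulating param-based keys across requests.
  Relay.disableQueryCache();
}
```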
After an update to 0.9.3, which contains the above commit, it seems like our memory leak is completely gone. Further investigation will occur on my side, but my guess is that this issue can be closed.
@fabiomcosta Thanks for confirming. I'm going to close this, as the use of this feature completely disables the caching that caused the bug. Feel free to file a new issue if there are any remaining problems!
I think the query cache should be an LRU, or otherwise customizable (perhaps with continuation-local context to make it request-scoped). Without some sort of caching, it's impossible to avoid doing extra work: the queries need to be built to fetch data, but then …
Or, would it be possible to move …
Using isomorphic-relay to get Relay working on the server, and using a load-testing tool to send repeated requests while tracking down a memory leak in an application I'm building, I observed that two maps called queryCache were growing without bound.
These maps are in getRelayQueries.js and in buildRQL.js, and they grow to thousands of keys over 600 requests to a page of mine that has no fewer than two containers.
Longer explanation in:
#683 (comment)
I'll do my best to provide an example if it's necessary to reproduce the issue.
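For anyone wanting to reproduce the observation, here is a minimal way to watch process heap growth during a load test. This is generic Node tooling running alongside the server, not code from the app described above.

```js
// Log heap usage periodically while a load-testing tool hammers the server.
// A steady climb across many requests to the same page suggests an unbounded cache.
setInterval(() => {
  const {heapUsed, heapTotal} = process.memoryUsage();
  console.log(
    `heapUsed=${(heapUsed / 1024 / 1024).toFixed(1)}MB ` +
    `heapTotal=${(heapTotal / 1024 / 1024).toFixed(1)}MB`
  );
}, 5000);
```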