Cold Boot Performance Issues #1673
This is part of our goals with extracting adapters to https://github.com/nextauthjs/adapters. @ndom91 has an ongoing effort to split the adapters out of the core at https://github.com/nextauthjs/adapters/issues/37, and I think he would appreciate any help!
This is all great stuff, thanks @balazsorban44, and glad to see it's already in progress. I will check out https://github.com/nextauthjs/adapters/issues/37 and have a peek.
Yeah, we're hoping to get the adapters out of the primary package so users who are just using JWTs get the fastest version possible! Unfortunately there hasn't been a ton of progress lately due to a lack of time on our side, so if you're interested we'd be happy for any and all support :)
Some work has been laid out here just now: #1682
This is awesome, thanks for the heads up @balazsorban44, and your size reductions are looking awesome! Shall we close this issue and relate it to #1682? I can mention the outdated version there.
Before you folks close this - I just wanted to add that we're experiencing a very similar issue, but we're not running our app on Lambda/Vercel - we're in EKS and seeing the same behavior: the first request always takes "a while" (6-15 seconds observed). We also see occasional multi-second delays. It feels very similar to what @mod-flux describes in this issue, but rather than a Lambda function cache expiring, it seems to be present even when Lambda is not involved. We are not really using TypeORM in our application (other than next-auth using it internally). We could consider writing our own DB adapter using something like KnexJS, but what I don't understand is whether next-auth would still load TypeORM even if we use a custom DB adapter. For reference, we're using:
Hey, no: if you don't define an adapter at all, i.e. you're just using JWTs and not saving user info, then next-auth won't load an adapter at all. If you define your own adapter, it'll only load the one you define. We have some examples in the docs for writing your own adapter and are in the process of pulling them out into their own repo.
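For reference, a hand-rolled adapter of the kind described here is just an object of async CRUD methods that next-auth calls instead of its bundled TypeORM adapter. A minimal in-memory sketch, assuming the v3-era `getAdapter` shape (method names are illustrative and this is nowhere near the full interface — sessions, accounts, and verification requests are omitted):

```javascript
// Minimal sketch of a custom adapter: an object exposing async CRUD
// methods backed here by a plain Map instead of a real database client.
// Loosely follows the next-auth v3 adapter shape; illustrative only.
const users = new Map(); // stand-in for your real database client
let nextId = 1;

const MyAdapter = {
  async getAdapter() {
    return {
      async createUser(profile) {
        const user = { id: String(nextId++), name: profile.name, email: profile.email };
        users.set(user.id, user);
        return user;
      },
      async getUser(id) {
        return users.get(id) || null;
      },
      async getUserByEmail(email) {
        for (const user of users.values()) {
          if (user.email === email) return user;
        }
        return null;
      },
      // ...updateUser, deleteUser, linkAccount, createSession, etc. would go here
    };
  },
};
```

Because next-auth only loads the adapter you hand it, a lightweight object like this (backed by pg-promise, Knex, or anything else) would sidestep the TypeORM dependency discussed below.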
Interesting - We are indeed using JWTs - but we are also using two Providers:
By using these two providers, next-auth is interacting with our DB. I believe it's using TypeORM to do this (as we don't specify anything to the contrary in config). I think the crux of my question above is:
Yep - I think this will likely be the route we have to take, but before we invest the time in writing an adapter I just wanted to double-check that my own adapter would bypass TypeORM.
Currently, see line 1 and line 89 in 17b7898.
This will hopefully be fixed in #1682, but it will require that you install the adapter yourself. In case you don't rely on a database (or you write your own adapter with a much smaller footprint), the bundle size will drop drastically. At least that is the intent. Follow the PR for progress.
@balazsorban44 Yeah, it'll get installed and whatnot obviously, but is it really then also bundled and shipped to the user even if you don't use any adapter?
I checked with webpack-bundle-analyzer: yes, it was in the production build, bundled into the API handler. (See the PR; I have before and after screenshots.)
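For anyone wanting to reproduce this measurement, a typical setup uses the official `@next/bundle-analyzer` wrapper in `next.config.js` (a sketch; the empty config object stands in for whatever options your project already has):

```javascript
// next.config.js — wrap the existing Next.js config so that running
// `ANALYZE=true next build` opens webpack-bundle-analyzer reports,
// showing what lands in each API handler's server bundle.
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({});
```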
@balazsorban44 - Thanks for the clarification. This makes sense, and I think our approach will have to be to wait for #1682 and then write our own DB adapter. Would you have a reasonable, noncommittal guess as to when that PR might make it into a release?
I understand that typeorm is large, but it seems that the big 6-second initial delay is probably from connecting to the DB, not from loading the module within the serverless context?
@robert-moore I'm not connecting to any external DB or source in the context of my original issue and investigation. That 6 seconds is purely booting with what's on the Lambda, no external factors.
Thank you for all these observations! I've just set up connection pooling today (with pgbouncer) and found that cold starts still go way slower than expected, possibly due to the overwhelmingly large Lambda function size that next-auth might be responsible for.
Random observation: I noticed that if I check the request for a session cookie and validate the session against the DB myself (via pg-promise), cold-boot time goes down to 2.8 seconds for logged-in users (likely due to the size of the function). The pseudocode of my hack is:

```javascript
if (session in req.cookies) {
  session = await check_session_in_db();
} else {
  session = await getSession({ req });
}
```
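Fleshed out, the shortcut above might look like the following sketch. The cookie name is the next-auth v3 default and an assumption; `checkSessionInDb` and `getSession` are injected so the logic is self-contained, where in a real app they would be a pg-promise query and next-auth's `getSession` respectively:

```javascript
// Fast-path session resolution: if the session cookie is present, look it
// up in the database directly, skipping next-auth's heavier code path.
// Otherwise fall back to getSession() so unauthenticated requests behave
// exactly as before. Dependencies are injected for testability.
const SESSION_COOKIE = 'next-auth.session-token'; // v3 default name (assumption)

async function resolveSession(req, { checkSessionInDb, getSession }) {
  if (req.cookies && req.cookies[SESSION_COOKIE]) {
    // Fast path: one direct DB lookup, no adapter machinery loaded.
    return checkSessionInDb(req.cookies[SESSION_COOKIE]);
  }
  // Fallback: let next-auth resolve (or reject) the session as usual.
  return getSession({ req });
}
```

Note this trades next-auth's built-in validation for your own, so the DB check must enforce expiry itself.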
So it's a bit hard to track this down further, as the issue at hand is not directly related to code. In a best effort,
Describe the bug
I have observed extended response times for any `NextAuth` endpoint in the scenario of a cold boot of a serverless function. This is regardless of whatever method or encryption you're using (JWT/database, encrypted and non-encrypted). These performance issues are in the range of multiple seconds of response time. For context, 'cold boot' occurs in these scenarios:
Steps to reproduce
1. Create a `Next.js` application using the latest version.
2. Add the `next-auth` library using the standard implementation.
3. Deploy, then hit an auth endpoint (`<your deployment>/api/auth/session` for testing).
4. Observe that the initial request takes `~5 seconds` to respond (this is regardless of whether there is a JWT cookie to decode or a database to connect to), while subsequent 'hot' requests take `~100ms`.
Further to the above, I've noted that when 'hot', the `/session` endpoint uses `131MB` of memory, so to optimize serverless costs we've adjusted our `vercel.json` for `/session` to provision `192MB` of available memory as opposed to the default of `1GB`. In this scenario, the initial request takes upward of `~9 seconds`.
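The memory override described above lives in `vercel.json`; a sketch, where the function path is an assumption about the project layout (adjust it to match your `pages/api` structure):

```json
{
  "functions": {
    "pages/api/auth/[...nextauth].js": {
      "memory": 192
    }
  }
}
```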
Expected behavior
Initial requests on cold functions don't take over a second to respond, especially for those without an active session (e.g. no JWT cookie for stateless sessions).
Screenshots or error logs
I did some profiling as to what was taking the majority of the time in the aforementioned scenarios and got the following results from a Vercel function on the `/session` endpoint using `192MB` of memory:
(profiling screenshot)
Additional context
It's clear that most functions/libraries are fairly quick and performant, but the real outlier here is `typeorm`, which is part of `adapters`. This is included in the first line of the server `index.js`. I'm not proficient with this library, but at a glance, everything in `adapters` only appears to be used when someone provides a custom `adapter` in the options or `database` is being used to store sessions, neither of which is the default for `next-auth`; both need to be explicitly set, as stateless sessions are the default.
I tested my theory and removed the `import adapters from '../adapters'` entirely, deployed, and then attempted the `/session` endpoint again and found it worked. More crucially, it reduced the cold-start response time to a more manageable `~2 seconds`. Furthermore, it reduced the memory usage from `131MB` to `94MB`, which means we can reduce our memory allocation even further.
To this end, I propose that it doesn't make sense to include this significant overhead in the majority of cases, when users are using the `next-auth` default config of stateless sessions. May I suggest we look to dynamically import `./adapters` in `server/index.js` only in the situations where it's necessary (e.g. the user has configured `database` sessions).
I'm also looking at other areas to optimize this initial load, specifically `jwt` and `routes`, but clearly this is an improvement on what it was initially.
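The dynamic-import proposal above could be sketched roughly as follows. This is not the actual next-auth code: `resolveAdapter`, `loadAdapters`, and `adapters.Default` are illustrative names, and the loader is injected so the idea stands alone (in `server/index.js` it would default to `() => import('./adapters')`):

```javascript
// Sketch of the proposed fix: defer loading the heavy adapters module
// until the options actually call for a database, so the default
// stateless-JWT configuration never pays the TypeORM import cost.
async function resolveAdapter(options, loadAdapters) {
  if (options.adapter) {
    // User supplied their own adapter: nothing extra to load.
    return options.adapter;
  }
  if (!options.database) {
    // Stateless JWT sessions (the default): skip the import entirely.
    return null;
  }
  // Only now do we pay for the adapters bundle.
  const adapters = await loadAdapters();
  return adapters.Default(options.database);
}
```

With this shape, the cold-start cost of `adapters` is incurred only by deployments that opt into database sessions, which is exactly the behavior the issue asks for.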