
Huge memory leak #2034

Closed
cyperdark opened this issue May 2, 2023 · 30 comments
Labels
❗ p4-important Priority 4: bugs that violate documented behavior, or significantly impact perf upstream issue v8 workaround

Comments

@cyperdark

Environment

  • Operating System: Windows_NT
  • Node Version: v16.20.0
  • Nuxt Version: 3.4.3
  • Nitro Version: 2.3.3
  • Package Manager: yarn@1.22.19
  • Builder: vite
  • User Config: telemetry, ssr, modules, i18n
  • Runtime Modules: @nuxtjs/i18n@8.0.0-beta.11
  • Build Modules: -

Reproduction

Source code: https://github.com/cyperdark/i18n

Describe the bug

Memory leak video:
https://i.osuck.link/1683034408_ScxZCLgnzT.mp4

Additional context

No response

Logs

No response

@cyperdark
Author

This is what the memory graph looks like in production (when the memory drops, it means the app crashed)
image

@kazupon kazupon removed the pending triage label May 7, 2023 — with Volta.net
Collaborator

kazupon commented May 7, 2023

Thank you for your report!

I think v8.0.0-beta.11 is buggy. I am fixing it now, and that bug may have caused this.

We would like to check whether the memory leak also occurs in the previous version, v8.0.0-beta.10, to confirm whether this version is affected.
If you have the time, we would appreciate your help. 🙏

@kazupon kazupon added the ❗ p4-important Priority 4: bugs that violate documented behavior, or significantly impact perf label May 7, 2023 — with Volta.net
@cyperdark
Author

cyperdark commented May 9, 2023

I tried the versions below, and down to version 8.0.0-beta.7 the situation is the same,
BUT on that version I got errors.

That's weird; it needs more testing.
image

@cyperdark
Author

I may have found what causes this leak:

  1. If I use `<Head>`/`<Title>` with $t, memory went up to 1 GB (1000 requests), then back to normal, ~90 MB
    image

  2. Same situation with useLang: from 60 MB up to 1 GB, then down to 85–90 MB
    image
    image

  3. But if I use a computed title: from 60 MB up to the memory cap
    image
    image
    image

  4. Same as in 1 or 2
    image
    image

  5. Same as in 3
    image
    image

Tested only on my local machine; I'll test on production tomorrow (I need to fix a couple of issues first).
I also noticed that in dev mode it loads only the locale in use, but in prod mode it loads all locales.

Dev:
image

Production:
image

Options:
image

@hi-reeve

hi-reeve commented May 9, 2023

I think I got the same error. It works fine on my local machine, but I get a "JavaScript heap out of memory" error when building the project. I'm using Docker; the build works fine with 5 GB of memory, but with less than that it hits the "JavaScript heap out of memory" error.

@cyperdark
Author

cyperdark commented May 10, 2023

It looks like changing `<Head>` to useHead / useSeoMeta solves the leak; I think it's because of the computed property that was being used in it.

So far memory looks normal (below 150 MB); we'll see how it goes over the next 1–2 days.
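A rough sketch of the two patterns (assuming a standard Nuxt 3 page; the `title` key and the component itself are hypothetical). The template-based variant that leaked under load:

```vue
<!-- Before: <Head>/<Title> components in the template -->
<script setup lang="ts">
const { t } = useI18n();
const title = computed(() => t('title'));
</script>

<template>
  <Head>
    <Title>{{ title }}</Title>
  </Head>
</template>
```

And the composable-based variant that kept memory stable:

```vue
<!-- After: useHead in script setup -->
<script setup lang="ts">
const { t } = useI18n();
useHead({ title: computed(() => t('title')) });
</script>
```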

Collaborator

kazupon commented May 11, 2023

I greatly appreciate your work ♥️

@cyperdark
Author

cyperdark commented May 11, 2023

It looks like using t() inside a computed property together with `<Head>` was causing the problems; after moving to useHead/useSeoMeta the graph looks like this.
The spike at the end is a server thing I still need to fix; it has nothing to do with the module.
image

@hi-reeve

> look like using t() inside computed property + using it with Head was causing problems, and after moving to useHead/useSeoMeta graph look like this. the spike at the end is server thing, need to fix it. Have nothing to do with the module

have you found any workaround?

@cyperdark
Author

> Have you found any workaround?

As I pointed out, I switched from the `<Head>` tag in the template to useHead in the script.

@kazupon kazupon added workaround and removed ❗ p4-important Priority 4: bugs that violate documented behavior, or significantly impact perf labels Jul 21, 2023 — with Volta.net
@danielroe
Contributor

I can confirm this is still present when using i18n in a computed property after an awaited promise. Here is a minimal reproduction: https://github.com/danielroe/i18n-memory-leak.

@kazupon kazupon added the ❗ p4-important Priority 4: bugs that violate documented behavior, or significantly impact perf label Aug 1, 2023 — with Volta.net
Collaborator

kazupon commented Aug 13, 2023

I used node --inspect and the Chrome DevTools profiler to investigate the memory leaks.
image.png

It seems that the memory leak occurs with a top-level await in script setup, when a computed is used to reactively wrap t.

Incidentally, the memory leak does not seem to occur when there is no top-level await.
The await is transformed into withAsyncContext by Vite, which seems to affect the memory leak, but I'm not sure whether that is the cause.

It could be a problem with the reactivity implementation of t.
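Based on the analysis above, the pattern that appeared to leak can be sketched like this (a hypothetical component; the `general` key matches the later example):

```vue
<script setup lang="ts">
// Suspected leaky ordering: a top-level await runs first (compiled by
// Vite into withAsyncContext), then a computed reactively wraps t().
const { $i18n } = useNuxtApp();

await new Promise(resolve => setTimeout(resolve, 10));

const timeLabel = computed(() => $i18n.t('general'));
</script>
```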

Collaborator

kazupon commented Aug 14, 2023

To add some information:

As in the code below, I've found that if await is used after the computed, there is no memory leak.

<script setup lang="ts">
const { $i18n } = useNuxtApp()

const timeLabel = computed(() => $i18n.t('general'))

await new Promise(resolve => setTimeout(resolve, 10))
</script>

@cyperdark
Author

> I used node -inspect and chrome devtools profile to investigate memory leaks.

I don't know how to use or read --inspect, so can I ask you to test this use case?

If it still has a memory leak in it, and it's not possible to prevent it, then it's worth mentioning somewhere so people will be aware of it:

<script setup lang="ts">
const { t } = useI18n();

const title = computed(() => {
    return `${t('title')}`;
});
</script>

<template>
  <Head>
    <Title>{{ title }}</Title>
  </Head>
</template>

Collaborator

kazupon commented Aug 24, 2023

> I dont know how to use this/read -inspect, so can i ask you to test this use case?:

You can try to reproduce it as follows:

# build nuxt app
npm run build

# start nuxt app with `node --inspect`
HOST=localhost node --trace-gc --inspect --max-old-space-size=600 ./node_modules/nuxt/bin/nuxt.mjs preview

To stress test it:

npx autocannon --duration 50 http://localhost:3000

@cyperdark
Author

<script setup lang="ts">
const { t } = useI18n();

const title = computed(() => {
    return `${t('title')}`;
});
</script>

Tested: no memory leak in this case (or maybe I tested it the wrong way, I don't know).

@hakan-akgul

Hi,

The problem still persists on @nuxtjs/i18n v8.0.0-rc.5.
As I remember, I saw a deep-copy function when I dove into a heap snapshot on Node.js v16.13.0.
Sadly, I have uninstalled the plugin and am waiting for the fix.

This was our memory graph:
image

Hope it helps

Collaborator

kazupon commented Oct 22, 2023

@hakan-akgul
Thank you for your report!
We need a minimal reproduction.
Could you give us one, please? 🙏

@hakan-akgul

Sorry for the late reply.
My colleague found the saved snapshots.

I need some time; I want to come back with some useful information if possible.

We got the snapshots from the test environment, but the huge problem is in production.
The leaks occurred very fast in production,
so it is hard to understand the root cause from those snapshots.

I don't know if it is useful or not, but maybe you can find a pattern:

  • We don't use computed like the examples above.
  • We don't use routes like foo.com/tr/bar.
  • We use the components that import i18n directly in layouts.
  • We use JSON files in folders like:
languages/en/en.json
languages/tr/tr.json

@sync42johnny

> We don't use routes like foo.com/tr/bar.

I've encountered a similar issue in our application. Interestingly, the app would spontaneously switch to Turkish (tr) by itself. Although we do have a route setup for tr, as you've mentioned, the switch occurs without any explicit trigger. It's quite perplexing, and it seems this is related to the issue raised by @hakan-akgul.

@memic84

memic84 commented Oct 27, 2023

> We don't use routes like foo.com/tr/bar.
>
> I've encountered a similar issue in our application. Interestingly, the app would spontaneously switch to Turkish (tr) by itself. Although we do have a route setup for tr, as you've mentioned, the switch occurs without any explicit trigger. It's quite perplexing, and it seems this is related to the issue raised by @hakan-akgul.

Do you also encounter memory leaks in the application?

@agracia-foticos

Same issue #2612

@BobbieGoede BobbieGoede mentioned this issue Dec 12, 2023
7 tasks
@BobbieGoede
Collaborator

The latest edge release contains a fix for what was likely the larger memory leak; please let me know if you can confirm this in your project! Install it as an alias: npm i -D @nuxtjs/i18n@npm:@nuxtjs/i18n-edge (you may need to delete node_modules and any lockfiles).

From my testing it seems there is still a smaller memory leak present; I'm still working on finding the cause and fixing that.

@dargmuesli
Collaborator

Also, if you're seeing some update- and head-related debugging information, @BobbieGoede, see my potential finding in unjs/unhead#281 (comment).

@s00d
Contributor

s00d commented Dec 13, 2023

> The latest edge release contains a fix for the (what was likely the larger) memory leak, please let me know if you can confirm this in your project! Installing it as alias: npm i -D @nuxtjs/i18n@npm:@nuxtjs/i18n-edge, you may need to delete node_modules and any lockfiles.
>
> From my testing it seems like there is still a smaller memory leak present, I'm still working on finding the cause and fixing that.

Thank you, this helped.

Before the optimization, it consumed 1 GB per 1000 requests. Now, after checking, it consistently stays at the level of 100–300 MB. I tested it with 10,000 requests.

After 5000 requests, the following error appeared:

[nuxt] [request error] [unhandled] [500] Maximum call stack size exceeded
  at useNuxtApp (./.output/server/chunks/app/server.mjs:497:20)  
  at ./.output/server/chunks/app/server.mjs:10255:33  
  at ./.output/server/chunks/app/server.mjs:10255:33  
  at ./.output/server/chunks/app/server.mjs:10258:29  
  at ./.output/server/chunks/app/server.mjs:10258:29  
  at ./.output/server/chunks/app/server.mjs:10258:29  
  at ./.output/server/chunks/app/server.mjs:10258:29  
  at ./.output/server/chunks/app/server.mjs:10258:29  
  at ./.output/server/chunks/app/server.mjs:10258:29  
  at ./.output/server/chunks/app/server.mjs:10258:29  

./.output/server/chunks/app/server.mjs:497:20

function useNuxtApp() {

./.output/server/chunks/app/server.mjs:10255:33

function extendBaseUrl(baseUrl, options) {
  return () => {
    var _a;
    const ctx = /* @__PURE__ */ useNuxtApp();

It seems there is still an issue related to the memory leak, and the extendBaseUrl function is only used by i18n. There might be another problem elsewhere.

@BobbieGoede
Collaborator

> After 5000 requests, the following error appeared:

@s00d
Could you share a reproduction and describe your testing method? And do you use a function to set baseUrl? Hitting the maximum call stack should be caused by excessive function calls, maybe there is a loop or recursion happening in your code.

@s00d
Contributor

s00d commented Dec 14, 2023

> After 5000 requests, the following error appeared:
>
> @s00d Could you share a reproduction and describe your testing method? And do you use a function to set baseUrl? Hitting the maximum call stack should be caused by excessive function calls, maybe there is a loop or recursion happening in your code.

Sure, of course.
Here's the repo:
https://github.com/s00d/max-call-err
It's very simple to check:

npm i
npm run build 
node .output/server/index.mjs 
ab -n 10000 -c 100 http://localhost:3000/

ab is the ApacheBench utility; it allows you to send a bunch of requests in parallel.
You can install it like this:

// macOS
brew install httpd

// debian
sudo apt-get install apache2-utils

// CentOS
sudo yum install httpd-tools

On Windows, it should in theory work through bash.

The project is completely clean; only i18n is installed, and the issue reproduces on previous versions as well.

I've added
Error.stackTraceLimit = Infinity;
to .output/server/index.mjs
Here's the full error stack:

[nuxt] [request error] [unhandled] [500] Maximum call stack size exceeded
at ./.output/server/index.mjs:12899:10
  at ./.output/server/index.mjs:12904:29
  at ./.output/server/index.mjs:12904:29
  at ./.output/server/index.mjs:12904:29
// ....
  at ./.output/server/index.mjs:12904:29  
  at ./.output/server/index.mjs:12904:29  
  at ./.output/server/index.mjs:12904:29  
  at resolveBaseUrl (./.output/server/index.mjs:11475:12)  
  at extendComposer (./.output/server/index.mjs:11627:22)  
  at ./.output/server/index.mjs:11585:7  
  at EffectScope.run (./.output/server/node_modules/@vue/reactivity/dist/reactivity.cjs.js:42:16)  
  at i18n.install (./.output/server/index.mjs:11584:11)  
  at Object.use (./.output/server/node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:3778:18)  
  at setup (./.output/server/index.mjs:13211:9)  
  at process.processTicksAndRejections (node:internal/process/task_queues:95:5)  
  at async Object.callAsync (./.output/server/index.mjs:5133:16)  
  at async applyPlugin (./.output/server/index.mjs:6747:35)  
  at async applyPlugins (./.output/server/index.mjs:6767:7)  
  at async createNuxtAppServer (./.output/server/index.mjs:13627:7)  
  at async Object.renderToString (./.output/server/node_modules/vue-bundle-renderer/dist/runtime.mjs:173:19)  
  at async ./.output/server/index.mjs:6118:21  
  at async ./.output/server/index.mjs:5172:22  
  at async Object.handler (./.output/server/index.mjs:2293:19)  
  at async Server.toNodeHandle (./.output/server/index.mjs:2482:7)

The memory leak is completely gone; after the requests finish, memory consumption returns to normal. However, the "Maximum call stack size exceeded" error is definitely related to i18n: if I disable the module, the issue disappears.

The problem lies in the extendBaseUrl wrapper around the baseUrl function. The baseUrl function ends up referencing itself, and with each subsequent request the nesting of calls increases. After around 6500 requests, everything crashes.

I suspect that it all starts here:

nuxtI18nOptions.baseUrl = extendBaseUrl(nuxtI18nOptions.baseUrl, {

The option ends up wrapping itself, so the call depth grows without bound.

Maybe this fix could help:

#2620

But it's still better to double-check everything
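The self-wrapping can be sketched in isolation (a simplified model, not the module's actual code; the names only mirror the stack trace above):

```typescript
// Simplified model of the bug: options.baseUrl is replaced by a wrapper
// around its previous value on every request, so resolving it after N
// requests walks an N-deep chain of closures (and eventually overflows
// the call stack).
type BaseUrl = string | (() => string);

let depth = 0; // counts nested wrapper invocations during one resolution

function extendBaseUrl(baseUrl: BaseUrl): () => string {
  // Each call closes over the previous value, adding one layer of nesting.
  return () => {
    depth++;
    return typeof baseUrl === 'function' ? baseUrl() : baseUrl;
  };
}

const options: { baseUrl: BaseUrl } = { baseUrl: 'https://example.com' };

// What effectively happened once per request:
for (let i = 0; i < 1000; i++) {
  options.baseUrl = extendBaseUrl(options.baseUrl);
}

// Resolving the URL once now recurses through 1000 closures.
const url = (options.baseUrl as () => string)();
console.log(url, depth); // 'https://example.com' 1000
```

At ~6500 layers (matching the crash point reported above) the nested calls exceed V8's default stack size, which is why the error appears only after enough requests have been served.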

@BobbieGoede
Collaborator

Thanks for providing the reproduction and testing method. I have been able to replicate this locally, and I think you're right about it being caused by extendBaseUrl; I guess it is wrapping itself in a new function on every request 🤯.

Based on the issue @s00d found, I'll keep this issue open until it has been fixed. While the memory usage is more stable in rc.10, I can't recommend using it while this issue is present!

I will be checking out the provided fix; expect another release soon.

@dargmuesli
Collaborator

Server be chillin' again. Great! 😎

image

("Festplatte" being hard drive)

@BobbieGoede
Collaborator

Closing, as the memory leaks in the provided reproductions have been fixed; we will track the other potential leak in #2612. If anyone is still experiencing memory leaks, please let us know there (and provide a reproduction if possible 🙏).
