
[BUG] Unraid container crashes after running for a short period, won't start back up. #647

Closed
TheAlchemist606 opened this issue May 11, 2022 · 9 comments
Assignees
Labels
🐛 Bug [ISSUE] Ticket describing something that isn't working 🕸️ Inactive

Comments

@TheAlchemist606

TheAlchemist606 commented May 11, 2022

Environment

Self-Hosted (Docker)

Version

2.0.8

Describe the problem

After the Docker container has been running for a few minutes, it crashes. On relaunch, the only content in the log is:

  • Building for production...
    WARN A new version of sass-loader is available. Please upgrade for best experience.
    error Command failed with signal "SIGSEGV".
    info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
    ERROR: "build-watch" exited with 1.
    error Command failed with exit code 1.
    info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

Relaunching doesn't work immediately; I have to wait a few minutes before it will launch without crashing. Once it does launch without crashing, this is the log that follows: https://pastebin.com/G3HaVNPX
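As a stopgap while this gets debugged, capping the container's memory keeps a leak from taking the whole host down, and a restart policy brings it back up after a crash. A minimal docker-compose sketch, assuming the standard lissy93/dashy image (the service name and the 1 GB limit are illustrative, not a maintainer recommendation):

```yaml
services:
  dashy:
    image: lissy93/dashy:latest
    restart: unless-stopped   # come back up automatically after a SIGSEGV crash
    mem_limit: 1g             # hard cap (Compose v2 file format); the container
                              # gets OOM-killed instead of exhausting the host
```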

Additional info

No response

@TheAlchemist606 TheAlchemist606 added the 🐛 Bug [ISSUE] Ticket describing something that isn't working label May 11, 2022
@Lissy93 Lissy93 mentioned this issue May 11, 2022
@Lissy93
Owner

Lissy93 commented May 12, 2022

Thanks for the ticket, and sorry about that.
There was another similar issue a few days ago (#637), which was a memory leak I couldn't recreate. After seeing yours, it does seem to be a memory leak, and it appears to only happen on Unraid servers. I believe it may be caused by the Alpine Node base image, but I have yet to be able to recreate it. Will keep you posted.

@TheAlchemist606
Author

TheAlchemist606 commented May 13, 2022

I don't know if this helps or not, but I've noticed that when the container first starts, memory usage jumps up to 1 GB, then eventually mellows out to about 250 MB. Whenever I make any change to the config file, it jumps to over 1 GB again.

At the time of testing this, I am running Unraid version 6.10.0-rc5, by the way. That wasn't noted in my original comment. I'm updating to rc8 to see if that makes any difference.

@liss-bot liss-bot added the 👤 Awaiting Maintainer Response [ISSUE] Response from repo author is pending label May 13, 2022
@jeremytodd1

For reference, the version of Unraid I'm using is 6.9.2.

I wish my container would mellow out at around 250 MB lol. Mine eventually consumes as much memory as it possibly can, and then the container shuts down.

@liss-bot liss-bot removed the 👤 Awaiting Maintainer Response [ISSUE] Response from repo author is pending label May 14, 2022
@jeremytodd1

jeremytodd1 commented May 14, 2022

So I may have some useful information.

My containers' tags are pretty much always just set to "latest", so they pick up whatever the newest stable release is.

I set the container's tag back a couple of versions to see when the issue started happening. It looks like 2.0.6 is the last version that works correctly, with no memory leak. I've had the container running for about 20 minutes now and it's still sitting right at around 330 MB.

As soon as I set the tag to "2.0.7" the memory leak starts happening again.
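The rollback described above is a one-line change if you run Dashy via docker-compose rather than the Unraid template. A sketch, assuming the standard lissy93/dashy image name:

```yaml
services:
  dashy:
    # 2.0.6 is the last tag reported leak-free in this thread;
    # "latest" currently resolves to 2.0.7+, where the leak appears
    image: lissy93/dashy:2.0.6
```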

I'm no developer, but I have an idea of what it could be. This could be way off, as I have no idea what I'm talking about, but I'm just throwing out a theory lol.

For Dashy, I use the little green/red dot tool to check whether a service is online or not. I've never been able to get Home Assistant, Radarr, and Sonarr set up with it, so I disabled the online checker for those services. For reference, here is what my dashboard looks like in 2.0.6:

https://i.imgur.com/4n0XwWL.png
(I've red boxed the relevant parts).

Now look what it looks like in 2.0.7:
https://i.imgur.com/LutpeyE.png

No changes were made to the configuration files, and yet it looks like the online checker is running for those three services. Could it be that the online checker is constantly running and failing, causing the memory leak?

Again, just theorizing, but yeah. Thought I'd provide this info.

@liss-bot liss-bot added 👤 Awaiting Maintainer Response [ISSUE] Response from repo author is pending and removed 👤 Awaiting Maintainer Response [ISSUE] Response from repo author is pending labels May 14, 2022
@Lissy93
Owner

Lissy93 commented May 15, 2022

That's really helpful, thank you. It's weird, because there were no backend changes in that release, but there was a change to the upstream base image (node:16.13.2-alpine). If that's the cause, then the fix should just be to pin the Docker base image to a more stable version.

In the meantime, are you okay sticking with 2.0.6? I'll let you know here once I've got a fix ready.

That status check bug was also raised in #651, and I've got a fix ready for the next update :)
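For context, the base-image pinning described above amounts to replacing a floating `FROM` tag with an exact one. A sketch, not Dashy's actual Dockerfile (the version shown is illustrative; the real fix would pin whichever Node Alpine release was last known to be stable):

```dockerfile
# Floating tags like "node:16-alpine" silently pick up upstream rebuilds,
# which is how a leaky base image can appear without any code change.
# Pinning an exact version makes the build reproducible:
FROM node:16.13.1-alpine
# For full reproducibility, pin the digest too (placeholder shown;
# look the real value up with `docker images --digests node`):
# FROM node:16.13.1-alpine@sha256:<digest>
```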

@liss-bot liss-bot added the ⚰️ Stale [ISSUE] [PR] No activity for over 1 month label Jun 15, 2022
@CrazyWolf13
Collaborator

@jeremytodd1 @TheAlchemist606

Can one of you confirm whether this is still an issue?

Thanks!

@CrazyWolf13 CrazyWolf13 added ⚰️ Stale [ISSUE] [PR] No activity for over 1 month 🛑 No Response [ISSUE] Response was requested, but has not been provided and removed 🛑 No Response [ISSUE] Response was requested, but has not been provided labels May 10, 2024
@liss-bot
Collaborator

This issue was automatically closed because it has been stalled for over 1 year with no activity.

@github-project-automation github-project-automation bot moved this from Awaiting Triage to Done in Dashy V3 May 16, 2024
@liss-bot liss-bot removed the ⚰️ Stale [ISSUE] [PR] No activity for over 1 month label May 16, 2024