
[Bug]: Podcasts with a huge number of episodes make the application unstable #1549

Closed
aunefyren opened this issue Feb 24, 2023 · 6 comments
Labels
bug Something isn't working

Comments

@aunefyren

Describe the issue

I listen to one podcast that has 1500 episodes, which is understandably pretty hard for the platform to manage. Every time I click on the podcast within the website/Android app, it freezes for a bit. Once the page with all the remaining episodes has loaded, I am typically allowed one action, so I normally start the next episode, which plays as it should. When the episode finishes and I want to start the next one, I need to close the website/app and repeat the process. If I don't restart it, the app/website is unresponsive.

Loading episodes in "pages" might be a fix if the issue is the front end being overloaded with data. Perhaps the maximum number of items per page could even be made customizable if this turns out to be a niche issue? That is at least the only solution I can imagine.
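
For illustration only, here is a rough sketch of what I mean. This is not Audiobookshelf's actual code or API; the endpoint, function name, and page sizes are made up:

```ts
// Hypothetical sketch: serve episodes in pages so neither the server nor the
// client ever has to hold all 1500+ episodes for a single response.
import express from "express";

interface EpisodePage {
  total: number;
  episodes: unknown[];
}

// Placeholder data access: imagine this querying the podcastEpisode table
// with LIMIT/OFFSET instead of loading every row.
async function getEpisodesPage(podcastId: string, limit: number, offset: number): Promise<EpisodePage> {
  return { total: 0, episodes: [] }; // stub for the sketch
}

const app = express();

app.get("/api/podcasts/:id/episodes", async (req, res) => {
  // Default to 50 episodes per page; let the client override up to a hard cap.
  const limit = Math.min(Number(req.query.limit) || 50, 200);
  const page = Math.max(Number(req.query.page) || 0, 0);

  const { total, episodes } = await getEpisodesPage(req.params.id, limit, page * limit);
  res.json({ total, page, limit, episodes });
});

app.listen(3333);
```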

Thank you for taking the time to read my feedback.

Steps to reproduce the issue

  1. Have a podcast library with 1500+ episodes.
  2. Click on the podcast from anywhere.

Audiobookshelf version

v2.2.15

How are you running audiobookshelf?

Docker

@aunefyren aunefyren added the bug Something isn't working label Feb 24, 2023
@unknownmaster038

unknownmaster038 commented Oct 11, 2023

I'm having a similar issue, except my container crashes entirely with a "FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory" in the logs. To work around it, I've moved the extra files out of the podcast folder. Keeping it under 500 episodes seems to be stable, but I can't be entirely sure.

@undaunt

undaunt commented Oct 17, 2023

I'm seeing this behavior on 2.4.4. Every hour I see this:

Config /config /metadata
[2023-10-17 15:00:25] INFO: === Starting Server ===
[2023-10-17 15:00:25] INFO: [Server] Init v2.4.4
[2023-10-17 15:00:25] INFO: [Database] Initializing db at "/config/absdatabase.sqlite"
[2023-10-17 15:00:25] INFO: [Database] Db connection was successful
[2023-10-17 15:00:25] INFO: [Database] Db initialized with models: user, library, libraryFolder, book, podcast, podcastEpisode, libraryItem, mediaProgress, series, bookSeries, author, bookAuthor, collection, collectionBook, playlist, playlistMediaItem, device, playbackSession, feed, feedEpisode, setting
[2023-10-17 15:00:26] INFO: [BackupManager] 15 Backups Found
[2023-10-17 15:00:26] INFO: [LogManager] Init current daily log filename: 2023-10-17.txt
[2023-10-17 15:00:26] INFO: [Watcher] Initializing watcher for "Podcasts".
[2023-10-17 15:00:26] INFO: Listening on port :80
[2023-10-17 15:00:26] INFO: [Watcher] "Podcasts" Ready
[2023-10-17 15:28:07] INFO: [SocketAuthority] Socket Connected 5M0c0J1o9ZLDkVADAAAB
[2023-10-17 15:29:17] INFO: [SocketAuthority] Socket 5M0c0J1o9ZLDkVADAAAB disconnected from client "adam" after 69978ms (Reason: ping timeout)
[2023-10-17 15:36:14] INFO: [SocketAuthority] Socket Connected ZT3bDxOfNz1Bz9zaAAAD
[2023-10-17 15:38:14] INFO: [SocketAuthority] Socket ZT3bDxOfNz1Bz9zaAAAD disconnected from client "adam" after 120160ms (Reason: ping timeout)
<--- Last few GCs --->
[1:0x7f9134b153f0]  3598931 ms: Mark-sweep (reduce) 2046.1 (2056.4) -> 2046.1 (2056.6) MB, 11.1 / 0.0 ms  (+ 11.7 ms in 1 steps since start of marking, biggest step 11.7 ms, walltime since start of marking 27 ms) (average mu = 0.988, current mu = 0.327) a
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory

docker-compose, Ubuntu host, one podcast has over 1700 episodes.
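
A possible stopgap, assuming the image passes environment variables through to the Node process (I haven't verified this), would be raising Node's heap limit in the compose file; Node's default old-space limit is roughly the ~2046 MB shown in the GC log above:

```yaml
# Sketch of a possible compose override, not an official recommendation.
# Volume paths and the host port are placeholders.
services:
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf:latest
    environment:
      - NODE_OPTIONS=--max-old-space-size=4096   # allow up to ~4 GB of heap
    ports:
      - "13378:80"
    volumes:
      - /path/to/config:/config
      - /path/to/metadata:/metadata
      - /path/to/podcasts:/podcasts
```

This would only delay the crash if the payload keeps growing, so paginating the episode list is still the real fix.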

@aunefyren
Author

@undaunt This might help you load the podcast tracks: #2075 (comment)

@advplyr advplyr pinned this issue Dec 29, 2023
@advplyr advplyr added the awaiting release Issue is resolved and will be in the next release label Dec 31, 2023
@advplyr
Owner

advplyr commented Dec 31, 2023

Fixed in v2.7.1

It can still be improved but should be usable now. I tested with an 1100-episode podcast. Let me know how it goes.

@advplyr advplyr closed this as completed Dec 31, 2023
@advplyr advplyr removed the awaiting release Issue is resolved and will be in the next release label Dec 31, 2023
@advplyr advplyr unpinned this issue Jan 1, 2024
@aunefyren
Author

aunefyren commented Jan 1, 2024

Amazing, it is night and day for me. The entire platform is less sluggish, and podcast pages with a ridiculous number of episodes load far faster than I would have expected. Great work.

@revilo951
Contributor

Definitely a drastic improvement over the post-2.3.3 releases, but still not as fast as 2.3.3 itself: the library page still takes several seconds to load, for example.
