
Memory leak and device crash #4437

Closed
timkgh opened this issue Mar 27, 2022 · 92 comments
Comments


timkgh commented Mar 27, 2022

Issue Details

  • Version of AdGuard Home server:
    • v0.107.5
  • How did you install AdGuard Home:
    • GitHub releases
  • How did you setup DNS configuration:
    • router
  • If it's a router or IoT, please write device model:
    • Ubiquiti EdgeRouter ER-12, EdgeOS
  • CPU architecture:
    • mips64_softfloat (Cavium Octeon III V0.2 FPU V0.0)
  • Operating system and version:
    • EdgeOS v2.0.9-hotfix.2

Expected Behavior

Run for long periods of time without crashes/memory leak.

Actual Behavior

Leaks memory until the router is out of memory and it crashes.

Screenshots

After about 26 days of uptime:

[screenshot: agh1]

After the screenshot above, I restarted AGH and it shows a reasonable memory footprint:

[screenshot: agh2]

You can see the memory use on the device drops from 456M to 177M after an AGH restart, but it continues to climb until it eventually exhausts the entire device memory.

Additional Information

Cache settings:

cache_size: 4194304
cache_ttl_min: 300
cache_ttl_max: 86400
cache_optimistic: true
safebrowsing_cache_size: 1048576
safesearch_cache_size: 1048576
parental_cache_size: 1048576
cache_time: 30

Clearing the stats and logs while AGH is running does not help reclaim memory, only an AGH restart does.

@SiNONiMiTY

Bumping this up. This needs visibility.

EugeneOne1 (Member) commented Mar 31, 2022

@timkgh, hello, and apologies for the late response. Unfortunately, we can't reproduce the issue on our test machines. Could you please answer a couple of questions:

  1. How many filter lists do you use? How many rules in total?
  2. What value of querylog_size_memory (in the dns section) is set in your AdGuardHome.yaml file?

Also, could you please collect some profiling info to enrich the issue details? Here is how:

  1. Set debug_pprof in AdGuardHome.yaml to true. This enables some profiling HTTP endpoints on localhost (127.0.0.1);
  2. Then, run AGH for some time. A few minutes should be enough;
  3. Next, fetch http://127.0.0.1:6060/debug/pprof/heap and save it to a text file;
  4. Finally, send the resulting file to devteam@adguard.com with a subject like "Issue 4437".
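For reference, step 1 is a one-line change in AdGuardHome.yaml. A sketch (the key's placement at the top level is an assumption based on the v0.107.x config layout described in this thread):

```yaml
# Enables the profiling endpoints at http://127.0.0.1:6060/debug/pprof/
# (localhost only).
debug_pprof: true
```

Once AGH is running with this setting, the heap profile can be fetched from http://127.0.0.1:6060/debug/pprof/heap.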

Sorry for the somewhat complicated request. Thanks.

EugeneOne1 added the "waiting for data" (waiting for users to provide more data) label Mar 31, 2022
timkgh (Author) commented Apr 1, 2022

I believe it's a cache issue. I've been running it with:

cache_size: 0
cache_optimistic: false

for a few days now, and it's stable; the memory use does not creep up.

I don't think the cache limit of 4 MB from my previous config is being enforced; the cache appears to grow unbounded.
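For context, a byte-size limit on a cache is normally enforced by evicting old entries before an insert would exceed the budget. The sketch below is a generic illustration of that idea under my own naming (it is not AdGuard Home's actual cache, which lives in the AdguardTeam/dnsproxy module):

```go
package main

import "fmt"

// boundedCache evicts entries in insertion order once the total
// stored byte size would exceed the configured limit.
type boundedCache struct {
	limit int
	size  int
	order []string
	data  map[string][]byte
}

func newBoundedCache(limit int) *boundedCache {
	return &boundedCache{limit: limit, data: map[string][]byte{}}
}

func (c *boundedCache) set(key string, val []byte) {
	// Evict the oldest entries until the new value fits under the limit.
	for c.size+len(val) > c.limit && len(c.order) > 0 {
		oldest := c.order[0]
		c.order = c.order[1:]
		c.size -= len(c.data[oldest])
		delete(c.data, oldest)
	}
	c.data[key] = val
	c.order = append(c.order, key)
	c.size += len(val)
}

func main() {
	c := newBoundedCache(100)
	c.set("a", make([]byte, 60))
	c.set("b", make([]byte, 60)) // forces eviction of "a"
	fmt.Println(c.size <= 100, len(c.data)) // → true 1
}
```

A cache that never runs this eviction step (or miscounts entry sizes) would show exactly the unbounded growth described above, with cache_size effectively ignored.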

[screenshot]

As for your other question, I don't run lots of filters, just filters from the built-in lists:

filters:
- enabled: true
  url: https://adguardteam.github.io/AdGuardSDNSFilter/Filters/filter.txt
  name: AdGuard DNS filter
  id: 1
- enabled: true
  url: https://raw.githubusercontent.com/hoshsadiq/adblock-nocoin-list/master/hosts.txt
  name: NoCoin Filter List
  id: 1634786883
- enabled: true
  url: https://raw.githubusercontent.com/durablenapkin/scamblocklist/master/adguard.txt
  name: Scam Blocklist by DurableNapkin
  id: 1634786884
- enabled: true
  url: https://raw.githubusercontent.com/Spam404/lists/master/main-blacklist.txt
  name: Spam404
  id: 1634786885
- enabled: true
  url: https://curben.gitlab.io/malware-filter/urlhaus-filter-agh-online.txt
  name: Online Malicious URL Blocklist
  id: 1634786886
- enabled: true
  url: https://raw.githubusercontent.com/mitchellkrogza/The-Big-List-of-Hacked-Malware-Web-Sites/master/hosts
  name: The Big List of Hacked Malware Web Sites
  id: 1634786887
- enabled: true
  url: https://raw.githubusercontent.com/DandelionSprout/adfilt/master/Alternate%20versions%20Anti-Malware%20List/AntiMalwareAdGua
  name: Dandelion Sprout's Anti-Malware List
  id: 1640801008
- enabled: true
  url: https://raw.githubusercontent.com/crazy-max/WindowsSpyBlocker/master/data/hosts/spy.txt
  name: WindowsSpyBlocker - Hosts spy rules
  id: 1645727393

Query log size:

querylog_size_memory: 1000

@EugeneOne1 (Member)

@timkgh, thanks for the clarifications. Could you please perform a few more tests?

  1. Restore the cache_size value to 4 MB and set cache_optimistic to false;
  2. Run AGH for a couple of days and check the memory usage statistics.

In short, we suspect the optimistic caching feature causes these leaks, so the above test should confirm or rule out that assumption.

Also, given your clarifications, we'd like slightly different profiling data. Could you please follow the same steps as I described above, but fetch http://127.0.0.1:6060/debug/pprof/goroutine instead of http://127.0.0.1:6060/debug/pprof/heap?

EugeneOne1 added the "needs investigation" (needs to be reproduced reliably) label Apr 1, 2022
timkgh (Author) commented Apr 1, 2022

Trying cache_size: 4194304 and cache_optimistic: false

Meanwhile, I think you have sufficient info above to try to reproduce the issue in parallel and get the profiling you need.

It is also possible that these caches are leaking too; I have not turned them off, and I do have the corresponding features enabled:

safebrowsing_cache_size: 1048576
safesearch_cache_size: 1048576
parental_cache_size: 1048576

@SiNONiMiTY

Just an observation, it seems that the memory leak starts after filter automatic updates.

My configuration

cache_size: 0
cache_optimistic: false
filters_update_interval: 72

Three days ago, I restarted AGH after updating the filter lists, and memory usage was low. Today, after the scheduled automatic filter update, memory usage grew significantly. I will continue to observe this behavior.
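One hypothetical mechanism consistent with this observation (purely an assumption on my part, not a confirmed diagnosis): if each filter update builds a new rules engine while something still holds a reference to the previous one, the old engines can never be garbage-collected, and memory grows by roughly one engine per update. A minimal Go illustration of that retention pattern, with invented names:

```go
package main

import "fmt"

// engine stands in for a large filtering engine rebuilt on each update.
type engine struct{ rules []byte }

var current *engine
var stray []*engine // simulates a lingering reference kept by mistake

func reloadFilters(leakOldEngines bool) {
	if leakOldEngines && current != nil {
		// A reference like this keeps every old engine reachable,
		// so the GC can never reclaim them.
		stray = append(stray, current)
	}
	current = &engine{rules: make([]byte, 1<<20)} // ~1 MB per engine
}

func main() {
	for i := 0; i < 5; i++ {
		reloadFilters(true)
	}
	fmt.Println(len(stray)) // → 4 (old engines still reachable)
}
```

If something like this were happening, shortening filters_update_interval would accelerate the growth, which is exactly the experiment suggested below.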

timkgh (Author) commented Apr 2, 2022

@SiNONiMiTY you can try setting filters_update_interval: 1 and see if it accelerates the leak. I have it set at 1, and I don't think it's the cause of the leaks. The memory use does fluctuate; it goes up and down, but that's normal garbage-collection behavior in Go. Using the web interface, e.g. the query log, will also cause a memory spike, but it seems to come back down. Of course, any of these actions could still cause slight leaks that only become apparent over very long periods of time.

SiNONiMiTY commented Apr 2, 2022

@timkgh I will try this. It is good that it comes back down in your case. I have been looking at this for the past few days, and this is the only consistent behavior I am seeing. I don't even have caching enabled, which could otherwise be another cause of the memory leak.

Here are the preliminary results after setting update interval to 1 hour.

BEFORE
[screenshot]

AFTER (UPDATE 1)
[screenshot]

AFTER (UPDATE 2)
[screenshot]

jumpsmm7 (Contributor) commented Apr 5, 2022


@EugeneOne1, it appears there may be some form of anomaly being reported by users on ARM-based architectures as well, concerning similar memory-leak issues. I just wanted to bring to your attention that it may not be only the MIPS architecture impacted by this anomaly. You can see where I and other users running AdGuard Home on asuswrt-merlin routers have been discussing it at this link:

https://www.snbforums.com/threads/release-asuswrt-merlin-adguardhome-installer-amaghi.76506/page-40#post-755358

Here is where I have requested the aid of other asuswrt-merlin users to see if we can get some more tests:

http://www.snbforums.com/threads/adguardhome-vs-diversion-some-basics.78332/post-756017

jumpsmm7 (Contributor) commented Apr 6, 2022

> 3. http://127.0.0.1:6060/debug/pprof/heap

@EugeneOne1, does this help you?

/opt/bin/go/bin/go tool pprof -top http://localhost:6060/debug/pprof/heap
Fetching profile over HTTP from http://localhost:6060/debug/pprof/heap
Saved profile in /root/pprof/pprof.AdGuardHome.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz
File: AdGuardHome
Type: inuse_space
Time: Apr 6, 2022 at 1:44am (UTC)
Showing nodes accounting for 82.75MB, 100% of 82.75MB total
flat flat% sum% cum cum%
67.74MB 81.86% 81.86% 77.74MB 93.94% github.com/AdguardTeam/urlfilter.NewDNSEngine
10MB 12.09% 93.94% 10MB 12.09% github.com/AdguardTeam/urlfilter.(*DNSEngine).addRule
1MB 1.21% 95.15% 1MB 1.21% github.com/AdguardTeam/dnsproxy/proxy.(*cacheItem).pack
0.51MB 0.61% 95.76% 0.51MB 0.61% crypto/x509.(*CertPool).addCertFunc (inline)
0.50MB 0.61% 96.37% 0.50MB 0.61% github.com/AdguardTeam/AdGuardHome/internal/aghnet.(*fsysARPDB).Refresh
0.50MB 0.61% 96.98% 0.50MB 0.61% golang.org/x/net/http2/hpack.newInternalNode (inline)
0.50MB 0.61% 97.58% 0.50MB 0.61% encoding/pem.Decode
0.50MB 0.6% 98.19% 0.50MB 0.6% runtime.allocm
0.50MB 0.6% 98.79% 0.50MB 0.6% runtime.malg
0.50MB 0.6% 99.40% 0.50MB 0.6% github.com/AdguardTeam/AdGuardHome/internal/home.Main.func1
0.50MB 0.6% 100% 0.50MB 0.6% github.com/AdguardTeam/urlfilter/filterlist.readLine
0 0% 100% 1.01MB 1.22% crypto/x509.(*CertPool).AppendCertsFromPEM
0 0% 100% 0.50MB 0.61% github.com/AdguardTeam/AdGuardHome/internal/aghnet.(*arpdbs).Refresh
0 0% 100% 0.50MB 0.6% github.com/AdguardTeam/AdGuardHome/internal/dnsforward.(*Server).filterDNSRequest
0 0% 100% 1.50MB 1.81% github.com/AdguardTeam/AdGuardHome/internal/dnsforward.(*Server).handleDNSRequest
0 0% 100% 0.50MB 0.6% github.com/AdguardTeam/AdGuardHome/internal/dnsforward.(*Server).processFilteringBeforeRequest
0 0% 100% 1MB 1.21% github.com/AdguardTeam/AdGuardHome/internal/dnsforward.(*Server).processUpstream
0 0% 100% 0.50MB 0.6% github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).CheckHost
0 0% 100% 77.74MB 93.94% github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).SetFilters
0 0% 100% 77.74MB 93.94% github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).initFiltering
0 0% 100% 0.50MB 0.6% github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).matchHost
0 0% 100% 0.50MB 0.61% github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).Reload (inline)
0 0% 100% 0.50MB 0.61% github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).addFromSystemARP
0 0% 100% 0.50MB 0.61% github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).periodicUpdate
0 0% 100% 1.01MB 1.22% github.com/AdguardTeam/AdGuardHome/internal/home.LoadSystemRootCAs
0 0% 100% 77.74MB 93.94% github.com/AdguardTeam/AdGuardHome/internal/home.enableFiltersLocked
0 0% 100% 1.01MB 1.22% github.com/AdguardTeam/AdGuardHome/internal/home.run
0 0% 100% 77.74MB 93.94% github.com/AdguardTeam/AdGuardHome/internal/home.run.func2
0 0% 100% 1.01MB 1.22% github.com/AdguardTeam/AdGuardHome/internal/home.setupContext
0 0% 100% 77.74MB 93.94% github.com/AdguardTeam/AdGuardHome/internal/home.startDNSServer
0 0% 100% 1MB 1.21% github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).Resolve
0 0% 100% 1MB 1.21% github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).cacheResp
0 0% 100% 1.50MB 1.81% github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).handleDNSRequest
0 0% 100% 1.50MB 1.81% github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).udpHandlePacket
0 0% 100% 1.50MB 1.81% github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).udpPacketLoop.func1
0 0% 100% 1MB 1.21% github.com/AdguardTeam/dnsproxy/proxy.(*cache).set
0 0% 100% 0.50MB 0.6% github.com/AdguardTeam/urlfilter.(*DNSEngine).MatchRequest
0 0% 100% 0.50MB 0.6% github.com/AdguardTeam/urlfilter.(*DNSEngine).matchLookupTable
0 0% 100% 0.50MB 0.6% github.com/AdguardTeam/urlfilter/filterlist.(*FileRuleList).RetrieveRule
0 0% 100% 0.50MB 0.6% github.com/AdguardTeam/urlfilter/filterlist.(*RuleStorage).RetrieveHostRule
0 0% 100% 0.50MB 0.6% github.com/AdguardTeam/urlfilter/filterlist.(*RuleStorage).RetrieveRule
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2.(*ClientConn).readLoop
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2.(*Framer).ReadFrame
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2.(*Framer).readMetaFrame
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2.(*clientConnReadLoop).run
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2/hpack.(*Decoder).Write
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2/hpack.(*Decoder).parseFieldLiteral
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2/hpack.(*Decoder).parseHeaderFieldRepr
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2/hpack.(*Decoder).readString
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2/hpack.buildRootHuffmanNode
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2/hpack.getRootHuffmanNode (inline)
0 0% 100% 0.50MB 0.61% golang.org/x/net/http2/hpack.huffmanDecode
0 0% 100% 0.50MB 0.6% runtime.mstart
0 0% 100% 0.50MB 0.6% runtime.mstart0
0 0% 100% 0.50MB 0.6% runtime.mstart1
0 0% 100% 0.50MB 0.6% runtime.newm
0 0% 100% 0.50MB 0.6% runtime.newproc.func1
0 0% 100% 0.50MB 0.6% runtime.newproc1
0 0% 100% 0.50MB 0.6% runtime.resetspinning
0 0% 100% 0.50MB 0.6% runtime.schedule
0 0% 100% 0.50MB 0.6% runtime.startm
0 0% 100% 0.50MB 0.6% runtime.systemstack
0 0% 100% 0.50MB 0.6% runtime.wakep
0 0% 100% 0.50MB 0.61% sync.(*Once).Do (inline)
0 0% 100% 0.50MB 0.61% sync.(*Once).doSlow

I also took the time to email the pprof file, as requested.

jumpsmm7 (Contributor) commented Apr 6, 2022

> http://127.0.0.1:6060/debug/pprof/goroutine

@EugeneOne1

As requested, here is the pprof for the goroutine profile.

/tmp/home/root# /opt/bin/go/bin/go tool pprof -top http://localhost:6060/debug/pprof/goroutine
Fetching profile over HTTP from http://localhost:6060/debug/pprof/goroutine
Saved profile in /root/pprof/pprof.AdGuardHome.goroutine.001.pb.gz
File: AdGuardHome
Type: goroutine
Time: Apr 6, 2022 at 1:55am (UTC)
Showing nodes accounting for 28, 100% of 28 total
flat flat% sum% cum cum%
24 85.71% 85.71% 24 85.71% runtime.gopark
1 3.57% 89.29% 1 3.57% net/http.(*connReader).backgroundRead
1 3.57% 92.86% 1 3.57% runtime.notetsleepg
1 3.57% 96.43% 1 3.57% runtime/pprof.runtime_goroutineProfileWithLabels
1 3.57% 100% 1 3.57% syscall.Syscall6
0 0% 100% 2 7.14% bufio.(*Reader).Read
0 0% 100% 2 7.14% bytes.(*Buffer).ReadFrom
0 0% 100% 2 7.14% crypto/tls.(*Conn).Read
0 0% 100% 2 7.14% crypto/tls.(*Conn).readFromUntil
0 0% 100% 2 7.14% crypto/tls.(*Conn).readRecord (inline)
0 0% 100% 2 7.14% crypto/tls.(*Conn).readRecordOrCCS
0 0% 100% 2 7.14% crypto/tls.(*atLeastReader).Read
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/aghnet.(*HostsContainer).handleEvents
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/aghos.(*osWatcher).handleErrors
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/aghos.(*osWatcher).handleEvents
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).filtersInitializer
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.(*Filtering).periodicallyRefreshFilters
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.(*RDNS).workerLoop
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.(*WHOIS).workerLoop
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).Start
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).Start.func1
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).startBetaServer.func1
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).tlsServerLoop
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).handleHostsUpdates
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).periodicUpdate
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.Main
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.Main.func1
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.handleServiceControlAction
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.memoryUsage.func1
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.run
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/home.run.func1
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/querylog.(*queryLog).periodicRotate
0 0% 100% 1 3.57% github.com/AdguardTeam/AdGuardHome/internal/stats.(*statsCtx).periodicFlush
0 0% 100% 1 3.57% github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).tcpPacketLoop
0 0% 100% 1 3.57% github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).udpPacketLoop
0 0% 100% 1 3.57% github.com/AdguardTeam/dnsproxy/proxyutil.UDPRead (inline)
0 0% 100% 1 3.57% github.com/AdguardTeam/dnsproxy/proxyutil.udpRead
0 0% 100% 1 3.57% github.com/fsnotify/fsnotify.(*Watcher).readEvents
0 0% 100% 1 3.57% github.com/fsnotify/fsnotify.(*fdPoller).wait
0 0% 100% 1 3.57% github.com/kardianos/service.(*sysv).Run
0 0% 100% 1 3.57% github.com/kardianos/service.(*sysv).Run.func1
0 0% 100% 1 3.57% github.com/patrickmn/go-cache.(*janitor).Run
0 0% 100% 2 7.14% golang.org/x/net/http2.(*ClientConn).readLoop
0 0% 100% 2 7.14% golang.org/x/net/http2.(*Framer).ReadFrame
0 0% 100% 2 7.14% golang.org/x/net/http2.(*clientConnReadLoop).run
0 0% 100% 2 7.14% golang.org/x/net/http2.readFrameHeader
0 0% 100% 1 3.57% golang.org/x/sys/unix.EpollWait
0 0% 100% 4 14.29% internal/poll.(*FD).Accept
0 0% 100% 2 7.14% internal/poll.(*FD).Read
0 0% 100% 1 3.57% internal/poll.(*FD).ReadMsg
0 0% 100% 7 25.00% internal/poll.(*pollDesc).wait
0 0% 100% 7 25.00% internal/poll.(*pollDesc).waitRead (inline)
0 0% 100% 7 25.00% internal/poll.runtime_pollWait
0 0% 100% 2 7.14% io.ReadAtLeast
0 0% 100% 2 7.14% io.ReadFull (inline)
0 0% 100% 1 3.57% main.main
0 0% 100% 4 14.29% net.(*TCPListener).Accept
0 0% 100% 4 14.29% net.(*TCPListener).accept
0 0% 100% 1 3.57% net.(*UDPConn).ReadMsgUDP
0 0% 100% 1 3.57% net.(*UDPConn).readMsg
0 0% 100% 2 7.14% net.(*conn).Read
0 0% 100% 2 7.14% net.(*netFD).Read
0 0% 100% 4 14.29% net.(*netFD).accept
0 0% 100% 1 3.57% net.(*netFD).readMsg
0 0% 100% 1 3.57% net/http.(*ServeMux).ServeHTTP
0 0% 100% 3 10.71% net/http.(*Server).ListenAndServe
0 0% 100% 3 10.71% net/http.(*Server).Serve
0 0% 100% 1 3.57% net/http.(*conn).serve
0 0% 100% 1 3.57% net/http.HandlerFunc.ServeHTTP
0 0% 100% 1 3.57% net/http.ListenAndServe (inline)
0 0% 100% 1 3.57% net/http.serverHandler.ServeHTTP
0 0% 100% 1 3.57% net/http/pprof.Index
0 0% 100% 1 3.57% net/http/pprof.handler.ServeHTTP
0 0% 100% 1 3.57% os/signal.loop
0 0% 100% 1 3.57% os/signal.signal_recv
0 0% 100% 11 39.29% runtime.chanrecv
0 0% 100% 4 14.29% runtime.chanrecv1
0 0% 100% 7 25.00% runtime.chanrecv2
0 0% 100% 1 3.57% runtime.goparkunlock (inline)
0 0% 100% 1 3.57% runtime.main
0 0% 100% 7 25.00% runtime.netpollblock
0 0% 100% 2 7.14% runtime.selectgo
0 0% 100% 1 3.57% runtime/pprof.(*Profile).WriteTo
0 0% 100% 1 3.57% runtime/pprof.writeGoroutine
0 0% 100% 1 3.57% runtime/pprof.writeRuntimeProfile
0 0% 100% 1 3.57% sync.(*Cond).Wait
0 0% 100% 1 3.57% sync.runtime_notifyListWait
0 0% 100% 3 10.71% time.Sleep

I will also submit it by email.

jumpsmm7 (Contributor) commented Apr 6, 2022

@EugeneOne1

Feel free to request any other info. I will provide whatever I am able to access, and I am also available for testing if you have any suggestions.

jumpsmm7 (Contributor) commented Apr 6, 2022

goroutine 57646 [running]:

runtime/pprof.writeGoroutineStacks({0x1c867c0, 0x400029e460})
runtime/pprof/pprof.go:693 +0x8c
runtime/pprof.writeGoroutine({0x1c867c0, 0x400029e460}, 0x2)
runtime/pprof/pprof.go:682 +0x3c
runtime/pprof.(*Profile).WriteTo(0x21eaf80, {0x1c867c0, 0x400029e460}, 0x2)
runtime/pprof/pprof.go:331 +0x154
net/http/pprof.handler.ServeHTTP({0x4007dff7b1, 0x9}, {0x1c930f0, 0x400029e460}, 0x40002f1400)
net/http/pprof/pprof.go:253 +0x520
net/http/pprof.Index({0x1c930f0, 0x400029e460}, 0x40002f1400)
net/http/pprof/pprof.go:371 +0x104
net/http.HandlerFunc.ServeHTTP(0x1bb9ba8, {0x1c930f0, 0x400029e460}, 0x40002f1400)
net/http/server.go:2047 +0x40
net/http.(*ServeMux).ServeHTTP(0x4000290000, {0x1c930f0, 0x400029e460}, 0x40002f1400)
net/http/server.go:2425 +0x18c
net/http.serverHandler.ServeHTTP({0x400029e000}, {0x1c930f0, 0x400029e460}, 0x40002f1400)
net/http/server.go:2879 +0x45c
net/http.(*conn).serve(0x40080fd540, {0x1c9af90, 0x4000195890})
net/http/server.go:1930 +0xb54
created by net/http.(*Server).Serve
net/http/server.go:3034 +0x4ac

goroutine 1 [chan receive, 52 minutes]:

github.com/kardianos/service.(*sysv).Run.func1()
github.com/kardianos/service@v1.2.0/service_sysv_linux.go:147 +0xa8
github.com/kardianos/service.(*sysv).Run(0x40001959e0)
github.com/kardianos/service@v1.2.0/service_sysv_linux.go:148 +0xc8
github.com/AdguardTeam/AdGuardHome/internal/home.handleServiceControlAction({0x0, {0x7fe4ec4e71, 0x25}, {0x7fe4ec4e9a, 0x14}, {0x0, 0x0, 0x0}, 0x0, {0x7fe4ec4eeb, ...}, ...}, ...)
github.com/AdguardTeam/AdGuardHome/internal/home/service.go:214 +0x424
github.com/AdguardTeam/AdGuardHome/internal/home.Main({0x1c86e00, 0x1cb8220})
github.com/AdguardTeam/AdGuardHome/internal/home/home.go:123 +0x164
main.main()
github.com/AdguardTeam/AdGuardHome/main.go:17 +0x3c

goroutine 7 [syscall, 52 minutes]:

os/signal.signal_recv()
runtime/sigqueue.go:169 +0x108
os/signal.loop()
os/signal/signal_unix.go:24 +0x20
created by os/signal.Notify.func1.1
os/signal/signal.go:151 +0x38

goroutine 8 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.Main.func1()
github.com/AdguardTeam/AdGuardHome/internal/home/home.go:107 +0x3c
created by github.com/AdguardTeam/AdGuardHome/internal/home.Main
github.com/AdguardTeam/AdGuardHome/internal/home/home.go:105 +0x124

goroutine 9 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).Start(0x40004cc0e0)
github.com/AdguardTeam/AdGuardHome/internal/home/web.go:188 +0x34c
github.com/AdguardTeam/AdGuardHome/internal/home.run({0x0, {0x7fe4ec4e71, 0x25}, {0x7fe4ec4e9a, 0x14}, {0x0, 0x0, 0x0}, 0x0, {0x7fe4ec4eeb, ...}, ...}, ...)
github.com/AdguardTeam/AdGuardHome/internal/home/home.go:493 +0x650
created by github.com/AdguardTeam/AdGuardHome/internal/home.(*program).Start
github.com/AdguardTeam/AdGuardHome/internal/home/service.go:45 +0xa4

goroutine 10 [chan receive, 2 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.memoryUsage.func1()
github.com/AdguardTeam/AdGuardHome/internal/home/memory.go:35 +0x64
created by github.com/AdguardTeam/AdGuardHome/internal/home.memoryUsage
github.com/AdguardTeam/AdGuardHome/internal/home/memory.go:33 +0xcc

goroutine 12 [syscall, 52 minutes]:

syscall.Syscall6(0x16, 0xa, 0x400050fc00, 0x7, 0xffffffffffffffff, 0x0, 0x0)
syscall/asm_linux_arm64.s:35 +0x10
golang.org/x/sys/unix.EpollWait(0xa, {0x400050fc00, 0x7, 0x7}, 0xffffffffffffffff)
golang.org/x/sys@v0.0.0-20211216021012-1d35b9e2eb4e/unix/zsyscall_linux_arm64.go:56 +0x64
github.com/fsnotify/fsnotify.(*fdPoller).wait(0x40003686a0)
github.com/fsnotify/fsnotify@v1.5.1/inotify_poller.go:87 +0x60
github.com/fsnotify/fsnotify.(*Watcher).readEvents(0x40000d8050)
github.com/fsnotify/fsnotify@v1.5.1/inotify.go:193 +0x22c
created by github.com/fsnotify/fsnotify.NewWatcher
github.com/fsnotify/fsnotify@v1.5.1/inotify.go:60 +0x1c8

goroutine 13 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/aghos.(*osWatcher).handleErrors(0x40000e0b10)
github.com/AdguardTeam/AdGuardHome/internal/aghos/fswatcher.go:74 +0x130
created by github.com/AdguardTeam/AdGuardHome/internal/aghos.NewOSWritesWatcher
github.com/AdguardTeam/AdGuardHome/internal/aghos/fswatcher.go:63 +0x180

goroutine 14 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/aghos.(*osWatcher).handleEvents(0x40000e0b10)
github.com/AdguardTeam/AdGuardHome/internal/aghos/fswatcher.go:110 +0xf8
created by github.com/AdguardTeam/AdGuardHome/internal/aghos.NewOSWritesWatcher
github.com/AdguardTeam/AdGuardHome/internal/aghos/fswatcher.go:64 +0x1a0

goroutine 15 [select, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/aghnet.(*HostsContainer).handleEvents(0x40000f6b00)
github.com/AdguardTeam/AdGuardHome/internal/aghnet/hostscontainer.go:240 +0x144
created by github.com/AdguardTeam/AdGuardHome/internal/aghnet.NewHostsContainer
github.com/AdguardTeam/AdGuardHome/internal/aghnet/hostscontainer.go:185 +0x508

goroutine 16 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).handleHostsUpdates(0x21fc880)
github.com/AdguardTeam/AdGuardHome/internal/home/clients.go:142 +0x70
created by github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).Init
github.com/AdguardTeam/AdGuardHome/internal/home/clients.go:137 +0x358

goroutine 33 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a69e030, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x40000f6118, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40000f6100)
internal/poll/fd_unix.go:402 +0x1f4
net.(*netFD).accept(0x40000f6100)
net/fd_unix.go:173 +0x2c
net.(*TCPListener).accept(0x40000aa030)
net/tcpsock_posix.go:140 +0x2c
net.(*TCPListener).Accept(0x40000aa030)
net/tcpsock.go:262 +0x34
net/http.(*Server).Serve(0x400029e000, {0x1c92ee0, 0x40000aa030})
net/http/server.go:3002 +0x364
net/http.(*Server).ListenAndServe(0x400029e000)
net/http/server.go:2931 +0xb0
net/http.ListenAndServe(...)
net/http/server.go:3185
github.com/AdguardTeam/AdGuardHome/internal/home.run.func1(0x4000290000)
github.com/AdguardTeam/AdGuardHome/internal/home/home.go:429 +0xcc
created by github.com/AdguardTeam/AdGuardHome/internal/home.run
github.com/AdguardTeam/AdGuardHome/internal/home/home.go:427 +0xb50

goroutine 36 [chan receive, 12 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.(*RDNS).workerLoop(0x40000d4640)
github.com/AdguardTeam/AdGuardHome/internal/home/rdns.go:122 +0x80
created by github.com/AdguardTeam/AdGuardHome/internal/home.NewRDNS
github.com/AdguardTeam/AdGuardHome/internal/home/rdns.go:60 +0x140

goroutine 37 [chan receive, 51 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.(*WHOIS).workerLoop(0x400036d260)
github.com/AdguardTeam/AdGuardHome/internal/home/whois.go:249 +0x48
created by github.com/AdguardTeam/AdGuardHome/internal/home.initWHOIS
github.com/AdguardTeam/AdGuardHome/internal/home/whois.go:57 +0x140

goroutine 66 [sync.Cond.Wait, 52 minutes]:

sync.runtime_notifyListWait(0x40002900d0, 0x0)
runtime/sema.go:513 +0x168
sync.(*Cond).Wait(0x40002900c0)
sync/cond.go:56 +0xe4
github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).tlsServerLoop(0x40004cc0e0)
github.com/AdguardTeam/AdGuardHome/internal/home/web.go:252 +0x4f0
created by github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).Start
github.com/AdguardTeam/AdGuardHome/internal/home/web.go:163 +0x70

goroutine 67 [IO wait, 2 minutes]:

internal/poll.runtime_pollWait(0x7f5a69df48, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x400055e018, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400055e000)
internal/poll/fd_unix.go:402 +0x1f4
net.(*netFD).accept(0x400055e000)
net/fd_unix.go:173 +0x2c
net.(*TCPListener).accept(0x40000aa018)
net/tcpsock_posix.go:140 +0x2c
net.(*TCPListener).Accept(0x40000aa018)
net/tcpsock.go:262 +0x34
net/http.(*Server).Serve(0x40004cc000, {0x1c92ee0, 0x40000aa018})
net/http/server.go:3002 +0x364
net/http.(*Server).ListenAndServe(0x40004cc000)
net/http/server.go:2931 +0xb0
github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).Start.func1(0x400048a1e0, 0x40004cc0e0)
github.com/AdguardTeam/AdGuardHome/internal/home/web.go:183 +0x64
created by github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).Start
github.com/AdguardTeam/AdGuardHome/internal/home/web.go:180 +0x318

goroutine 68 [IO wait, 52 minutes]:

internal/poll.runtime_pollWait(0x7f5a69de60, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x400055e098, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400055e080)
internal/poll/fd_unix.go:402 +0x1f4
net.(*netFD).accept(0x400055e080)
net/fd_unix.go:173 +0x2c
net.(*TCPListener).accept(0x40000aa048)
net/tcpsock_posix.go:140 +0x2c
net.(*TCPListener).Accept(0x40000aa048)
net/tcpsock.go:262 +0x34
net/http.(*Server).Serve(0x40004cc1c0, {0x1c92ee0, 0x40000aa048})
net/http/server.go:3002 +0x364
net/http.(*Server).ListenAndServe(0x40004cc1c0)
net/http/server.go:2931 +0xb0
github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).startBetaServer.func1(0x40004cc0e0)
github.com/AdguardTeam/AdGuardHome/internal/home/web.go:216 +0x64
created by github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).startBetaServer
github.com/AdguardTeam/AdGuardHome/internal/home/web.go:213 +0x274

goroutine 69 [sleep, 2 minutes]:

time.Sleep(0x8bb2c97000)
runtime/time.go:193 +0x140
github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).periodicUpdate(0x21fc880)
github.com/AdguardTeam/AdGuardHome/internal/home/clients.go:264 +0x78
created by github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).Start
github.com/AdguardTeam/AdGuardHome/internal/home/clients.go:155 +0x70

goroutine 70 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a69dc90, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x40000f6198, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).ReadMsg(0x40000f6180, {0x4005997f51, 0xffff, 0xffff}, {0x4007dff3b0, 0x28, 0x28}, 0x0)
internal/poll/fd_unix.go:250 +0x230
net.(*netFD).readMsg(0x40000f6180, {0x4005997f51, 0xffff, 0xffff}, {0x4007dff3b0, 0x28, 0x28}, 0x0)
net/fd_posix.go:68 +0x64
net.(*UDPConn).readMsg(0x400000e4d8, {0x4005997f51, 0xffff, 0xffff}, {0x4007dff3b0, 0x28, 0x28})
net/udpsock_posix.go:62 +0x60
net.(*UDPConn).ReadMsgUDP(0x400000e4d8, {0x4005997f51, 0xffff, 0xffff}, {0x4007dff3b0, 0x28, 0x28})
net/udpsock.go:144 +0x60
github.com/AdguardTeam/dnsproxy/proxyutil.udpRead(0x400000e4d8, {0x4005997f51, 0xffff, 0xffff}, 0x28)
github.com/AdguardTeam/dnsproxy@v0.41.4/proxyutil/udp_unix.go:68 +0x74
github.com/AdguardTeam/dnsproxy/proxyutil.UDPRead(...)
github.com/AdguardTeam/dnsproxy@v0.41.4/proxyutil/udp.go:25
github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).udpPacketLoop(0x400010e380, 0x400000e4d8, {0x1c8f070, 0x400000e4b0})
github.com/AdguardTeam/dnsproxy@v0.41.4/proxy/server_udp.go:67 +0x23c
created by github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).startListeners
github.com/AdguardTeam/dnsproxy@v0.41.4/proxy/server.go:45 +0x190

goroutine 71 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7f5a69dba8, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x40000f6218, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40000f6200)
internal/poll/fd_unix.go:402 +0x1f4
net.(*netFD).accept(0x40000f6200)
net/fd_unix.go:173 +0x2c
net.(*TCPListener).accept(0x40000aa120)
net/tcpsock_posix.go:140 +0x2c
net.(*TCPListener).Accept(0x40000aa120)
net/tcpsock.go:262 +0x34
github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).tcpPacketLoop(0x400010e380, {0x1c92ee0, 0x40000aa120}, {0x89f7ca, 0x3}, {0x1c8f070, 0x400000e4b0})
github.com/AdguardTeam/dnsproxy@v0.41.4/proxy/server_tcp.go:58 +0x16c
created by github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).startListeners
github.com/AdguardTeam/dnsproxy@v0.41.4/proxy/server.go:49 +0x4d8

goroutine 72 [chan receive, 51 minutes]:
github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).filtersInitializer(0x40004d4000)
github.com/AdguardTeam/AdGuardHome/internal/filtering/filtering.go:338 +0x40
created by github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).Start
github.com/AdguardTeam/AdGuardHome/internal/filtering/filtering.go:975 +0x7c

goroutine 73 [sleep, 51 minutes]:
time.Sleep(0x34630b8a000)
runtime/time.go:193 +0x140
github.com/AdguardTeam/AdGuardHome/internal/home.(*Filtering).periodicallyRefreshFilters(0x21fc928)
github.com/AdguardTeam/AdGuardHome/internal/home/filter.go:263 +0x3c
created by github.com/AdguardTeam/AdGuardHome/internal/home.(*Filtering).Start
github.com/AdguardTeam/AdGuardHome/internal/home/filter.go:52 +0x48

goroutine 74 [sleep]:
time.Sleep(0x3b9aca00)
runtime/time.go:193 +0x140
github.com/AdguardTeam/AdGuardHome/internal/stats.(*statsCtx).periodicFlush(0x40000b81a0)
github.com/AdguardTeam/AdGuardHome/internal/stats/unit.go:305 +0x7c
created by github.com/AdguardTeam/AdGuardHome/internal/stats.(*statsCtx).Start
github.com/AdguardTeam/AdGuardHome/internal/stats/unit.go:183 +0x48

goroutine 75 [chan receive, 51 minutes]:
github.com/AdguardTeam/AdGuardHome/internal/querylog.(*queryLog).periodicRotate(0x40001b2200)
github.com/AdguardTeam/AdGuardHome/internal/querylog/querylogfile.go:150 +0xe0
created by github.com/AdguardTeam/AdGuardHome/internal/querylog.(*queryLog).Start
github.com/AdguardTeam/AdGuardHome/internal/querylog/qlog.go:107 +0x54

goroutine 57661 [IO wait]:
internal/poll.runtime_pollWait(0x7f5a62f7e0, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x40050ff898, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40050ff880, {0x4005c2c600, 0x10fe, 0x10fe})
internal/poll/fd_unix.go:167 +0x1e4
net.(*netFD).Read(0x40050ff880, {0x4005c2c600, 0x10fe, 0x10fe})
net/fd_posix.go:56 +0x44
net.(*conn).Read(0x400000e998, {0x4005c2c600, 0x10fe, 0x10fe})
net/net.go:183 +0x4c
crypto/tls.(*atLeastReader).Read(0x40057da030, {0x4005c2c600, 0x10fe, 0x10fe})
crypto/tls/conn.go:777 +0x58
bytes.(*Buffer).ReadFrom(0x4007375e78, {0x1c85400, 0x40057da030})
bytes/buffer.go:204 +0xa4
crypto/tls.(*Conn).readFromUntil(0x4007375c00, {0x1c864e0, 0x400000e998}, 0x5)
crypto/tls/conn.go:799 +0xe0
crypto/tls.(*Conn).readRecordOrCCS(0x4007375c00, 0x0)
crypto/tls/conn.go:606 +0xf4
crypto/tls.(*Conn).readRecord(...)
crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0x4007375c00, {0x4006669e0a, 0x2, 0x2})
crypto/tls/conn.go:1277 +0x164
io.ReadAtLeast({0x7f5a59fdc8, 0x4007375c00}, {0x4006669e0a, 0x2, 0x2}, 0x2)
io/io.go:328 +0xa8
io.ReadFull(...)
io/io.go:347
encoding/binary.Read({0x7f5a59fdc8, 0x4007375c00}, {0x1ca6950, 0x2757138}, {0x7722e0, 0x4006669e08})
encoding/binary/binary.go:166 +0x90
github.com/miekg/dns.(*Conn).ReadMsgHeader(0x400616fe38, 0x0)
github.com/miekg/dns@v1.1.45/client.go:294 +0x160
github.com/miekg/dns.(*Conn).ReadMsg(0x400616fe38)
github.com/miekg/dns@v1.1.45/client.go:249 +0x2c
github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).exchangeConn(0x40004b3710, {0x1ca8350, 0x4007375c00}, 0x4006965e60)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:99 +0x130
github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).Exchange(0x40004b3710, 0x4006965e60)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:61 +0x380
github.com/AdguardTeam/dnsproxy/upstream.exchangeAsync({0x1c8f110, 0x40004b3710}, 0x4006965d40, 0x4005c50300)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:117 +0x54
created by github.com/AdguardTeam/dnsproxy/upstream.ExchangeParallel
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:45 +0x128

goroutine 46 [select, 51 minutes]:
github.com/patrickmn/go-cache.(*janitor).Run(0x40004ca5b0, 0x4007434680)
github.com/patrickmn/go-cache@v2.1.0+incompatible/cache.go:1079 +0x8c
created by github.com/patrickmn/go-cache.runJanitor
github.com/patrickmn/go-cache@v2.1.0+incompatible/cache.go:1099 +0xcc

goroutine 41 [IO wait]:
internal/poll.runtime_pollWait(0x7f5a69dac0, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x400476b318, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x400476b300, {0x4007450000, 0xfc4, 0xfc4})
internal/poll/fd_unix.go:167 +0x1e4
net.(*netFD).Read(0x400476b300, {0x4007450000, 0xfc4, 0xfc4})
net/fd_posix.go:56 +0x44
net.(*conn).Read(0x4007430000, {0x4007450000, 0xfc4, 0xfc4})
net/net.go:183 +0x4c
crypto/tls.(*atLeastReader).Read(0x4006fafe48, {0x4007450000, 0xfc4, 0xfc4})
crypto/tls/conn.go:777 +0x58
bytes.(*Buffer).ReadFrom(0x400010ecf8, {0x1c85400, 0x4006fafe48})
bytes/buffer.go:204 +0xa4
crypto/tls.(*Conn).readFromUntil(0x400010ea80, {0x1c864e0, 0x4007430000}, 0x5)
crypto/tls/conn.go:799 +0xe0
crypto/tls.(*Conn).readRecordOrCCS(0x400010ea80, 0x0)
crypto/tls/conn.go:606 +0xf4
crypto/tls.(*Conn).readRecord(...)
crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0x400010ea80, {0x4004810000, 0x1000, 0x1000})
crypto/tls/conn.go:1277 +0x164
bufio.(*Reader).Read(0x400480a360, {0x40004cc580, 0x9, 0x9})
bufio/bufio.go:227 +0x214
io.ReadAtLeast({0x1c85240, 0x400480a360}, {0x40004cc580, 0x9, 0x9}, 0x9)
io/io.go:328 +0xa8
io.ReadFull(...)
io/io.go:347
golang.org/x/net/http2.readFrameHeader({0x40004cc580, 0x9, 0x9}, {0x1c85240, 0x400480a360})
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/frame.go:237 +0x5c
golang.org/x/net/http2.(*Framer).ReadFrame(0x40004cc540)
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/frame.go:498 +0x90
golang.org/x/net/http2.(*clientConnReadLoop).run(0x40002c3fb0)
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:2101 +0x10c
golang.org/x/net/http2.(*ClientConn).readLoop(0x4000170a80)
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:1997 +0x58
created by golang.org/x/net/http2.(*Transport).newClientConn
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:725 +0xaa8

goroutine 57598 [IO wait]:
internal/poll.runtime_pollWait(0x7f5a62eed0, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x40050ff418, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40050ff400, {0x4008d51680, 0x205, 0x205})
internal/poll/fd_unix.go:167 +0x1e4
net.(*netFD).Read(0x40050ff400, {0x4008d51680, 0x205, 0x205})
net/fd_posix.go:56 +0x44
net.(*conn).Read(0x400000e970, {0x4008d51680, 0x205, 0x205})
net/net.go:183 +0x4c
crypto/tls.(*atLeastReader).Read(0x4006fafab8, {0x4008d51680, 0x205, 0x205})
crypto/tls/conn.go:777 +0x58
bytes.(*Buffer).ReadFrom(0x4007374cf8, {0x1c85400, 0x4006fafab8})
bytes/buffer.go:204 +0xa4
crypto/tls.(*Conn).readFromUntil(0x4007374a80, {0x1c864e0, 0x400000e970}, 0x5)
crypto/tls/conn.go:799 +0xe0
crypto/tls.(*Conn).readRecordOrCCS(0x4007374a80, 0x0)
crypto/tls/conn.go:606 +0xf4
crypto/tls.(*Conn).readRecord(...)
crypto/tls/conn.go:574
crypto/tls.(*Conn).readHandshake(0x4007374a80)
crypto/tls/conn.go:1009 +0x48
crypto/tls.(*Conn).clientHandshake(0x4007374a80, {0x1c9aee8, 0x40058ef900})
crypto/tls/handshake_client.go:179 +0x230
crypto/tls.(*Conn).handshakeContext(0x4007374a80, {0x1c9af20, 0x4000034068})
crypto/tls/conn.go:1445 +0x388
crypto/tls.(*Conn).HandshakeContext(...)
crypto/tls/conn.go:1395
crypto/tls.(*Conn).Handshake(...)
crypto/tls/conn.go:1379
github.com/AdguardTeam/dnsproxy/upstream.tlsDial(0x40000ec930, {0x89f7ca, 0x3}, 0x4000171080)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_pool.go:111 +0x268
github.com/AdguardTeam/dnsproxy/upstream.(*TLSPool).Create(0x400742f1a0)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_pool.go:75 +0x68
github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).Exchange(0x40004b3710, 0x40006245a0)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:71 +0x4a8
github.com/AdguardTeam/dnsproxy/upstream.exchangeAsync({0x1c8f110, 0x40004b3710}, 0x4000624480, 0x40070ebe60)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:117 +0x54
created by github.com/AdguardTeam/dnsproxy/upstream.ExchangeParallel
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:45 +0x128

goroutine 57647 [runnable]:
net/http.(*connReader).backgroundRead(0x40052cf800)
net/http/server.go:671
created by net/http.(*connReader).startBackgroundRead
net/http/server.go:668 +0xc4

goroutine 57610 [IO wait]:
internal/poll.runtime_pollWait(0x7f5a1d0bf0, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x40052b9e18, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40052b9e00, {0x400930f680, 0x205, 0x205})
internal/poll/fd_unix.go:167 +0x1e4
net.(*netFD).Read(0x40052b9e00, {0x400930f680, 0x205, 0x205})
net/fd_posix.go:56 +0x44
net.(*conn).Read(0x400030e280, {0x400930f680, 0x205, 0x205})
net/net.go:183 +0x4c
crypto/tls.(*atLeastReader).Read(0x4007efa9f0, {0x400930f680, 0x205, 0x205})
crypto/tls/conn.go:777 +0x58
bytes.(*Buffer).ReadFrom(0x40001f5e78, {0x1c85400, 0x4007efa9f0})
bytes/buffer.go:204 +0xa4
crypto/tls.(*Conn).readFromUntil(0x40001f5c00, {0x1c864e0, 0x400030e280}, 0x5)
crypto/tls/conn.go:799 +0xe0
crypto/tls.(*Conn).readRecordOrCCS(0x40001f5c00, 0x0)
crypto/tls/conn.go:606 +0xf4
crypto/tls.(*Conn).readRecord(...)
crypto/tls/conn.go:574
crypto/tls.(*Conn).readHandshake(0x40001f5c00)
crypto/tls/conn.go:1009 +0x48
crypto/tls.(*Conn).clientHandshake(0x40001f5c00, {0x1c9aee8, 0x40054b6bc0})
crypto/tls/handshake_client.go:179 +0x230
crypto/tls.(*Conn).handshakeContext(0x40001f5c00, {0x1c9af20, 0x4000034068})
crypto/tls/conn.go:1445 +0x388
crypto/tls.(*Conn).HandshakeContext(...)
crypto/tls/conn.go:1395
crypto/tls.(*Conn).Handshake(...)
crypto/tls/conn.go:1379
github.com/AdguardTeam/dnsproxy/upstream.tlsDial(0x40000ec930, {0x89f7ca, 0x3}, 0x40070b4300)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_pool.go:111 +0x268
github.com/AdguardTeam/dnsproxy/upstream.(*TLSPool).Create(0x400742f1a0)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_pool.go:75 +0x68
github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).Exchange(0x40004b3710, 0x4006964a20)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:71 +0x4a8
github.com/AdguardTeam/dnsproxy/upstream.exchangeAsync({0x1c8f110, 0x40004b3710}, 0x4006964900, 0x4006437860)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:117 +0x54
created by github.com/AdguardTeam/dnsproxy/upstream.ExchangeParallel
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:45 +0x128

goroutine 16304 [IO wait]:
internal/poll.runtime_pollWait(0x7f5a1d0938, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x40001b2518, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40001b2500, {0x4008b25900, 0x1107, 0x1107})
internal/poll/fd_unix.go:167 +0x1e4
net.(*netFD).Read(0x40001b2500, {0x4008b25900, 0x1107, 0x1107})
net/fd_posix.go:56 +0x44
net.(*conn).Read(0x400000e848, {0x4008b25900, 0x1107, 0x1107})
net/net.go:183 +0x4c
crypto/tls.(*atLeastReader).Read(0x4007efabe8, {0x4008b25900, 0x1107, 0x1107})
crypto/tls/conn.go:777 +0x58
bytes.(*Buffer).ReadFrom(0x40049e73f8, {0x1c85400, 0x4007efabe8})
bytes/buffer.go:204 +0xa4
crypto/tls.(*Conn).readFromUntil(0x40049e7180, {0x1c864e0, 0x400000e848}, 0x5)
crypto/tls/conn.go:799 +0xe0
crypto/tls.(*Conn).readRecordOrCCS(0x40049e7180, 0x0)
crypto/tls/conn.go:606 +0xf4
crypto/tls.(*Conn).readRecord(...)
crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0x40049e7180, {0x4005898000, 0x1000, 0x1000})
crypto/tls/conn.go:1277 +0x164
bufio.(*Reader).Read(0x400480a1e0, {0x4004d863c0, 0x9, 0x9})
bufio/bufio.go:227 +0x214
io.ReadAtLeast({0x1c85240, 0x400480a1e0}, {0x4004d863c0, 0x9, 0x9}, 0x9)
io/io.go:328 +0xa8
io.ReadFull(...)
io/io.go:347
golang.org/x/net/http2.readFrameHeader({0x4004d863c0, 0x9, 0x9}, {0x1c85240, 0x400480a1e0})
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/frame.go:237 +0x5c
golang.org/x/net/http2.(*Framer).ReadFrame(0x4004d86380)
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/frame.go:498 +0x90
golang.org/x/net/http2.(*clientConnReadLoop).run(0x4004946fb0)
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:2101 +0x10c
golang.org/x/net/http2.(*ClientConn).readLoop(0x400537c780)
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:1997 +0x58
created by golang.org/x/net/http2.(*Transport).newClientConn
golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:725 +0xaa8

goroutine 57586 [IO wait]:
internal/poll.runtime_pollWait(0x7f5a62ec18, 0x72)
runtime/netpoll.go:234 +0xa4
internal/poll.(*pollDesc).wait(0x40050fef18, 0x72, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x38
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40050fef00, {0x4004ef1300, 0x10fe, 0x10fe})
internal/poll/fd_unix.go:167 +0x1e4
net.(*netFD).Read(0x40050fef00, {0x4004ef1300, 0x10fe, 0x10fe})
net/fd_posix.go:56 +0x44
net.(*conn).Read(0x4007430618, {0x4004ef1300, 0x10fe, 0x10fe})
net/net.go:183 +0x4c
crypto/tls.(*atLeastReader).Read(0x4007efa2a0, {0x4004ef1300, 0x10fe, 0x10fe})
crypto/tls/conn.go:777 +0x58
bytes.(*Buffer).ReadFrom(0x4008d7b3f8, {0x1c85400, 0x4007efa2a0})
bytes/buffer.go:204 +0xa4
crypto/tls.(*Conn).readFromUntil(0x4008d7b180, {0x1c864e0, 0x4007430618}, 0x5)
crypto/tls/conn.go:799 +0xe0
crypto/tls.(*Conn).readRecordOrCCS(0x4008d7b180, 0x0)
crypto/tls/conn.go:606 +0xf4
crypto/tls.(*Conn).readRecord(...)
crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0x4008d7b180, {0x4006765d02, 0x2, 0x2})
crypto/tls/conn.go:1277 +0x164
io.ReadAtLeast({0x7f5a59fdc8, 0x4008d7b180}, {0x4006765d02, 0x2, 0x2}, 0x2)
io/io.go:328 +0xa8
io.ReadFull(...)
io/io.go:347
encoding/binary.Read({0x7f5a59fdc8, 0x4008d7b180}, {0x1ca6950, 0x2757138}, {0x7722e0, 0x4006765d00})
encoding/binary/binary.go:166 +0x90
github.com/miekg/dns.(*Conn).ReadMsgHeader(0x400052fe38, 0x0)
github.com/miekg/dns@v1.1.45/client.go:294 +0x160
github.com/miekg/dns.(*Conn).ReadMsg(0x400052fe38)
github.com/miekg/dns@v1.1.45/client.go:249 +0x2c
github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).exchangeConn(0x40004b3710, {0x1ca8350, 0x4008d7b180}, 0x400560def0)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:99 +0x130
github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).Exchange(0x40004b3710, 0x400560def0)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:61 +0x380
github.com/AdguardTeam/dnsproxy/upstream.exchangeAsync({0x1c8f110, 0x40004b3710}, 0x400560ddd0, 0x40070eb9e0)
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:117 +0x54
created by github.com/AdguardTeam/dnsproxy/upstream.ExchangeParallel
github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:45 +0x128

@jumpsmm7
Contributor

jumpsmm7 commented Apr 6, 2022

Bumping this up. This needs visibility.

Definitely.

@timkgh
Author

timkgh commented Apr 6, 2022

Could it also have something to do with mmap'ing on these architectures?

@timkgh
Author

timkgh commented Apr 6, 2022

FWIW I've been running with cache_optimistic: false for a few days now and it seems stable.
I'll turn cache_optimistic: true back on to try to narrow it down.

@jumpsmm7
Contributor

jumpsmm7 commented Apr 6, 2022

TBH, what I think is needed is adding the --no-mem-optimization flag to your init.d arguments.

@timkgh
Author

timkgh commented Apr 6, 2022

TBH, what I think is needed is adding the --no-mem-optimization flag to your init.d arguments.

Can someone please explain what it really does?

@jumpsmm7
Contributor

jumpsmm7 commented Apr 6, 2022

TBH, what I think is needed is adding the --no-mem-optimization flag to your init.d arguments.

Can someone please explain what it really does?

I am under the impression that AdGuardHome attempts to optimize its memory usage in some way or shape. Maybe it is not doing so properly, or it is incompatible with this platform. But as you have asked, it would definitely be better to understand what this option is doing.

@EugeneOne1
Member

@jumpsmm7, have you faced a similar problem? There are no extraordinary memory allocations in the provided profiling information actually. The most memory-hungry processes are the filter updates. How many filter rules have you got?

@jumpsmm7
Contributor

jumpsmm7 commented Apr 6, 2022

@EugeneOne1 I actually had only one list in use, which was OISD full.

@jumpsmm7
Contributor

jumpsmm7 commented Apr 6, 2022

@EugeneOne1

@jumpsmm7, have you faced a similar problem? There are no extraordinary memory allocations in the provided profiling information actually. The most memory-hungry processes are the filter updates. How many filter rules have you got?

The problem we are actually running into is that, over a couple of days of normal use, memory consumption increases and is never released. Basically, each time AdGuard Home updates the filter lists, it holds on to more and more memory. It does not appear to let go of the old memory before performing a filter-list update. Maybe there is some sort of memory-leak issue that needs to be looked into, particularly around filter-update processing versus when AdGuard Home releases the memory it currently holds. The issue seems to disappear if optimistic caching is disabled with cache_optimistic: false. Definitely something is going on.

@timkgh
Author

timkgh commented Apr 6, 2022

@EugeneOne1 Can you please explain what --no-mem-optimization does and whether it can affect certain architectures in a negative way?

@jumpsmm7
Contributor

jumpsmm7 commented Apr 6, 2022

@EugeneOne1 Can you please explain what --no-mem-optimization does and whether it can affect certain architectures in a negative way?

In our case, we would want to know if adding it may help. It appears the OP is not passing this argument when launching AdGuard Home.

@jumpsmm7
Contributor

jumpsmm7 commented Apr 6, 2022

@timkgh
I found the info myself
b3a68bb
and here
#2044

@SiNONiMiTY

SiNONiMiTY commented Apr 6, 2022

I've been running AGH with the --no-mem-optimization argument, and memory usage is consistent.

Maybe there is something wrong with the garbage collection.

You can check my previous post for the memory usage without the argument.

This is the current consumption now, and it stays consistent in that range.
image

@EugeneOne1
Member

@SiNONiMiTY, well, that's interesting. @timkgh, @jumpsmm7, could you also please try to run AGH with the --no-mem-optimization flag?

@EugeneOne1
Member

@jumpsmm7, AFAIK, it performs some runtime hacks to improve memory usage, but honestly it seems obsolete.

@trinib

trinib commented May 22, 2022

@jumpsmm7 @SiNONiMiTY @EugeneOne1 After 7 hours of just doing nothing, it STILL SHOWS HIGH MEMORY on a 1 GB RAM Linux VPS, and AGH gets stuck on "DNS server starting up".

image

@trinib

trinib commented May 22, 2022

Hmm, I added a blocklist with fewer URLs (still a lot), and after a reboot it stays around 35% RAM; AGH starts up and is stable. So I guess it is the blocklist URL handling in AGH that causes the memory issue 🤷‍♂️

image

@jumpsmm7
Contributor

@trinib, do me a favor: once the memory usage gets high, try running a different program that also demands a lot of memory, then tell me if AdGuard Home starts releasing memory. From my understanding, this is a behavior of Go itself: Go does not return memory to the OS until it is needed elsewhere.

@trinib

trinib commented May 23, 2022

@jumpsmm7 It does go down while running a memory stress test, but it rises right back up after stopping it.

explorer_yR1r8LTdxb.mp4

@trinib

trinib commented May 23, 2022

I would like to add: from a clean install, I used a backup YAML file with all the blocklist URLs instead of the Python script, rebooted the Pi, and memory still went high. Then I deleted all the URLs in the YAML without a clean install, rebooted the Pi, and AGH memory was very low again.

Why does memory get and stay so high when processing lots of blocklist filters, or when logs fill up from days or weeks of buildup? 🤔🤔🤔

@jumpsmm7
Contributor

@trinib how much of the memory consumption is cache memory and how much of it is actually being used?

@trinib

trinib commented May 23, 2022

@trinib how much of the memory consumption is cache memory and how much of it is actually being used?

@jumpsmm7 Should I use the free command?

@trinib

trinib commented May 23, 2022

image

@jumpsmm7
Contributor

jumpsmm7 commented May 24, 2022

@trinib

Here is mine running on a router that uses busybox.

image

and the arch

image

AdGuardHome at the bottom of my htop

image

@trinib

trinib commented May 24, 2022

@trinib

Here is mine running on a router that uses busybox.

image

and the arch

image

AdGuardHome at the bottom of my htop

image

I do not understand. What is this supposed to mean? Did you do something to get lower memory usage? Did you run the script with all the URLs? Correct me if I am wrong, but I see 1 GB of RAM with 829 MB in use, and AGH at 26%? Please tell me you ran the script to get those results. If so, what tweak did you apply?

@ainar-g
Contributor

ainar-g commented May 24, 2022

@trinib, thank you for the script!

@EugeneOne1, please investigate using the script.

@jumpsmm7
Contributor

@trinib Here is me using your script. (This is during and after.)
(screenshots of memory usage during and after running the script)

@trinib

trinib commented May 25, 2022

@jumpsmm7 Do you have anything changed in AGH? You should check out ShareX.

@jumpsmm7
Contributor

@jumpsmm7 do you have any thing change in AGH ? You should check out sharex.

Nothing out of the ordinary, other than high memory usage from the large number of filters that AdGuard Home has loaded into memory.

@jumpsmm7
Contributor

AdGuard Home seems to struggle with how to commit memory when this many filters have been added.

@trinib

trinib commented May 25, 2022

AdGuard Home seems to struggle with how to commit memory when this many filters have been added.

Hopefully adding many URLs helps pinpoint more quickly where AGH has the memory issue.

@cxw620

cxw620 commented Jul 1, 2022

Well, it seems I've hit the same problem.
I have about 30 filters, including some official ones. My 2 GB VPS crashed while updating them, though my 4 GB VPS with the same configuration works well. I have also enabled optimistic caching.
Are there any further solutions now?

@jkellerer

Could it be related to the GC in Go taking too much memory on small devices when it isn't limited? AGH piled up to 1.5 GB on a 4 GB VM until I limited it to 384 MB (using Docker); now it runs fine at about 350 MB on average.

Go itself is set to introduce a soft memory limit via the GOMEMLIMIT environment variable in 1.19; until that is released, ulimit (e.g. via Docker) can potentially help.

@ainar-g
Contributor

ainar-g commented Oct 6, 2022

@timkgh, is this still an issue? Because we could never really reproduce this.

@ainar-g ainar-g added cannot reproduce and removed needs investigation Needs to be reproduced reliably. labels Oct 6, 2022
@jumpsmm7
Contributor

jumpsmm7 commented Oct 6, 2022

The only thing I can say is that when using an embedded or otherwise small device with little memory, you definitely need swap and a good sense of how many filters and lists you use. If you expect to run a lot of filters, set up swap in case an out-of-memory condition occurs. On a VM, allocate an adequate amount of memory if you expect to load many filters. All of these blocklists are loaded into memory: the more filters or blocked domains, the more care must be taken with the memory required. The memory does not fill up indefinitely, although it may appear to when the available memory is inadequate for the number of lists in use. This is not something the program can control; it relies on the user's judgement and understanding of the system they operate.

@timkgh
Author

timkgh commented Oct 6, 2022

@timkgh, is this still an issue? Because we could never really reproduce this.

I turned off optimistic caching because of other issues (e.g., Chrome can cache stale DNS responses for a minute, which is annoying), and I haven't seen any memory problems since.

We can probably close this issue.

@ainar-g ainar-g closed this as not planned Won't fix, can't repro, duplicate, stale Oct 6, 2022
heyxkhoa pushed a commit to heyxkhoa/AdGuardHome that referenced this issue Mar 20, 2023
Merge in DNS/adguard-home from 4437-rm-mem-opt to master

Updates AdguardTeam#4437.
Updates AdguardTeam#2044.

Squashed commit of the following:

commit d1e5520
Merge: 73a6b49 8bb9546
Author: Eugene Burkov <E.Burkov@AdGuard.COM>
Date:   Wed Apr 6 19:37:09 2022 +0300

    Merge branch 'master' into 4437-rm-mem-opt

commit 73a6b49
Author: Eugene Burkov <E.Burkov@AdGuard.COM>
Date:   Wed Apr 6 18:33:23 2022 +0300

    all: imp log of changes

commit a62efcd
Author: Eugene Burkov <E.Burkov@AdGuard.COM>
Date:   Wed Apr 6 18:27:42 2022 +0300

    home: depr memory opt
heyxkhoa pushed a commit to heyxkhoa/AdGuardHome that referenced this issue Mar 20, 2023
Merge in DNS/adguard-home from imp-help to master

Updates AdguardTeam#4437.

Squashed commit of the following:

commit 941338b
Author: Eugene Burkov <E.Burkov@AdGuard.COM>
Date:   Thu Apr 7 13:59:55 2022 +0300

    home: imp help output
@ainar-g ainar-g removed the waiting for data Waiting for users to provide more data. label Aug 25, 2023
Projects
None yet
Development

No branches or pull requests

10 participants