Fix tetragon_process_cache_size metric #2827

Merged

lambdanis merged 1 commit into main from pr/lambdanis/fix-cache-size on Aug 23, 2024
Conversation

lambdanis
Contributor

The tetragon_process_cache_size metric was increased on every add() and never decreased. This is incorrect. Fix it so that it's increased on add() only if there was no LRU eviction, and decreased on remove().

Signed-off-by: Anna Kapuscinska <anna@isovalent.com>
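
For illustration, here is a minimal sketch of the intended accounting. This is not the actual Tetragon code: the gauge variable, helper names, and key type are hypothetical, and it assumes a hashicorp/golang-lru cache whose Add() reports whether an eviction happened.

package processcache

import (
	lru "github.com/hashicorp/golang-lru/v2"
	"github.com/prometheus/client_golang/prometheus"
)

// processCacheSize is a hypothetical stand-in for the real
// tetragon_process_cache_size gauge.
var processCacheSize = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "tetragon_process_cache_size",
	Help: "The size of the process cache",
})

type ProcessInternal struct{}

type Cache struct {
	cache *lru.Cache[string, *ProcessInternal]
}

// add increments the gauge only when the LRU did not evict an entry to make
// room, i.e. only when the cache actually grew.
func (pc *Cache) add(execID string, p *ProcessInternal) {
	evicted := pc.cache.Add(execID, p)
	if !evicted {
		processCacheSize.Inc()
	}
}

// remove decrements the gauge only when an entry was actually deleted.
func (pc *Cache) remove(execID string) {
	if pc.cache.Remove(execID) {
		processCacheSize.Dec()
	}
}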
@lambdanis lambdanis added the area/metrics (Related to prometheus metrics) and release-note/bug (This PR fixes an issue in a previous release of Tetragon) labels on Aug 20, 2024
@lambdanis lambdanis requested a review from a team as a code owner August 20, 2024 20:39
@lambdanis lambdanis requested a review from olsajiri August 20, 2024 20:39
@@ -121,8 +121,9 @@ func (pc *Cache) refInc(p *ProcessInternal) {
 	atomic.AddUint32(&p.refcnt, 1)
 }
 
-func (pc *Cache) Purge() {
+func (pc *Cache) purge() {
Contributor

There's another Purge instance in here:

pc.cache.Purge()

but I'm not sure it's needed.

Contributor Author

The flow is: purge() sends to stopChan, then cacheGarbageCollector() reads from this channel and actually purges the inner lru.Cache (the line you pointed to). So the metric reset happens in a different goroutine than the actual cleanup, but it should be fine.
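
Roughly, that handoff looks like the sketch below. This is a simplified illustration, not the actual implementation; only purge(), stopChan, cacheGarbageCollector(), and the inner lru.Cache come from the code under discussion, the rest is assumed.

package processcache

import lru "github.com/hashicorp/golang-lru/v2"

type ProcessInternal struct{}

type Cache struct {
	cache    *lru.Cache[string, *ProcessInternal]
	stopChan chan struct{}
}

// purge does not touch the LRU itself; it only signals the collector goroutine.
func (pc *Cache) purge() {
	pc.stopChan <- struct{}{}
}

// cacheGarbageCollector runs cleanup in its own goroutine. On a stop signal it
// calls Purge on the inner lru.Cache, which is where entries are actually
// dropped. Only the stop/purge path is sketched here.
func (pc *Cache) cacheGarbageCollector() {
	go func() {
		<-pc.stopChan
		pc.cache.Purge()
	}()
}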

@lambdanis lambdanis merged commit 9e35cba into main Aug 23, 2024
42 checks passed
@lambdanis lambdanis deleted the pr/lambdanis/fix-cache-size branch August 23, 2024 19:36
Labels
area/metrics: Related to prometheus metrics
release-note/bug: This PR fixes an issue in a previous release of Tetragon.