This repository has been archived by the owner on Jun 19, 2022. It is now read-only.

Retry pod memory usage grows without bound #1102

Closed
grantr opened this issue May 20, 2020 · 1 comment
Labels
area/broker, kind/bug, priority/1, release/1, storypoint/3
Milestone
Backlog
Comments

grantr (Contributor) commented May 20, 2020

Describe the bug
As mentioned in #876, when the retry service has consumers with long timeouts, its memory usage grows until the pod is evicted.

Expected behavior
Retry service memory usage remains stable as load increases.

To Reproduce
#876 (comment)

Additional context
This is likely caused by max outstanding bytes being set too high. With hundreds of triggers, each consuming messages more slowly than they arrive, the retry service can eventually buffer up to max outstanding bytes × number of triggers. We can reduce the max outstanding bytes/messages limits as suggested in #876.
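For readers landing here: a minimal sketch of the mitigation being suggested, assuming the retry service opens one Pub/Sub subscription per trigger via the cloud.google.com/go/pubsub client. The ReceiveSettings fields are the client's real flow-control knobs; the project/subscription names and the specific budget values below are illustrative, not the values from the eventual fix.

```go
// Sketch: cap the Pub/Sub client's per-subscription flow-control budget so
// total buffered memory stays bounded even with many slow consumers.
package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, "my-project") // hypothetical project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sub := client.Subscription("trigger-sub") // hypothetical; one subscription per trigger

	// Client defaults are MaxOutstandingMessages=1000 and
	// MaxOutstandingBytes=1e9 (~1 GB) *per subscription*. With hundreds of
	// triggers, worst-case buffering is hundreds of GB.
	sub.ReceiveSettings.MaxOutstandingMessages = 50            // illustrative cap
	sub.ReceiveSettings.MaxOutstandingBytes = 10 * 1024 * 1024 // ~10 MiB per trigger

	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		// Slow consumers now exert backpressure on Pub/Sub instead of
		// letting undelivered messages pile up in process memory.
		m.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

For scale: with the client defaults (1e9 outstanding bytes per subscription), 200 slow triggers could in the worst case pin ~200 GB of message data; a 10 MiB per-trigger cap bounds the same fleet to ~2 GiB.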

grantr added the kind/bug, area/broker, priority/1, and release/1 labels on May 20, 2020
grantr added this to the Backlog milestone on May 20, 2020
yolocs (Member) commented May 27, 2020

Resolved by the PR.

yolocs closed this as completed on May 27, 2020