
Improve scheduler memory usage #8144

Merged: 2 commits merged into knative:main from improve-scheduler-memory-usage on Aug 9, 2024

Conversation

@pierDipi pierDipi (Member) commented Aug 9, 2024

  • Create a namespace-scoped StatefulSet lister instead of a cluster-wide one
  • Accept a PodLister rather than creating a cluster-wide one (see the sketch after this list)
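A minimal sketch of both changes, assuming client-go's informer factories; the `Scheduler` type, constructor, and resync period below are illustrative stand-ins, not the PR's actual code:

```go
package scheduler

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	appsv1listers "k8s.io/client-go/listers/apps/v1"
	corev1listers "k8s.io/client-go/listers/core/v1"
)

// Scheduler holds only the listers it needs. The PodLister is passed in
// by the caller (who typically maintains one already) instead of this
// package spinning up its own cluster-wide pod informer.
type Scheduler struct {
	statefulSetLister appsv1listers.StatefulSetLister
	podLister         corev1listers.PodLister
}

// NewScheduler builds a StatefulSet lister whose cache is scoped to a
// single namespace, so it stores only that namespace's StatefulSets
// rather than every StatefulSet in the cluster.
func NewScheduler(client kubernetes.Interface, namespace string, podLister corev1listers.PodLister) *Scheduler {
	factory := informers.NewSharedInformerFactoryWithOptions(
		client,
		10*time.Minute,                     // resync period, illustrative
		informers.WithNamespace(namespace), // namespace-scoped cache
	)
	lister := factory.Apps().V1().StatefulSets().Lister()
	// Note: the factory still needs Start() and WaitForCacheSync()
	// before the lister returns results; omitted here for brevity.
	return &Scheduler{statefulSetLister: lister, podLister: podLister}
}
```

Scoping the informer to one namespace shrinks the in-memory cache, which is where the memory saving comes from.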

Fixes #

Proposed Changes

  • Improve scheduler memory usage

Pre-review Checklist

  • At least 80% unit test coverage
  • E2E tests for any new behavior
  • Docs PR for any user-facing impact
  • Spec PR for any new API feature
  • Conformance test for any change to the spec

@pierDipi pierDipi requested a review from Cali0707 August 9, 2024 08:41
@knative-prow knative-prow bot added the approved (Indicates a PR has been approved by an approver from all required OWNERS files.) and size/M (Denotes a PR that changes 30-99 lines, ignoring generated files.) labels Aug 9, 2024
@pierDipi pierDipi requested a review from creydr August 9, 2024 08:41
- Create a namespace-scoped StatefulSet lister instead of a
  cluster-wide one
- Accept a PodLister rather than creating a cluster-wide one

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
@pierDipi pierDipi force-pushed the improve-scheduler-memory-usage branch from 4b17f61 to c3d3b78 on August 9, 2024 at 09:22

knative-prow bot commented Aug 9, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: pierDipi

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


codecov bot commented Aug 9, 2024

Codecov Report

Attention: Patch coverage is 80.95238% with 4 lines in your changes missing coverage. Please review.

Project coverage is 67.91%. Comparing base (ecb6c01) to head (22bfaa9).
Report is 2 commits behind head on main.

Files                                     Patch %   Lines
pkg/scheduler/statefulset/scheduler.go    80.95%    4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #8144      +/-   ##
==========================================
+ Coverage   67.89%   67.91%   +0.01%     
==========================================
  Files         368      368              
  Lines       17571    17581      +10     
==========================================
+ Hits        11930    11940      +10     
  Misses       4893     4893              
  Partials      748      748              


Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
@pierDipi pierDipi force-pushed the improve-scheduler-memory-usage branch from 775f379 to 22bfaa9 on August 9, 2024 at 11:08
@knative-prow knative-prow bot added the size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.) label and removed the size/M label Aug 9, 2024
@creydr creydr (Member) left a comment

/lgtm

@knative-prow knative-prow bot added the lgtm (Indicates that a PR is ready to be merged.) label Aug 9, 2024
@knative-prow knative-prow bot merged commit d69b8b4 into knative:main Aug 9, 2024
36 checks passed
pierDipi added a commit to pierDipi/eventing that referenced this pull request Sep 23, 2024
* Improve scheduler memory usage

- Create a namespace-scoped StatefulSet lister instead of a
  cluster-wide one
- Accept a PodLister rather than creating a cluster-wide one

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* Update codegen

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

---------

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
pierDipi added a commit to pierDipi/eventing that referenced this pull request Sep 23, 2024
knative-prow bot pushed a commit that referenced this pull request Sep 23, 2024
…its to speed up recovery time (#8202)

* Improve scheduler memory usage (#8144)

* Improve scheduler memory usage

- Create a namespace-scoped StatefulSet lister instead of a
  cluster-wide one
- Accept a PodLister rather than creating a cluster-wide one

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* Update codegen

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

---------

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* Remove scheduler `wait`s to speed up recovery time (#8200)

Currently, the scheduler and autoscaler are single-threaded and use
a lock to prevent multiple scheduling and autoscaling decisions
from happening in parallel; this is not a problem for our use
cases. However, the multiple `wait`s currently present slow
down recovery time.
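A hypothetical sketch of that pattern (not the actual knative/eventing code): one mutex serializes decisions, so any blocking wait inside the critical section delays every queued decision:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// autoscaler is an illustrative stand-in for the real component: a
// single mutex serializes scheduling and autoscaling decisions.
type autoscaler struct {
	mu sync.Mutex
}

func (a *autoscaler) scale(ctx context.Context) error {
	a.mu.Lock() // only one decision runs at a time
	defer a.mu.Unlock()

	// Before the patch, a blocking wait here (e.g. polling until pods
	// become ready) held the lock and stalled every queued decision,
	// which is what made recovery slow with many triggers. Removing
	// the wait lets each decision complete immediately.
	fmt.Println("making scaling decision")
	return nil
}

func main() {
	_ = (&autoscaler{}).scale(context.Background())
}
```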

From my testing, if I delete and recreate the Kafka control plane
and data plane, without this patch it takes 1 hour to recover when there
are 400 triggers, or 20 minutes when there are 100 triggers; with the
patch, recovery is almost immediate (only 2-3 minutes with 400 triggers).

- Remove `wait`s from the state builder and autoscaler
- Add additional debug logs
- Use the logger provided through the context, as opposed to global loggers
  in each individual component, to preserve `knative/pkg` resource-aware
  log keys (see the sketch after this commit message)

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

---------

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
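A minimal sketch of the logger change, assuming `knative.dev/pkg/logging`; the `stateBuilder` component is hypothetical:

```go
package main

import (
	"context"

	"knative.dev/pkg/logging"
)

// stateBuilder is a hypothetical component used only to illustrate
// the change described above.
type stateBuilder struct{}

// Before: each component kept its own global zap logger, losing the
// resource-aware keys (namespace, name, ...) that knative/pkg attaches
// to the logger stored in the context.
//
// After: pull the logger from the context at the call site, so log
// lines inherit whatever keys the reconciler already attached.
func (b *stateBuilder) buildState(ctx context.Context) {
	logger := logging.FromContext(ctx) // *zap.SugaredLogger with resource-aware keys
	logger.Debugw("building scheduler state")
}

func main() {
	(&stateBuilder{}).buildState(context.Background())
}
```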
knative-prow bot pushed a commit that referenced this pull request Sep 23, 2024
…its to speed up recovery time (#8203)