
[core] v1 autoscaler consistency fix #40488

Closed
wants to merge 36 commits

Conversation

vitsai
Contributor

@vitsai vitsai commented Oct 19, 2023

A less invasive way to do it.

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

vitsai added 13 commits October 11, 2023 08:48
Signed-off-by: vitsai <vitsai@cs.stanford.edu> (×13)
@vitsai vitsai changed the base branch from report-usage-v2 to master October 19, 2023 09:20
@vitsai vitsai changed the base branch from master to report-usage-v2 October 19, 2023 09:20
Signed-off-by: vitsai <vitsai@cs.stanford.edu> (×6)
@@ -188,15 +190,15 @@ void GcsResourceManager::HandleGetAllResourceUsage(
     rpc::GetAllResourceUsageRequest request,
     rpc::GetAllResourceUsageReply *reply,
     rpc::SendReplyCallback send_reply_callback) {
-  if (!node_resource_usages_.empty()) {
+  if (!gcs_autoscaler_state_manager_.GetNodeResourceInfo().empty()) {
Contributor
I wonder if we should just pull the current PG load from the GcsPlacementGroupManager, instead of relying on the periodic runner?

Contributor Author

That's a bigger refactor, beyond the scope of this change.

Contributor
If GcsAutoscalerStateManager exposes a GetPlacementGroupLoad function (which just gets the load from the PG manager) for the below:

auto placement_group_load = gcs_placement_group_manager_.GetPlacementGroupLoad();

Then we could remove the dependency between GcsResourceManager and GcsPlacementGroupManager, so we don't need the changes here: https://github.com/ray-project/ray/pull/40488/files#diff-f33835748a6d386dd44a3449d9299117799bd9b552655054f901e42c5fbb59a1R253-R262 ?

This also unifies v1/v2 for how PG load is populated with a potentially smaller change?
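
For illustration, a minimal sketch of the delegation being suggested, assuming GcsAutoscalerStateManager holds a reference to the placement group manager and that the return type simply mirrors the PG manager's existing accessor; the method name comes from the comment above, the rest is illustrative:

// Sketch only, not code from this PR. Assumes the state manager keeps a
// reference to GcsPlacementGroupManager and mirrors its accessor's return type.
std::shared_ptr<rpc::PlacementGroupLoad>
GcsAutoscalerStateManager::GetPlacementGroupLoad() const {
  // Delegate straight to the PG manager so the v1 and v2 paths both read
  // placement group load from a single source instead of a cached copy.
  return gcs_placement_group_manager_.GetPlacementGroupLoad();
}

With an accessor like this, the resource manager would read PG load on demand rather than having it pushed in by the periodic runner.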

Contributor Author

The periodic function also schedules pending placement groups; it doesn't just update the info.
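
For context, a loose sketch of the coupling being described; the callback name and enclosing class are hypothetical, and the two calls stand in for the scheduling and load-reporting steps the periodic runner performs:

// Loose sketch, not Ray source: the periodic path has two side effects, so a
// pull-only GetPlacementGroupLoad would not be a drop-in replacement for it.
void GcsServerSketch::PeriodicPlacementGroupTick() {
  // Side effect 1: try to place placement groups that are still pending.
  gcs_placement_group_manager_->SchedulePendingPlacementGroups();
  // Side effect 2: push the current PG load snapshot to the resource manager,
  // which the v1 autoscaler report path then reads.
  gcs_resource_manager_->UpdatePlacementGroupLoad(
      gcs_placement_group_manager_->GetPlacementGroupLoad());
}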

Contributor Author

Also, we already removed the dependency in the current change.

Comment on lines 261 to 262
gcs_resource_manager_->UpdatePlacementGroupLoad(
gcs_placement_group_manager_->GetPlacementGroupLoad());
Collaborator

I think we can remove placement_group_load in gcs_resource_manager, since it will get all loads from the GCS autoscaler state manager now?

Contributor Author

We can also do that this way: #40254

Signed-off-by: vitsai <vitsai@cs.stanford.edu> (×4)
@vitsai vitsai changed the base branch from report-usage-v2 to master October 30, 2023 16:45
Signed-off-by: vitsai <vitsai@cs.stanford.edu>
@vitsai vitsai mentioned this pull request Oct 30, 2023
vitsai added 10 commits October 30, 2023 21:01
Signed-off-by: vitsai <vitsai@cs.stanford.edu> (×8)

stale bot commented Dec 15, 2023

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

  • If you'd like to keep this open, just leave any comment, and the stale label will be removed.

@stale stale bot added the stale label Dec 15, 2023
@DmitriGekhtman
Contributor

Is this a viable approach to resolving the issue?
