Do not filter tasks before gathering data #6371
Conversation
distributed/tests/test_worker.py
Outdated
-@pytest.mark.parametrize("close_worker", [False, True])
+@pytest.mark.parametrize(
+    "close_worker", [False, pytest.param(True, marks=pytest.mark.slow)]
+)
(cancelled, True) now takes 5s instead of 100ms, as the network comms are now fired blindly.
(resumed, True) was already taking 5s before this PR.
Is that related to #6354 at all? Because Scheduler.remove_worker doesn't flush or await the BatchedSend, so after remove_worker returns, there's still some delay until it receives the message and actually shuts down? 5s seems longer than I'd expect.
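For illustration, a rough sketch of the flush-before-return idea (a sketch only, assuming BatchedSend.send() merely buffers messages and BatchedSend.close() flushes the buffer before closing the comm; this is not Scheduler.remove_worker's actual body):

# Hypothetical sketch, not the real remove_worker implementation.
async def remove_worker(self, address: str) -> None:
    bcomm = self.stream_comms[address]
    bcomm.send({"op": "close", "report": False})  # only buffered so far
    # Without an explicit flush/close here, remove_worker returns while
    # the "close" message may still sit in the buffer for up to `interval`.
    await bcomm.close()  # assumed to flush pending messages, then close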
I'm getting this traceback:
File "/home/crusaderky/github/distributed/distributed/worker.py", line 4575, in _get_data
comm = await rpc.connect(worker)
File "/home/crusaderky/github/distributed/distributed/core.py", line 1184, in connect
return await connect_attempt
File "/home/crusaderky/github/distributed/distributed/core.py", line 1120, in _connect
comm = await connect(
File "/home/crusaderky/github/distributed/distributed/comm/core.py", line 315, in connect
raise OSError(
OSError: Timed out trying to connect to tcp://127.0.0.1:34011 after 5 s
When Worker.close() is invoked, nothing seems to explicitly shut down the RPC channel.
distributed/distributed/worker.py
Line 1568 in 33fc50c
await self.rpc.close()
maybe this just happens too late?
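As a rough sketch of that hypothesis (illustrative ordering only; the real Worker.close does much more than this):

async def close(self) -> None:
    # Hypothetical reordering: closing the rpc first makes pending
    # _get_data connect attempts fail immediately instead of retrying
    # until the 5 s connect timeout fires.
    await self.rpc.close()
    ...  # rest of the teardown unchanged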
Moving to #6409
for key in to_gather_keys:
    ts = self.tasks.get(key)
    if ts is None:
        continue
This should never happen. The finally clause of gather_dep strongly states this by using an unguarded access: self.tasks[key].
Yes, we worked very hard to ensure tasks are not accidentally forgotten. I encourage being as strict as possible with this. A KeyError is often a sign of a messed-up transition somewhere else.
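As a sketch, the stricter variant being encouraged here (illustrative; not necessarily the final code of this PR):

for key in to_gather_keys:
    # Fail loudly instead of skipping silently: a missing key is usually
    # a symptom of a botched transition elsewhere, so surfacing the
    # KeyError beats papering over it.
    ts = self.tasks[key]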
stop: float,
data: dict[str, Any],
cause: TaskState,
worker: str,
Off topic: post-refactor, should this method move to the state machine class, or stay in Worker proper?
It's network- and diagnostics-related, so I'm inclined to say this does not belong in the state machine class.
if ts.state == "cancelled":
    recommendations[ts] = "released"
else:
    recommendations[ts] = "fetch"
ts.state == "memory"
distributed/worker.py
Outdated
if ts.state == "cancelled":
    recommendations[ts] = "released"
else:
    recommendations[ts] = "fetch"
This tests ts.state a lot later than before. There's a new test in this PR to verify this works for tasks that transition during the comms.
Was the prior behavior just an optimization to avoid fetching keys that were cancelled in the handle_instructions to gather_dep interstice? Before, we avoided fetching them; now we don't? Seems like a nice thing to add back eventually, but the simplification here is nice.
I'm curious how much a blocked event loop #6325 would make this scenario more likely.
The prior behaviour was introduced by #5426 as a response to a deadlock.
Before #5426 you had two use cases:
a1. cancelled during comms
a2. cancelled task is received and implicitly transitioned to released
b1. cancelled in the interstice
b2. cancelled task is not fetched
b3. deadlock
After #5426:
a1. cancelled during comms
a2. cancelled task is received and implicitly transitioned to released
b1. cancelled in the interstice
b2. cancelled task is explicitly transitioned to released
After this PR:
- cancelled whenever
- cancelled task is explicitly transitioned to released
I really don't think we should care about performance optimizations in this case. Transitions from flight to cancelled should not be that frequent to begin with?
I agree that we should remove this optimization if possible. It's not worth it and it didn't feel great to introduce it in the first place.
By now, I trust tests around these edge cases enough that if all is green after removal, we're good to go
Overall seems good, I appreciate the simplification.
Co-authored-by: Gabe Joseph <gjoseph92@gmail.com>
Unit Test Results: 15 files ±0, 15 suites ±0, 7h 14m 38s ⏱️ +9m 49s. For more details on these failures, see this check. Results for commit 6f9caed. ± Comparison against base commit 33fc50c. ♻️ This comment has been updated with latest results.
distributed/worker.py
Outdated
typically the next to be executed but since we're fetching tasks for potentially
many dependents, an exact match is not possible.
FYI this entire "get_cause" thing is necessary for acquire_replica, where there is not necessarily a dependent known to the worker. It's not about the ambiguity of having multiple dependents.
Added a note about acquire-replicas
distributed/worker.py
Outdated
if ts.state == "cancelled":
    recommendations[ts] = "released"
else:
    recommendations[ts] = "fetch"
I believe we should remove this special treatment. The bigger point of the transition system was to simplify these kinds of clauses and allow us to make a recommendation without investigating start states. This did not work well all the time, but in this case it works flawlessly and reduces complexity as intended.
I also like the original transition log better, because a successful fetch should recommend a transition to memory. However, the state machine decides to forget instead, because it knows the history and knows that the key was cancelled. This is much more in line with how I would envision this system to work.
diff --git a/distributed/tests/test_cancelled_state.py b/distributed/tests/test_cancelled_state.py
index cab21a5c..74a039b7 100644
--- a/distributed/tests/test_cancelled_state.py
+++ b/distributed/tests/test_cancelled_state.py
@@ -322,10 +322,7 @@ async def test_in_flight_lost_after_resumed(c, s, b):
             ("free-keys", (fut1.key,)),
             (fut1.key, "resumed", "released", "cancelled", {}),
             # After gather_dep receives the data, the task is forgotten
-            ("receive-dep", a.address, {fut1.key}),
-            (fut1.key, "release-key"),
-            (fut1.key, "cancelled", "released", "released", {fut1.key: "forgotten"}),
-            (fut1.key, "released", "forgotten", "forgotten", {}),
+            (fut1.key, "cancelled", "memory", "released", {fut1.key: "forgotten"}),
         ],
     )
diff --git a/distributed/worker.py b/distributed/worker.py
index cc2ea229..3f6319fa 100644
--- a/distributed/worker.py
+++ b/distributed/worker.py
@@ -3333,9 +3333,7 @@ class Worker(ServerNode):
         for d in self.in_flight_workers.pop(worker):
             ts = self.tasks[d]
             ts.done = True
-            if ts.state == "cancelled":
-                recommendations[ts] = "released"
-            elif d in data:
+            if d in data:
                 recommendations[ts] = ("memory", data[d])
             elif busy:
                 recommendations[ts] = "fetch"
Very happy to apply the patch if it doesn't deadlock elsewhere 😛
To clarify: if a asks b for x, but b either responds that it doesn't have a replica or doesn't respond at all, and in the meantime the scheduler cancels x on a, this will trigger a cancelled->fetch transition. Is this the desired behaviour?
"Desired behavior" is probably a bit much. It will do the right thing, because we'll have

ts.done = True
cancelled -> fetch

which will recommend a cancelled->released, so we're good.
I fully admit that the ts.done attribute in this case is very awkward. It basically encodes that this fetch transition originates from either the gather_dep result or the execute result. Therefore, we could just as well remove the ts.done attribute and deal with these transitions in the gather_dep/execute result the way you are proposing in this PR. When I introduced this (many months ago) I felt it would reduce code complexity (as in having fewer conditionals).
Given that we still have the ts.done attribute, I believe the patch I am proposing is the more idiomatic way, but I'm happy to revisit this in a later iteration.
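For readers following along, a rough sketch of the ts.done gate described above (the handler name mirrors the worker's transition naming scheme, but the body is illustrative rather than a copy of the real code):

def transition_cancelled_fetch(self, ts, *, stimulus_id):
    if ts.done:
        # gather_dep/execute already returned for this task, so rather
        # than re-fetching, recommend releasing the cancelled task.
        return {ts: "released"}, []
    # Otherwise the coroutine is still in flight; let it finish and
    # resume the previous state.
    ts.state = ts._previous
    return {}, []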
I would also be in favor of removing ts.done eventually and having the logic in gather_dep and execute like here. Or maybe it's just a naming issue: ts.done is a pretty generic/ambiguous term. But I think from a #5736 perspective, having this extra piece of state (done) that affects the behavior of transitions makes things harder to reason about. Though I do appreciate that it protects you from forgetting about these edge cases and having to check whether ts.state == "cancelled".
Maybe this is over the top, but what if done was a state? Call it fetched and executed, since they might need different logic and I don't like overlapping the states of execution vs fetching anyway. Then you'd have different transition handlers for flight->fetched vs cancelled->fetched. Forgetting to handle the cancelled possibility would be an impossible-transition error, instead of a bug and maybe a deadlock.
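A toy sketch of the "done as a state" idea (hypothetical names such as fetched and InvalidTransition; this is not distributed's actual transition table):

class InvalidTransition(Exception):
    pass

# Transition handlers keyed on (start, finish) pairs, so a missing
# ("cancelled", "fetched") entry fails fast instead of silently
# misbehaving or deadlocking.
TRANSITIONS = {
    ("flight", "fetched"): lambda ts: {ts: "memory"},
    # ("cancelled", "fetched") deliberately absent
}

def transition(ts, finish):
    try:
        handler = TRANSITIONS[(ts.state, finish)]
    except KeyError:
        raise InvalidTransition(f"{ts.key}: {ts.state} -> {finish}")
    return handler(ts)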
I don't think "fetched" or "executed" is a good idea - I'd rather look into moving away from intermediate states, not adding more.
distributed/worker.py
Outdated
recommendations[ts] = "released" | ||
else: | ||
recommendations[ts] = "fetch" | ||
if ts.state == "cancelled": |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I agree that we should remove this optimization if possible. It's not worth it and it didn't feel great to introduce it in the first place.
By now, I trust tests around these edge cases enough that if all is green after removal, we're good to go
Simplify handling of cancelled state
I changed the title of the PR to have a better changelog, since this is not just simplifying things but also a behavioral change.
Failing tests are known offenders.
await wait_for_state("x", "flight", a)
a.update_data({"x": 3})
If x is already in flight, how could the data end up in memory without us doing it explicitly like in this unit test? acquire_replica and "fetch_dependency" should not fetch this key a second time.
From my reading of the code, the only way this could happen is via Client.scatter. I would argue a user should not be allowed to scatter a key that is already known to the cluster to be computed.
I don't want to block this PR for this test, but if the above outline is how we end up in this situation, I think we should prohibit scattering such keys and shrink the space of possible/allowed transitions.
Specifically, I'm inclined to say a.update_data({"x": 3}) should raise an exception if x is in flight.
Thoughts?
Specifically, I'm inclined to say a.update_data({"x": 3}) should raise an exception if x is in flight.

That might translate to something like

def transition_flight_memory(...):
    if not ts.done:
        raise ImpossibleTransition(
            "A nice exception that tells us that we cannot move data to "
            "memory while in flight but coro/task still running"
        )
(Where the exception is supposed to be raised is not the point of my argument. It may not be feasible to raise in the transition itself, idk)
The more I think about this, the stronger I feel about it, because these kinds of race conditions are part of why I introduced cancelled/resumed: to avoid us needing to deal with these transitions.
If the fetch task were to finish successfully, this would cause a memory->memory transition. Since this is not allowed/possible, this would instead cause:
1. memory->released (possibly the released transition would cancel some follow-up tasks)
2. released->memory
or as a concrete story
[
    (ts.key, "flight", "memory", "memory", {dependent: "executing"}),
    (dependent.key, "waiting", "executing", "executing", {}),
    # A bit later, after gather_dep returns
    (ts.key, "memory", "memory", "released", {dependent: "released"}),
    (dependent.key, "executing", "released", "cancelled", {}),
    (ts.key, "released", "memory", "memory", {dependent: "waiting"}),
    (dependent.key, "cancelled", "waiting", "executing", {}),
]
Writing down the expected story made me realize that our transition flow should heal us here, but we'd be performing a lot of unnecessary transitions that could expose us to problems.
I think that 80% of the problem is caused by the non-sequentiality of RPC calls vs. bulk comms:
- client scatters to a
- the scheduler does not know about the scattered keys until the three-way round-trip between client, workers, and scheduler has been completed:

distributed/distributed/scheduler.py, lines 5018 to 5022 in fb3589c:

keys, who_has, nbytes = await scatter_to_workers(
    nthreads, data, rpc=self.rpc, report=False
)
self.update_data(who_has=who_has, nbytes=nbytes, client=client)

- in the middle of that handshake, a client (not necessarily the same client) calls compute on b, and then gather_dep copies the key from b to a
- while the flight from b to a is in progress, the scatter finishes, which triggers update_data as shown in the test.

The only way to avoid this would be to fundamentally rewrite the scatter implementation. Which, for the record, I think is long overdue.
I'll explain the above in a comment in the test
My point is only partially about the technical correctness of race conditions; it's also about whether this is even a sane operation. How can a user know the value of x if x is supposed to be computed on the cluster?
I think this is good to go. The question about the test case is something that should inform a possible follow-up and should not block this PR imo.
ensure_communicating transitions to new WorkerState event mechanism #5896