Use cluster.scheduler_info rather than cluster.scheduler #73
Conversation
Supersedes dask#72. This depends on dask/distributed#2902, which adds a `Cluster.scheduler_info` attribute to clusters that holds the necessary scheduler information. We prefer this over querying a `Scheduler` object directly in case that scheduler is not local, as is increasingly becoming the case.
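As a minimal sketch of the approach described above (the `summarize` helper is hypothetical, not from the extension; `scheduler_info` is the plain dict attribute added in dask/distributed#2902):

```python
# Hypothetical helper: summarize worker metadata from the scheduler_info
# dict rather than from a live Scheduler object, which may be remote.
def summarize(info):
    workers = info["workers"]
    return {
        "workers": len(workers),
        "cores": sum(d["nthreads"] for d in workers.values()),
    }

# Usage: summarize(cluster.scheduler_info)
```

Because `scheduler_info` is a plain dict, this works the same whether the scheduler runs in-process or on a remote machine.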
Otherwise we didn't seem to be getting the baseline config, which was causing errors.
OK, I think that this is good to go. It should be safe both for old and new releases. |
dask_labextension/manager.py (Outdated)
            ),
-           cores=sum(ws.ncores for ws in cluster.scheduler.workers.values()),
+           cores=sum(d["nthreads"] for d in info["workers"].values()),
What is the intended compatibility profile for this? I am getting failures here due to `nthreads` not being available in my workers with dask v1.2 and distributed v1.28.
2.0+, but it's fairly easy to handle this so that it works further back. I'll push up a small fix.
👍 I'd be in favor of maintaining 1.0 compatibility, if it's a small check.
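Such a compatibility check could look like the following sketch (the `worker_cores` helper is an assumption, not the fix that was actually pushed; `ncores` is the pre-2.0 field name, `nthreads` the 2.0+ one):

```python
def worker_cores(info):
    """Sum cores across workers, tolerating both key spellings.

    distributed >= 2.0 reports "nthreads" per worker in scheduler_info,
    while 1.x releases used "ncores"; fall back so old clusters keep working.
    """
    return sum(
        d.get("nthreads", d.get("ncores", 0))
        for d in info["workers"].values()
    )
```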
Looks good to me
@ian-r-rose is it safe for me to push out a micro release on the Python side?
Yeah, absolutely!
Had a tiny snafu, but eventually got there. Version 1.0.3 is on PyPI