add support for mpi4py #190
Conversation
e4f3c1e to 45d6107
adaptive/runner.py
Outdated
```diff
@@ -693,6 +700,8 @@ def _get_ncores(ex):
         return 1
     elif with_distributed and isinstance(ex, distributed.cfexecutor.ClientExecutor):
         return sum(n for n in ex._client.ncores().values())
+    elif with_mpi4py and isinstance(ex, mpi4py.futures.MPIPoolExecutor):
+        return mpi4py.MPI.COMM_WORLD.size - 1
```
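The `COMM_WORLD.size - 1` reflects that one MPI rank acts as the master that schedules work and so cannot run tasks itself. As a hypothetical, simplified sketch of this kind of dispatch (`get_ncores` below is a stand-in, not adaptive's actual `_get_ncores`; only stdlib executors are handled, with the mpi4py branch mirrored in a comment since it needs an MPI environment):

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def get_ncores(ex):
    """Return how many cores an executor can use (simplified sketch)."""
    if isinstance(ex, (ProcessPoolExecutor, ThreadPoolExecutor)):
        return ex._max_workers  # not public API, as noted in the review
    # For mpi4py.futures.MPIPoolExecutor the PR returns
    # mpi4py.MPI.COMM_WORLD.size - 1: one rank is the master
    # process, so it does not execute tasks itself.
    return 1

ex = ThreadPoolExecutor(max_workers=4)
print(get_ncores(ex))  # 4
ex.shutdown()
```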
```python
ex.bootup()  # wait until all workers are up and running
return ex._pool.size  # not public API!
```
That's better. Does `ex._pool.size` work before all the workers are up and running? Because Adaptive can handle scaling of the pool size.
Does this "just work"? Aren't there some extra bits needed for launching workers? We should probably document this somehow...
@jbweston I'll add some more details to the docs later. In a nutshell, it works when calling your Python script like:
or in a SLURM job
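The commands themselves were dropped from the quoted comment above. As a hedged sketch of what such invocations typically look like (the script name `run_learner.py` is a placeholder, the `-n 16` rank count is arbitrary, and SLURM flags vary per cluster; the `python -m mpi4py.futures` launch pattern follows the mpi4py.futures documentation):

```shell
# Interactively, with an MPI installation available:
mpiexec -n 16 python -m mpi4py.futures run_learner.py

# Inside a SLURM batch script (exact flags depend on your cluster):
srun -n 16 python -m mpi4py.futures run_learner.py
```

With 16 ranks, one acts as the master, so 15 actually execute tasks.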
On your desktop or laptop, it can also work like this:
Or you can pass `max_workers=15`. In this case, the 15 workers will be MPI-spawned at runtime. I consider this the preferred way of using it.
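As a hedged sketch of that dynamic-spawning variant (the learner and goal below are placeholders, and this needs a working MPI installation, so it is not runnable as-is):

```python
from mpi4py.futures import MPIPoolExecutor
import adaptive

# Placeholder learner; any adaptive learner works here.
learner = adaptive.Learner1D(lambda x: x**2, bounds=(-1, 1))

# The 15 workers are MPI-spawned at runtime (no mpiexec needed
# on the command line in this mode).
ex = MPIPoolExecutor(max_workers=15)

runner = adaptive.Runner(learner, executor=ex, goal=lambda l: l.npoints > 100)
```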
You are hurting my feelings 😉
LGTM