Send less data to remote transient workers #434

Closed
wlandau opened this issue Jun 28, 2018 · 5 comments

Comments

@wlandau (Member) commented Jun 28, 2018

In make(parallelism = "future") (future.R), each transient worker only needs the imports and the direct dependencies of its target. The other targets do not need to be sent.
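
To make the idea concrete, here is a rough sketch of the scenario (the plan, targets, and summarize_data() below are hypothetical illustrations, not code from drake itself):

```r
library(drake)
library(future)
future::plan(multisession)  # transient workers are launched through the future package

# Hypothetical plan: "summaries" depends only on the target "data" plus the
# imported function summarize_data(), so a transient worker building
# "summaries" should only need those objects, not every other target.
summarize_data <- function(x) summary(x)

my_plan <- drake_plan(
  data = mtcars,
  summaries = summarize_data(data),
  unrelated = rnorm(1e6)  # should not be shipped to the "summaries" worker
)

make(my_plan, parallelism = "future")
```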

@kendonB (Contributor) commented Jun 29, 2018

Doesn't future handle this automatically for elements in the global environment? And for other items, shouldn't the workers just read from the .drake folder themselves?
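
For context, "read from the .drake folder" refers to drake's on-disk cache; a minimal sketch of that API, reusing the hypothetical target name from the plan above:

```r
library(drake)

# On a worker, dependencies could in principle be pulled from the on-disk
# cache (the hidden .drake/ folder) instead of being serialized and shipped:
loadd(data)        # load the target "data" from the cache into the session
x <- readd(data)   # or read it into a local variable without attaching it
```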

@wlandau (Member, Author) commented Jun 29, 2018

Yes, future uses the globals package to detect objects from the global environment, but drake uses its own dependency detection technique, and the two can disagree.
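
Roughly, the two detection mechanisms compare as follows (a sketch only; drake's actual static analysis is more involved, and the helper that reports drake's own view is deps() or deps_code() depending on the drake version):

```r
library(globals)
library(codetools)

# What future (via the globals package) would detect for a command:
cmd <- quote(summarize_data(data))
globals::findGlobals(cmd)  # returns the names of detected globals

# drake runs its own static code analysis of each command and import
# (built on tools such as codetools), so its notion of a target's
# dependencies can differ from what the globals package reports:
f <- function() summarize_data(data)
codetools::findGlobals(f)
```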

@wlandau (Member, Author) commented Jun 29, 2018

And because of NFS file latency issues, I would rather not have transient workers read from the .drake folder before building their targets. I think we can find a win-win here.

@wlandau (Member, Author) commented Jun 30, 2018

Like many things, this will be much easier after #440.

@wlandau (Member, Author) commented Jul 3, 2018

On second thought, the environment is pruned before the globals are chosen, so any affected users can just use make(pruning_strategy = "memory").
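
In other words, the workaround looks roughly like this (pruning_strategy was the argument name at the time; later drake releases appear to have renamed it to memory_strategy, and my_plan is the hypothetical plan from the sketch above):

```r
library(drake)

# Prune the master session's environment down to each target's dependencies
# before the globals are collected for the transient worker:
make(my_plan, parallelism = "future", pruning_strategy = "memory")
```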

wlandau closed this as completed Jul 3, 2018