
Handle global references and remove localize_vars while serializing to other workers. #19000

Closed
amitmurthy opened this issue Oct 18, 2016 · 3 comments
Labels
domain:parallelism Parallel or distributed computation

Comments

@amitmurthy
Contributor

Copying @JeffBezanson's comment from another issue, #15451 (comment):

"I think the right solution is to look through the IR for the function when it's serialized, and send over all its dependencies. This could also potentially let us remove the localize_vars hack. We can also keep track of which items from Main we've sent to which processors and avoid re-sending, which would be a helpful optimization."

This will help remove a major source of confusion around serialized closures that capture global variables as opposed to local ones.
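
To make the confusion concrete, here is a minimal sketch (hypothetical, not from the issue), assuming a Julia version from before the fix tracked here, where globals referenced by a serialized closure are not shipped automatically. Variable and function names are illustrative.

```julia
# Distributed is a standard library in Julia 1.x; at the time of this issue
# these functions lived in Base.
using Distributed
addprocs(1)                   # the new worker gets id 2

x = 10                        # a global binding in Main on the master
capture_global = () -> x + 1  # refers to Main.x by name; x itself is not serialized

capture_local = let y = 10    # y is a captured local, stored inside the closure
    () -> y + 1
end

remotecall_fetch(capture_local, 2)   # returns 11: y's value travels with the closure
remotecall_fetch(capture_global, 2)  # UndefVarError: x not defined on worker 2,
                                     # unless it was set there, e.g. @everywhere x = 10
```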

amitmurthy added the domain:parallelism Parallel or distributed computation label Oct 18, 2016
@amitmurthy
Contributor Author

We can also keep track of which items from Main we've sent to which processors and avoid re-sending

Some thoughts/questions:

  • Doing this should do away with the need for @everywhere for function definitions, since they would be sent automatically from the master (see the sketch after this list).
  • Should we limit automatic sending to calls made from the master only, with worker-to-worker calls assuming that the definitions already exist on the target process?
  • Should modules also be loaded automatically on the target worker if referenced during deserialization? This would require the deserializer to read the complete current message (possible now that we have message boundaries), pause deserialization, load the missing module over the same stream, and then continue deserializing from the temporary buffer. It would do away with the need for @everywhere using: required modules would be loaded on demand.
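
A hypothetical before/after sketch of the first bullet (names are illustrative, not from the issue): today a named helper must be defined on every worker with @everywhere before a parallel call can use it; under the proposal, the serializer would find the reference in the closure's IR and ship the definition itself, once per worker.

```julia
using Distributed
addprocs(2)

# Today: the named helper must be defined on every worker up front,
# otherwise the pmap below fails with a "helper not defined" error there.
@everywhere helper(x) = 2x

pmap(i -> helper(i) + 1, 1:4)    # the anonymous closure is serialized, but
                                 # `helper` is referenced by name only

# Under the proposal, the @everywhere line would become unnecessary for calls
# made from the master: serialization would notice the reference to `helper`
# in the closure's IR, send its definition along, and remember which workers
# have already received it to avoid re-sending.
```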

@bdeonovic
Contributor

Is there a time frame on this fix?

@amitmurthy
Contributor Author

Closed by #19594
