
Experimentally expose the rust Scheduler to v1 Tasks and test the integration #5100

Closed

stuhood opened this issue Nov 15, 2017 · 5 comments

stuhood (Member) commented Nov 15, 2017

The goal of this task is to experimentally expose the rust Scheduler to v1 Tasks so that they can directly/synchronously:

  1. execute product requests (which can trigger things like the process execution from #4397, "Port IsolatedProcess implementation to rust, and align with bazel remoting API")
  2. call into the Snapshot APIs that will be implemented by #4585, "Library and CLI tool for manipulating snapshots"

Capturing Snapshots and executing isolated processes by poking the scheduler directly is sufficient for writing integration tests, and for allowing compile and test to be ported to remote process execution.

But this will be an experimental/private API, because it will not be possible to use it to cache (in memory in the rust Graph) any of these executions, since by the time v1 Tasks are running, we have already forked. #4769 covers the design for a longer term, cacheable integration between v1 Tasks and v2 products.
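
To make the shape of that integration concrete, here is a minimal sketch of what poking the scheduler directly from a v1 Task or an integration test might look like. The import path, the ExecuteProcessRequest constructor shape, and the product_request signature are assumptions drawn from the issues referenced above, not a confirmed API.

import os

# Hypothetical sketch only: the import path, constructor shape, and the
# product_request signature are assumptions, not the API this issue exposes.
from pants.engine.isolated_process import ExecuteProcessRequest, ExecuteProcessResult

def run_tool_synchronously(scheduler, args):
  request = ExecuteProcessRequest(tuple(args), dict(os.environ))
  # Block the v1 Task until the engine has produced the single requested product.
  (result,) = scheduler.product_request(ExecuteProcessResult, [request])
  return result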

stuhood (Member, Author) commented Nov 15, 2017

@illicitonion has done a lot to prepare for this on this branch: master...twitter:dwagnerhall/cloc. It seems likely that picking that up and running with it would be the right approach.

stuhood (Member, Author) commented Nov 15, 2017

@illicitonion, @ity: there seems to be some negotiation needed between the two of you with regard to #4397 and this ticket. The branch you posted implements parts of this issue and parts of #4397.

ity (Contributor) commented Nov 15, 2017

@stuhood Sounds good, and thanks for creating this ticket. I will coordinate with @illicitonion as soon as I get started on this.

stuhood (Member, Author) commented Dec 25, 2017

From @illicitonion:

Maybe we could expose a synchronous_product_request method, so this would look like:

root_entries = self.context.synchronous_product_request([ExecuteProcessResult], [ExecuteProcessRequest(cmd, dict(os.environ))])

and which would throw if the ExecuteProcessResult is an error, nicely hiding the awkward details of the FFI API. This is probably something we should do in this PR, because exposing private symbols is sketchy.
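
A minimal sketch of what that wrapper could look like, assuming the scheduler already exposes a product_request call and that failed roots carry an error attribute (both assumptions, not the actual engine API):

# Hedged sketch of the proposed wrapper; the underlying self._scheduler call
# and the error-carrying result shape are assumptions, not the real engine API.
def synchronous_product_request(self, products, subjects):
  roots = self._scheduler.product_request(products[0], subjects)
  for root in roots:
    # Raise a normal exception instead of handing FFI-level error values back
    # to the calling Task.
    if getattr(root, 'error', None) is not None:
      raise Exception('Synchronous product request failed: {}'.format(root.error))
  return roots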

I'd also be tempted to give Context an execute_subprocess method which looks something like:

def execute_subprocess(self, execute_process_request):
  if self._scheduler:
    # Run via the engine when a scheduler is available.
    return self.synchronous_product_request([ExecuteProcessResult], [execute_process_request])
  else:
    # Fall back to a local subprocess, wrapped in a workunit for reporting.
    with self.new_workunit(
      cmd=' '.join(execute_process_request.args),
      ...
    ):
      return ExecuteProcessResult.from_subprocess_result(
        subprocess.check_call(execute_process_request.args, ...))

because this is going to be a common thing, but I could see us wanting to hold off on that until we have a second caller.
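
For illustration, a call site might then look something like the sketch below; the exit_code and stderr fields on the result are assumptions about ExecuteProcessResult, not confirmed attributes.

import os

# Hypothetical call site inside a v1 Task's execute(): the Task no longer
# cares whether the engine or a plain subprocess actually ran the command.
request = ExecuteProcessRequest(('/usr/bin/cloc', '--quiet', 'src/'), dict(os.environ))
result = self.context.execute_subprocess(request)
if result.exit_code != 0:
  raise Exception('cloc failed: {}'.format(result.stderr))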

stuhood (Member, Author) commented Apr 23, 2018

All of the bits and pieces of this are now implemented. The remainder is covered by #5710.

stuhood closed this as completed Apr 23, 2018