[FEATURE] Support for task orchestration #747
Comments
Hi @KoenR3, do you actively use it currently? Let's wait a bit until more votes are added to this issue.
Hey @nfx, yes, we actively use it, but in our production environment we have not yet switched to task orchestration so that we can keep using Terraform (we currently use an external Airflow scheduler for the complex scheduling). We have approximately 15 users on the workspace, with about 30 jobs provisioned through Terraform, and we expect to onboard more in the coming year. As I said, it is not an urgent feature. I just want to put it on the roadmap, because when the feature becomes GA, the switch between APIs might break the current implementation.
Similar use case as above. Too bad there isn't backward compatibility with the API. But no one likes throwing vX in API paths.
Ok, turns out I was wrong about the API. I've activated pipelines in our dev env and Terraform still allows deploying jobs.
Our organization would also like to see support for the newly released multi-task jobs, in public preview since the end of July. https://docs.databricks.com/data-engineering/jobs/index.html
We at Rivian would also love to see the task orchestration feature supported via Terraform. This would help us define dependency DAGs for our jobs and handle retries, etc.
* provider has to be initialized with `use_multitask_jobs = true`
* `task` block of `databricks_job` is currently a slice, so adding and removing different tasks might cause confusing, but still correct, diffs
* we may explore `tf:slice_set` mechanics for `task` blocks, though initial testing turned out to be harder than expected
* `always_running` parameter still has to be tested for API 2.1 compatibility

This implements feature #747
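Under those caveats, a minimal configuration might look like the sketch below. The resource name, cluster id, and notebook paths are hypothetical; only the `use_multitask_jobs` provider flag and the `task`/`depends_on` block shapes come from the PR description above.

```hcl
provider "databricks" {
  # opt in to the Jobs API 2.1 behaviour described above
  use_multitask_jobs = true
}

resource "databricks_job" "this" {
  name = "multi-task-example"

  task {
    task_key            = "ingest"
    existing_cluster_id = "1234-567890-abcde123"
    notebook_task {
      notebook_path = "/Jobs/ingest"
    }
  }

  task {
    task_key            = "transform"
    existing_cluster_id = "1234-567890-abcde123"
    # because task blocks are a slice, reordering them shows up
    # in the plan as a diff, even when the result is the same
    depends_on {
      task_key = "ingest"
    }
    notebook_task {
      notebook_path = "/Jobs/transform"
    }
  }
}
```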
Support will be added in v0.3.9.
Thanks for the release, it's working like a charm! For future readers of the repo facing the same requirements as mine: while it's not specified in the current docs, you can create as many dependencies as you need by adding more `depends_on {}` blocks. You can declare only one dependency per block, however; trying to declare multiple `task_key` entries, or to pass an array of strings instead of a string, won't work.
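The fan-in pattern described above can be sketched as follows (task keys and paths are hypothetical): a task depending on two upstream tasks gets two separate `depends_on` blocks.

```hcl
task {
  task_key            = "report"
  existing_cluster_id = "1234-567890-abcde123"

  # one depends_on block per upstream task; a single block with
  # several task_keys, or a list of strings, is rejected
  depends_on {
    task_key = "ingest"
  }
  depends_on {
    task_key = "transform"
  }

  notebook_task {
    notebook_path = "/Jobs/report"
  }
}
```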
Hey @dugernierg
@ravulachetan it should be enabled on all new workspaces. Otherwise you can enable it through the UI in workspace settings, or check with your Databricks representative about the undocumented property for
@dugernierg, thanks for the quick response.
Activating job orchestration requires using the new Jobs API.
https://docs.databricks.com/data-engineering/jobs/jobs-api-updates.html
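For context, a minimal multi-task payload for the updated `POST /api/2.1/jobs/create` endpoint might look like the sketch below; the job name, task keys, notebook paths, and cluster id are hypothetical. This task-list shape is what the provider would need to emit instead of the single-task 2.0 format.

```json
{
  "name": "etl-pipeline",
  "tasks": [
    {
      "task_key": "ingest",
      "existing_cluster_id": "1234-567890-abcde123",
      "notebook_task": { "notebook_path": "/Jobs/ingest" }
    },
    {
      "task_key": "transform",
      "depends_on": [ { "task_key": "ingest" } ],
      "existing_cluster_id": "1234-567890-abcde123",
      "notebook_task": { "notebook_path": "/Jobs/transform" }
    }
  ]
}
```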
Adding this as a feature request for an upcoming release.