Example of a job with tasks that run other jobs #22
Conversation
I encountered multiple errors while testing this approach. It appears that variable substitutions do not work with this level of nesting. Do you have any suggestions or workarounds to fix this issue?

Traceback:
❯ databricks bundle deploy
Building default...
Uploading sample-1.80.2-py3-none-any.whl...
Uploading bundle files to /Users/myuser/.bundle/sample_core/local/files...
Deploying resources...
Updating deployment state...
Error: terraform apply: exit status 1
Error: cannot create job: Missing required field: settings.tasks.task_key.
with databricks_job.sample_mlops_main_stage,
on bundle.tf.json line 56, in resource.databricks_job.sample_mlops_main_stage:
56: },
Error: cannot update job: Missing required field: new_settings.webhook_notifications.on_failure.id
with databricks_job.sample_mlops_stage_1__models_training_and_serving,
on bundle.tf.json line 1362, in resource.databricks_job.sample_mlops_stage_1__models_training_and_serving:
1362: },

Config files:
...
variables:
webhook_notifications_id:
description: The ID of the webhook notification to use for the job.
default: 'xzy-xyz-uxz'
training_and_serving_job1_job_id:
default: ''
...
resources:
jobs:
sample_mlops_stage_1__models_training_and_serving:
name: 'sample_mlops_stage_1__models_training_and_serving'
tasks:
- task_key: job1_id
libraries:
- whl: ../dist/*.whl
run_if: ALL_SUCCESS
email_notifications: {}
run_job_task:
job_id: ${var.job1_id}
...
webhook_notifications:
on_failure:
- id: ${var.webhook_notifications_id}
@cristian-rincon Which version of the CLI are you using? Support for this was released in v0.214.0.
I'm using Databricks CLI v0.213.0, I will test with v0.214.0.
@pietern

Traceback:
❯ databricks bundle deploy
Building default...
Uploading sample-1.80.2-py3-none-any.whl...
Uploading bundle files to /Users/myuser/.bundle/sample_core/local/files...
Deploying resources...
Updating deployment state...
Error: terraform apply: exit status 1
Error: Missing required argument
on bundle.tf.json line 51, in resource.databricks_job.sample_mlops_main_stage.task[0].run_job_task:
51: "run_job_task": {},
The argument "job_id" is required, but no definition was found.
resources:
jobs:
sample_mlops_main_stage:
name: 'sample_mlops_main_stage'
tasks:
- task_key: abt
run_job_task:
# job_id: 234902340127014
job_id: ${resources.jobs.sample_mlops_stage_0__job1.id}
email_notifications: {}
libraries:
- whl: ../dist/*.whl
run_if: ALL_SUCCESS
Could you post this as an issue on the CLI repository, and include the relevant bits of your configuration? The YAML looks valid if it is used verbatim. Note that a ...
@pietern I changed the job name by removing the double underscore, and it fixed the issue.
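For anyone hitting the same error, a minimal sketch of what the corrected reference can look like, assuming the renamed resource is the child job that run_job_task points at (the key sample_mlops_stage_0_job1 is illustrative):

resources:
  jobs:
    sample_mlops_main_stage:
      name: 'sample_mlops_main_stage'
      tasks:
        - task_key: abt
          run_job_task:
            # The reference must match the child job's resource key, here without the double underscore
            job_id: ${resources.jobs.sample_mlops_stage_0_job1.id}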
Force-pushed from a3b6216 to 9e8c82e.
Note: this requires databricks/cli#1219.
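For readers skimming the thread, a minimal sketch of the parent/child pattern this example demonstrates, assuming both jobs are declared in the same bundle (all keys, names, and paths below are illustrative, not the exact files in this PR):

resources:
  jobs:
    child_job:
      name: 'child_job'
      tasks:
        - task_key: child_task
          python_wheel_task:
            package_name: sample
            entry_point: main
          libraries:
            - whl: ../dist/*.whl
    parent_job:
      name: 'parent_job'
      tasks:
        - task_key: run_child
          run_job_task:
            # Resolved to the child job's numeric job ID at deploy time
            job_id: ${resources.jobs.child_job.id}

With this shape the parent task triggers the child job instead of duplicating its tasks inline, which is what run_job_task is for.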