Make it easy for people to write pipeline tests in python #203
Right now it's possible to auto-submit a pipeline when working from a hosted JupyterHub (use the "Notebooks" link in the Pipelines UI). See the end of the Lightweight components notebook for sample code. I've created an issue to enable submitting pipelines from a local machine: #206
So just to summarize, we'd like to be able to launch a pipeline like so, from the test deployment:

    import kfp

    client = kfp.Client()
    experiment = client.create_experiment(experiment_name)
    run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)

Taking a closer look at kfp.Client here:

    class Client(object):
        def __init__(self, namespace='kubeflow'):
            ...
            host = 'ml-pipeline.' + namespace + '.svc.cluster.local:8888'
            config = kfp_run.configuration.Configuration()
            config.host = host
            api_client = kfp_run.api_client.ApiClient(config)
            self._run_api = kfp_run.api.run_service_api.RunServiceApi(api_client)

        def run_pipeline(self, experiment_id, job_name, pipeline_package_path, params={}):
            ...
            response = self._run_api.create_run(body=run_body)

it looks like this would work from any node in the cluster with access to 'ml-pipeline.' + namespace + '.svc.cluster.local:8888', correct? Would that work as long as it was run from the same namespace where KF Pipelines is deployed? Or is there additional complexity in talking to that endpoint?
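For what it's worth, here is a minimal sketch of how a test might reach that endpoint, under two assumptions not confirmed in this thread: that any pod able to resolve cluster DNS can hit the fully-qualified service name (so the caller's namespace doesn't need to match), and that kfp.Client accepts a host argument so a port-forward works from outside the cluster. The host value, namespace, and experiment name below are illustrative:

```python
# Hypothetical sketch: reaching the ml-pipeline API from outside the cluster.
# From a local machine, one option is to port-forward the service first, e.g.:
#   kubectl port-forward -n kubeflow svc/ml-pipeline 8888:8888
# and then point the client at localhost.

import kfp

# Assumes a kfp.Client that accepts an explicit host instead of only a namespace.
client = kfp.Client(host='http://localhost:8888')
experiment = client.create_experiment('connectivity-smoke-test')  # illustrative name
print(experiment.id)
```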
Resolving in favor of #360
When using kubeflow/pipelines, as in kubeflow/examples#322, someone might define a new container op like so:
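(The original snippet was not captured here; the following is an illustrative reconstruction of what such a container op might look like using the kfp.dsl.ContainerOp API. The pipeline name, image, command, and output path are hypothetical, not the actual op from kubeflow/examples#322.)

```python
import kfp.dsl as dsl

@dsl.pipeline(name='my-pipeline', description='Hypothetical example pipeline.')
def my_pipeline(gcs_path='gs://my-bucket/data'):
    # Hypothetical container op: the image, command, arguments, and
    # file_outputs below are placeholders for illustration only.
    train_op = dsl.ContainerOp(
        name='train',
        image='gcr.io/my-project/train:latest',
        command=['python', '/app/train.py'],
        arguments=['--input', gcs_path],
        file_outputs={'model': '/output/model.txt'},
    )
```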
It would be nice if kubeflow/pipelines made it simple to write a test for this. Currently my understanding is that in order to trigger a pipeline run you have to manually upload the compiled .tar.gz, correct? If it were also possible to programmatically trigger a pipeline run, poll for completion, and then check the resulting status, any codebase using pipelines could be made more robust with these sorts of tests. It would also broaden the relevance of pipelines as a convenient means of building test workflows.
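As a concrete illustration, such a test might look roughly like the sketch below. It assumes the compiled package is available locally, that the client can be pointed at the API (e.g. via a port-forward), and that it exposes a polling helper such as wait_for_run_completion; the host, paths, names, parameter keys, and timeout are all illustrative:

```python
import kfp

def test_my_pipeline_runs_to_completion():
    # Hypothetical test: host, experiment name, and package path are placeholders.
    client = kfp.Client(host='http://localhost:8888')
    experiment = client.create_experiment('ci-tests')

    # Programmatically trigger a run of the compiled pipeline package.
    run = client.run_pipeline(
        experiment.id,
        job_name='my-pipeline-smoke-test',
        pipeline_package_path='my_pipeline.tar.gz',
        params={'gcs_path': 'gs://my-bucket/data'},  # keys must match the pipeline's parameter names
    )

    # Poll until the run finishes (assumes a wait_for_run_completion helper),
    # then assert on the final status reported by the API.
    result = client.wait_for_run_completion(run.id, timeout=1800)
    assert result.run.status == 'Succeeded'
```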
/cc @jlewi @texasmichelle