Commit
Merge branch 'develop' into feature/vertexai-experiment-tracker
htahir1 authored Dec 25, 2024
2 parents 378fae8 + 2d8b354 commit 0e69570
Showing 144 changed files with 1,897 additions and 1,055 deletions.
63 changes: 63 additions & 0 deletions .gitbook.yaml
@@ -202,3 +202,66 @@ redirects:
docs/reference/how-do-i: reference/how-do-i.md
docs/reference/community-and-content: reference/community-and-content.md
docs/reference/faq: reference/faq.md

# The new Manage ZenML Server redirects
how-to/advanced-topics/manage-zenml-server/: how-to/manage-zenml-server/README.md
how-to/project-setup-and-management/connecting-to-zenml/: how-to/manage-zenml-server/connecting-to-zenml/README.md
how-to/project-setup-and-management/connecting-to-zenml/connect-in-with-your-user-interactive: how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md
how-to/project-setup-and-management/connecting-to-zenml/connect-with-a-service-account: how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md
how-to/advanced-topics/manage-zenml-server/upgrade-zenml-server: how-to/manage-zenml-server/upgrade-zenml-server.md
how-to/advanced-topics/manage-zenml-server/best-practices-upgrading-zenml: how-to/manage-zenml-server/best-practices-upgrading-zenml.md
how-to/advanced-topics/manage-zenml-server/using-zenml-server-in-prod: how-to/manage-zenml-server/using-zenml-server-in-prod.md
how-to/advanced-topics/manage-zenml-server/troubleshoot-your-deployed-server: how-to/manage-zenml-server/troubleshoot-your-deployed-server.md
how-to/advanced-topics/manage-zenml-server/migration-guide/migration-guide: how-to/manage-zenml-server/migration-guide/migration-guide.md
how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-twenty: how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md
how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-thirty: how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md
how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-forty: how-to/manage-zenml-server/migration-guide/migration-zero-forty.md
how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-sixty: how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md

how-to/project-setup-and-management/setting-up-a-project-repository/using-project-templates: how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md
how-to/project-setup-and-management/setting-up-a-project-repository/create-your-own-template: how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md
how-to/project-setup-and-management/setting-up-a-project-repository/shared-components-for-teams: how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md
how-to/project-setup-and-management/setting-up-a-project-repository/stacks-pipelines-models: how-to/project-setup-and-management/collaborate-with-team/stacks-pipelines-models.md
how-to/project-setup-and-management/setting-up-a-project-repository/access-management: how-to/project-setup-and-management/collaborate-with-team/access-management.md
how-to/interact-with-secrets: how-to/project-setup-and-management/interact-with-secrets.md

how-to/project-setup-and-management/develop-locally/: how-to/pipeline-development/develop-locally/README.md
how-to/project-setup-and-management/develop-locally/local-prod-pipeline-variants: how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md
how-to/project-setup-and-management/develop-locally/keep-your-dashboard-server-clean: how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md

how-to/advanced-topics/training-with-gpus/: how-to/pipeline-development/training-with-gpus/README.md
how-to/advanced-topics/training-with-gpus/accelerate-distributed-training: how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md

how-to/advanced-topics/run-remote-notebooks/: how-to/pipeline-development/run-remote-notebooks/README.md
how-to/advanced-topics/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells: how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md
how-to/advanced-topics/run-remote-notebooks/run-a-single-step-from-a-notebook: how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md

how-to/infrastructure-deployment/configure-python-environments/: how-to/pipeline-development/configure-python-environments/README.md
how-to/infrastructure-deployment/configure-python-environments/handling-dependencies: how-to/pipeline-development/configure-python-environments/handling-dependencies.md
how-to/infrastructure-deployment/configure-python-environments/configure-the-server-environment: how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md

how-to/infrastructure-deployment/customize-docker-builds/: how-to/customize-docker-builds/README.md
how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline: how-to/customize-docker-builds/docker-settings-on-a-pipeline.md
how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-step: how-to/customize-docker-builds/docker-settings-on-a-step.md
how-to/infrastructure-deployment/customize-docker-builds/use-a-prebuilt-image: how-to/customize-docker-builds/use-a-prebuilt-image.md
how-to/infrastructure-deployment/customize-docker-builds/specify-pip-dependencies-and-apt-packages: how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md
how-to/infrastructure-deployment/customize-docker-builds/how-to-use-a-private-pypi-repository: how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md
how-to/infrastructure-deployment/customize-docker-builds/use-your-own-docker-files: how-to/customize-docker-builds/use-your-own-docker-files.md
how-to/infrastructure-deployment/customize-docker-builds/which-files-are-built-into-the-image: how-to/customize-docker-builds/which-files-are-built-into-the-image.md
how-to/infrastructure-deployment/customize-docker-builds/how-to-reuse-builds: how-to/customize-docker-builds/how-to-reuse-builds.md
how-to/infrastructure-deployment/customize-docker-builds/define-where-an-image-is-built: how-to/customize-docker-builds/define-where-an-image-is-built.md

how-to/data-artifact-management/handle-data-artifacts/datasets: how-to/data-artifact-management/complex-usecases/datasets.md
how-to/data-artifact-management/handle-data-artifacts/manage-big-data: how-to/data-artifact-management/complex-usecases/manage-big-data.md
how-to/data-artifact-management/handle-data-artifacts/unmaterialized-artifacts: how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md
how-to/data-artifact-management/handle-data-artifacts/passing-artifacts-between-pipelines: how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md
how-to/data-artifact-management/handle-data-artifacts/registering-existing-data: how-to/data-artifact-management/complex-usecases/registering-existing-data.md

how-to/advanced-topics/control-logging/: how-to/control-logging/README.md
how-to/advanced-topics/control-logging/view-logs-on-the-dasbhoard: how-to/control-logging/view-logs-on-the-dasbhoard.md
how-to/advanced-topics/control-logging/enable-or-disable-logs-storing: how-to/control-logging/enable-or-disable-logs-storing.md
how-to/advanced-topics/control-logging/set-logging-verbosity: how-to/control-logging/set-logging-verbosity.md
how-to/advanced-topics/control-logging/disable-rich-traceback: how-to/control-logging/disable-rich-traceback.md
how-to/advanced-topics/control-logging/disable-colorful-logging: how-to/control-logging/disable-colorful-logging.md


2 changes: 1 addition & 1 deletion docs/book/component-guide/data-validators/deepchecks.md
@@ -78,7 +78,7 @@ RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 -y
```

-Then, place the following snippet above your pipeline definition. Note that the path of the `dockerfile` is relative to where the pipeline definition file is. Read [the containerization guide](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) for more details:
+Then, place the following snippet above your pipeline definition. Note that the path of the `dockerfile` is relative to where the pipeline definition file is. Read [the containerization guide](../../how-to/customize-docker-builds/README.md) for more details:

```python
import zenml
# ...
```
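The remainder of the snippet is collapsed in this view. A minimal sketch of what such a configuration typically looks like, assuming a Dockerfile named `deepchecks-zenml.Dockerfile` (both the filename and the pipeline name below are placeholders, not taken from this commit):

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Hypothetical Dockerfile name; the path is resolved relative to the file
# that defines the pipeline.
docker_settings = DockerSettings(dockerfile="deepchecks-zenml.Dockerfile")

@pipeline(settings={"docker": docker_settings})
def data_validation_pipeline():
    ...
```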
4 changes: 2 additions & 2 deletions docs/book/component-guide/experiment-trackers/mlflow.md
@@ -82,7 +82,7 @@ zenml stack register custom_stack -e mlflow_experiment_tracker ... --set
{% endtab %}

{% tab title="ZenML Secret (Recommended)" %}
-This method requires you to [configure a ZenML secret](../../how-to/interact-with-secrets.md) to store the MLflow tracking service credentials securely.
+This method requires you to [configure a ZenML secret](../../how-to/project-setup-and-management/interact-with-secrets.md) to store the MLflow tracking service credentials securely.

You can create the secret using the `zenml secret create` command:
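The `zenml secret create` invocation itself is collapsed in this view. As an illustrative sketch only (the secret name and keys below are assumptions), the same secret can also be created from Python via the ZenML client; the identical pattern applies to the Neptune and Weights & Biases trackers further down:

```python
from zenml.client import Client

# Hypothetical secret name and keys; the experiment tracker is then
# registered with references such as --tracking_username={{mlflow_secret.username}}.
Client().create_secret(
    name="mlflow_secret",
    values={"username": "admin", "password": "aSecurePassword"},
)
```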

Expand All @@ -106,7 +106,7 @@ zenml experiment-tracker register mlflow \
```

{% hint style="info" %}
-Read more about [ZenML Secrets](../../how-to/interact-with-secrets.md) in the ZenML documentation.
+Read more about [ZenML Secrets](../../how-to/project-setup-and-management/interact-with-secrets.md) in the ZenML documentation.
{% endhint %}
{% endtab %}
{% endtabs %}
4 changes: 2 additions & 2 deletions docs/book/component-guide/experiment-trackers/neptune.md
@@ -37,7 +37,7 @@ You need to configure the following credentials for authentication to Neptune:

{% tabs %}
{% tab title="ZenML Secret (Recommended)" %}
-This method requires you to [configure a ZenML secret](../../how-to/interact-with-secrets.md) to store the Neptune tracking service credentials securely.
+This method requires you to [configure a ZenML secret](../../how-to/project-setup-and-management/interact-with-secrets.md) to store the Neptune tracking service credentials securely.

You can create the secret using the `zenml secret create` command:

@@ -61,7 +61,7 @@ zenml stack register neptune_stack -e neptune_experiment_tracker ... --set
```

{% hint style="info" %}
-Read more about [ZenML Secrets](../../how-to/interact-with-secrets.md) in the ZenML documentation.
+Read more about [ZenML Secrets](../../how-to/project-setup-and-management/interact-with-secrets.md) in the ZenML documentation.
{% endhint %}

{% endtab %}
4 changes: 2 additions & 2 deletions docs/book/component-guide/experiment-trackers/wandb.md
@@ -55,7 +55,7 @@ zenml stack register custom_stack -e wandb_experiment_tracker ... --set
{% endtab %}

{% tab title="ZenML Secret (Recommended)" %}
-This method requires you to [configure a ZenML secret](../../how-to/interact-with-secrets.md) to store the Weights & Biases tracking service credentials securely.
+This method requires you to [configure a ZenML secret](../../how-to/project-setup-and-management/interact-with-secrets.md) to store the Weights & Biases tracking service credentials securely.

You can create the secret using the `zenml secret create` command:

@@ -79,7 +79,7 @@ zenml experiment-tracker register wandb_tracker \
```

{% hint style="info" %}
-Read more about [ZenML Secrets](../../how-to/interact-with-secrets.md) in the ZenML documentation.
+Read more about [ZenML Secrets](../../how-to/project-setup-and-management/interact-with-secrets.md) in the ZenML documentation.
{% endhint %}
{% endtab %}
{% endtabs %}
2 changes: 1 addition & 1 deletion docs/book/component-guide/image-builders/gcp.md
@@ -185,7 +185,7 @@ zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set
As described in this [Google Cloud Build documentation page](https://cloud.google.com/build/docs/build-config-file-schema#network), Google Cloud Build uses containers to execute the build steps. These containers are automatically attached to a network called `cloudbuild` that provides Application Default Credentials (ADC), allowing the container to authenticate and therefore use other GCP services.
-By default, the GCP Image Builder executes the build command of the ZenML Pipeline Docker image with the option `--network=cloudbuild`, so the ADC provided by the `cloudbuild` network can also be used in the build. This is useful if you want to install a private dependency from a GCP Artifact Registry, but you will also need to use a [custom base parent image](../../how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline.md) with the [`keyrings.google-artifactregistry-auth`](https://pypi.org/project/keyrings.google-artifactregistry-auth/) package installed, so `pip` can connect and authenticate to the private artifact registry to download the dependency.
+By default, the GCP Image Builder executes the build command of the ZenML Pipeline Docker image with the option `--network=cloudbuild`, so the ADC provided by the `cloudbuild` network can also be used in the build. This is useful if you want to install a private dependency from a GCP Artifact Registry, but you will also need to use a [custom base parent image](../../how-to/customize-docker-builds/docker-settings-on-a-pipeline.md) with the [`keyrings.google-artifactregistry-auth`](https://pypi.org/project/keyrings.google-artifactregistry-auth/) package installed, so `pip` can connect and authenticate to the private artifact registry to download the dependency.
```dockerfile
FROM zenmldocker/zenml:latest
# ...
```
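Once a custom parent image along these lines has been built and pushed, the pipeline can be pointed at it via `DockerSettings`. A minimal sketch, with a purely hypothetical image URI:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Placeholder image reference; use the custom parent image you built with
# keyrings.google-artifactregistry-auth pre-installed.
docker_settings = DockerSettings(
    parent_image="europe-west1-docker.pkg.dev/my-project/my-repo/zenml-parent:latest"
)

@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...
```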
2 changes: 1 addition & 1 deletion docs/book/component-guide/image-builders/kaniko.md
@@ -50,7 +50,7 @@ For more information and a full list of configurable attributes of the Kaniko im
The Kaniko image builder will create a Kubernetes pod that runs the build. This build pod needs to be able to pull from/push to certain container registries and, depending on the stack component configuration, also needs to be able to read from the artifact store:

* The pod needs to be authenticated to push to the container registry in your active stack.
-* In case the [parent image](../../how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline.md#using-a-custom-parent-image) you use in your `DockerSettings` is stored in a private registry, the pod needs to be authenticated to pull from this registry.
+* In case the [parent image](../../how-to/customize-docker-builds/docker-settings-on-a-pipeline.md#using-a-custom-parent-image) you use in your `DockerSettings` is stored in a private registry, the pod needs to be authenticated to pull from this registry.
* If you configured your image builder to store the build context in the artifact store, the pod needs to be authenticated to read files from the artifact store storage.

ZenML is not yet able to handle setting all of the credentials of the various combinations of container registries and artifact stores on the Kaniko build pod, which is why you're required to set this up yourself for now. The following section outlines how to handle it in the most straightforward (and probably also most common) scenario, when the Kubernetes cluster you're using for the Kaniko build is hosted on the same cloud provider as your container registry (and potentially the artifact store). For all other cases, check out the [official Kaniko repository](https://github.com/GoogleContainerTools/kaniko) for more information.
2 changes: 1 addition & 1 deletion docs/book/component-guide/model-deployers/seldon.md
@@ -239,7 +239,7 @@ If you want to use a custom persistent storage with Seldon Core, or if you prefe

**Advanced: Configuring a Custom Seldon Core Secret**

-The Seldon Core model deployer stack component allows configuring an additional `secret` attribute that can be used to specify custom credentials that Seldon Core should use to authenticate to the persistent storage service where models are located. This is useful if you want to connect Seldon Core to a persistent storage service that is not supported as a ZenML Artifact Store, or if you don't want to configure or use the same credentials configured for your Artifact Store. The `secret` attribute must be set to the name of [a ZenML secret](../../how-to/interact-with-secrets.md) containing credentials configured in the format supported by Seldon Core.
+The Seldon Core model deployer stack component allows configuring an additional `secret` attribute that can be used to specify custom credentials that Seldon Core should use to authenticate to the persistent storage service where models are located. This is useful if you want to connect Seldon Core to a persistent storage service that is not supported as a ZenML Artifact Store, or if you don't want to configure or use the same credentials configured for your Artifact Store. The `secret` attribute must be set to the name of [a ZenML secret](../../how-to/project-setup-and-management/interact-with-secrets.md) containing credentials configured in the format supported by Seldon Core.

{% hint style="info" %}
This method is not recommended, because it limits the Seldon Core model deployer to a single persistent storage service, whereas using the Artifact Store credentials gives you more flexibility in combining the Seldon Core model deployer with any Artifact Store in the same ZenML stack.
4 changes: 2 additions & 2 deletions docs/book/component-guide/orchestrators/airflow.md
@@ -159,7 +159,7 @@ of your Airflow deployment.
{% hint style="info" %}
ZenML will build a Docker image called `<CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>` which includes your code and use
it to run your pipeline steps in Airflow. Check
-out [this page](/docs/book/how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn
+out [this page](/docs/book/how-to/customize-docker-builds/README.md) if you want to learn
more about how ZenML builds these images and how you can customize them.
{% endhint %}

@@ -210,7 +210,7 @@ more information on how to specify settings.
#### Enabling CUDA for GPU-backed hardware

Note that if you wish to use this orchestrator to run steps on a GPU, you will need to
-follow [the instructions on this page](/docs/book/how-to/advanced-topics/training-with-gpus/README.md) to ensure that it
+follow [the instructions on this page](/docs/book/how-to/pipeline-development/training-with-gpus/README.md) to ensure that it
works. This requires some extra settings customization and is essential to enable CUDA so the GPU can deliver its full acceleration.
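As a rough illustration of the kind of settings customization involved (the values below are assumptions, not taken from this commit), a step can request GPU resources via `ResourceSettings`:

```python
from zenml import step
from zenml.config import ResourceSettings

# Request a GPU for this step; a CUDA-enabled parent image must also be
# configured (see the linked page) for the GPU to actually be usable.
@step(settings={"resources": ResourceSettings(gpu_count=1, memory="16GB")})
def train() -> None:
    ...
```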

2 changes: 1 addition & 1 deletion docs/book/component-guide/orchestrators/azureml.md
@@ -80,7 +80,7 @@ assign it the correct permissions and use it to [register a ZenML Azure Service
For each pipeline run, ZenML will build a Docker image called
`<CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>` which includes your code
and use it to run your pipeline steps in AzureML. Check out
-[this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to
+[this page](../../how-to/customize-docker-builds/README.md) if you want to
learn more about how ZenML builds these images and how you can customize them.

## AzureML UI
2 changes: 1 addition & 1 deletion docs/book/component-guide/orchestrators/custom.md
@@ -215,6 +215,6 @@ To see a full end-to-end worked example of a custom orchestrator, [see here](htt

### Enabling CUDA for GPU-backed hardware

-Note that if you wish to use your custom orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. This requires some extra settings customization and is essential to enable CUDA so the GPU can deliver its full acceleration.
+Note that if you wish to use your custom orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. This requires some extra settings customization and is essential to enable CUDA so the GPU can deliver its full acceleration.

<figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
