diff --git a/README.md b/README.md
index f8dde77b535..23da3e1666e 100644
--- a/README.md
+++ b/README.md
@@ -327,7 +327,7 @@ the Apache License Version 2.0.
Projects Showcase
- 🎉 Version 0.61.0 is out. Check out the release notes
+ 🎉 Version 0.62.0 is out. Check out the release notes
here.
🖥️ Download our VS Code Extension here.
diff --git a/RELEASE_NOTES.md b/RELEASE_NOTES.md
index 33aeca5f94d..ea2da8bd5f1 100644
--- a/RELEASE_NOTES.md
+++ b/RELEASE_NOTES.md
@@ -1,4 +1,44 @@
+# 0.62.0
+
+Building on top of the last release, this release adds a new and easy way to deploy a GCP ZenML stack from the dashboard and the CLI. Give it a try by going to the `Stacks` section in the dashboard or running the `zenml stack deploy` command! For more information on this new feature, check out [the video and blog](https://www.zenml.io/blog/easy-mlops-pipelines) from our previous release.
+
+We also [updated our Hugging Face integration](https://github.com/zenml-io/zenml/pull/2851) to support the automatic display of an embedded `datasets` preview pane in the ZenML Dashboard whenever you return a `Dataset` from a step. This preview was recently released by the Hugging Face datasets team, and it lets you visualize and inspect your data directly from the dashboard.
+
+## What's Changed
+
+* Fix release action docker limit by @schustmi in https://github.com/zenml-io/zenml/pull/2837
+* Upgrade ruff and yamlfix to latest versions before running formatting by @christianversloot in https://github.com/zenml-io/zenml/pull/2577
+* Fixed edge-case where step run is stored incompletely by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2827
+* Docs for stack registration + deployment wizards by @htahir1 in https://github.com/zenml-io/zenml/pull/2814
+* Make upgrade checks in formatting script optional by @avishniakov in https://github.com/zenml-io/zenml/pull/2839
+* Enable migration testing for version 0.61.0 by @schustmi in https://github.com/zenml-io/zenml/pull/2836
+* One-click GCP stack deployments by @stefannica in https://github.com/zenml-io/zenml/pull/2833
+* Only login to docker for PRs with secret access by @schustmi in https://github.com/zenml-io/zenml/pull/2842
+* Add GCP Stack creation Wizard (CLI) by @avishniakov in https://github.com/zenml-io/zenml/pull/2826
+* Update onboarding by @schustmi in https://github.com/zenml-io/zenml/pull/2794
+* Fix merged log files in step operator steps not being available on the main process by @avishniakov in https://github.com/zenml-io/zenml/pull/2795
+* Fix some broken links and copy-paste commands, and make secrets more visible by @htahir1 in https://github.com/zenml-io/zenml/pull/2848
+* Update stack deployment docs and other small fixes by @stefannica in https://github.com/zenml-io/zenml/pull/2846
+* Improved the `StepInterfaceError` message for missing inputs by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2849
+* add image pull secrets to k8s pod settings by @wjayesh in https://github.com/zenml-io/zenml/pull/2847
+* Include apt installation of libgomp1 for docker images with lightgbm by @AlexejPenner in https://github.com/zenml-io/zenml/pull/2813
+* Patch MLflow filtering by stage by @whoknowsB in https://github.com/zenml-io/zenml/pull/2798
+* Bump mlflow to version 2.14.2 by @christianversloot in https://github.com/zenml-io/zenml/pull/2825
+* Fix Accelerate string arguments passing by @avishniakov in https://github.com/zenml-io/zenml/pull/2845
+* Fix CI by @schustmi in https://github.com/zenml-io/zenml/pull/2850
+* Added some visualizations for the HF dataset by @htahir1 in https://github.com/zenml-io/zenml/pull/2851
+* Fix skypilot versioning for the lambda integration by @wjayesh in https://github.com/zenml-io/zenml/pull/2853
+* Improve custom visualization docs by @htahir1 in https://github.com/zenml-io/zenml/pull/2855
+* Fix list typo by @htahir1 in https://github.com/zenml-io/zenml/pull/2856
+* Endpoint to get existing and prospective resources for service connector by @avishniakov in https://github.com/zenml-io/zenml/pull/2854
+* Databricks integrations by @safoinme in https://github.com/zenml-io/zenml/pull/2823
+
+## New Contributors
+* @whoknowsB made their first contribution in https://github.com/zenml-io/zenml/pull/2798
+
+**Full Changelog**: https://github.com/zenml-io/zenml/compare/0.61.0...0.62.0
+
# 0.61.0
This release comes with a new and easy way to deploy an AWS ZenML stack from the dashboard and the CLI. Give it a try by going to the `Stacks` section in the dashboard or running the `zenml stack deploy` command!
diff --git a/docs/book/component-guide/model-deployers/databricks.md b/docs/book/component-guide/model-deployers/databricks.md
index 24dc94c0754..5b3c1db02fa 100644
--- a/docs/book/component-guide/model-deployers/databricks.md
+++ b/docs/book/component-guide/model-deployers/databricks.md
@@ -61,10 +61,10 @@ Within the `DatabricksServiceConfig` you can configure:
* `model_name`: The name of the model that will be served; this is used to identify the model in the Databricks Model Registry.
* `model_version`: The version of the model that will be served; this is used to identify the model version in the Databricks Model Registry.
-* `workload_size`: The size of the workload that the model will be serving. This can be `ServedModelInputWorkloadSize.SMALL`, `ServedModelInputWorkloadSize.MEDIUM`, or `ServedModelInputWorkloadSize.LARGE`, you can import this enum from `from databricks.sdk.service.serving import ServedModelInputWorkloadSize`.
+* `workload_size`: The size of the workload that the model will serve. This can be `Small`, `Medium`, or `Large`; the string is converted to the corresponding Databricks SDK enum value at deployment time.
* `scale_to_zero_enabled`: A boolean flag to enable or disable the scale to zero feature.
* `env_vars`: A dictionary of environment variables to be passed to the model serving container.
-* `workload_type`: The type of workload that the model will be serving. This can be `ServedModelInputWorkloadType.CPU`, `ServedModelInputWorkloadType.GPU_LARGE`, `ServedModelInputWorkloadType.GPU_MEDIUM`, `ServedModelInputWorkloadType.GPU_SMALL`, or `ServedModelInputWorkloadType.MULTIGPU_MEDIUM`, you can import this enum from `from databricks.sdk.service.serving import ServedModelInputWorkloadType`.
+* `workload_type`: The type of workload that the model will serve. This can be `CPU`, `GPU_LARGE`, `GPU_MEDIUM`, `GPU_SMALL`, or `MULTIGPU_MEDIUM`; the string is converted to the corresponding Databricks SDK enum value at deployment time.
* `endpoint_secret_name`: The name of the secret that will be used to secure the endpoint and authenticate requests.
For more information and a full list of configurable attributes of the Databricks Model Deployer, check out the [SDK Docs](https://sdkdocs.zenml.io/latest/integration\_code\_docs/integrations-databricks/#zenml.integrations.databricks.model\_deployers) and Databricks endpoint [code](https://github.com/databricks/databricks\_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/databricks\_hub/hf\_api.py#L6957).
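A hypothetical usage sketch of the string-valued fields described above (the import path, model name, and version are assumptions, not confirmed by these docs):

```python
from zenml.integrations.databricks.services.databricks_deployment import (
    DatabricksServiceConfig,  # import path is an assumption
)

config = DatabricksServiceConfig(
    model_name="my-model",    # name in the Databricks Model Registry
    model_version="1",
    workload_size="Small",    # plain string; converted to the SDK enum internally
    workload_type="CPU",
    scale_to_zero_enabled=True,
)
```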
diff --git a/docs/mocked_libs.json b/docs/mocked_libs.json
index 422723326d0..9744212fede 100644
--- a/docs/mocked_libs.json
+++ b/docs/mocked_libs.json
@@ -236,5 +236,10 @@
"azure.core.exceptions",
"azure.mgmt",
"azure.mgmt.resource",
- "kfp.client"
+ "kfp.client",
+ "databricks",
+ "databricks.sdk",
+ "databricks.sdk.service.compute",
+ "databricks.sdk.service.jobs",
+ "databricks.sdk.service.serving"
]
diff --git a/pyproject.toml b/pyproject.toml
index 11f0961b02f..171c6b8f815 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "zenml"
-version = "0.61.0"
+version = "0.62.0"
packages = [{ include = "zenml", from = "src" }]
description = "ZenML: Write production-ready ML code."
authors = ["ZenML GmbH "]
diff --git a/src/zenml/VERSION b/src/zenml/VERSION
index 821e2d60fba..7e9253a37f6 100644
--- a/src/zenml/VERSION
+++ b/src/zenml/VERSION
@@ -1 +1 @@
-0.61.0
\ No newline at end of file
+0.62.0
\ No newline at end of file
diff --git a/src/zenml/cli/stack_components.py b/src/zenml/cli/stack_components.py
index 10c6dcbd377..d0dd557c3ac 100644
--- a/src/zenml/cli/stack_components.py
+++ b/src/zenml/cli/stack_components.py
@@ -138,14 +138,16 @@ def describe_stack_component_command(name_id_or_prefix: str) -> None:
if component_.connector:
# We also need the flavor to get the connector requirements
- flavor = client.get_flavor_by_name_and_type(
+ connector_requirements = client.get_flavor_by_name_and_type(
name=component_.flavor, component_type=component_type
- )
+ ).connector_requirements
+ else:
+ connector_requirements = None
cli_utils.print_stack_component_configuration(
component=component_,
active_status=component_.id == active_component_id,
- connector_requirements=flavor.connector_requirements,
+ connector_requirements=connector_requirements,
)
print_model_url(get_component_url(component_))
diff --git a/src/zenml/constants.py b/src/zenml/constants.py
index f5c4b4ffe62..ae8b84721c3 100644
--- a/src/zenml/constants.py
+++ b/src/zenml/constants.py
@@ -173,6 +173,7 @@ def handle_int_env_var(var: str, default: int = 0) -> int:
"ZENML_PIPELINE_API_TOKEN_EXPIRES_MINUTES"
)
ENV_ZENML_IGNORE_FAILURE_HOOK = "ZENML_IGNORE_FAILURE_HOOK"
+ENV_ZENML_CUSTOM_SOURCE_ROOT = "ZENML_CUSTOM_SOURCE_ROOT"
# ZenML Server environment variables
ENV_ZENML_SERVER_PREFIX = "ZENML_SERVER_"
diff --git a/src/zenml/entrypoints/entrypoint.py b/src/zenml/entrypoints/entrypoint.py
index 9dad3d9e29e..999bd7e688e 100644
--- a/src/zenml/entrypoints/entrypoint.py
+++ b/src/zenml/entrypoints/entrypoint.py
@@ -15,6 +15,7 @@
import argparse
import logging
+import os
import sys
from zenml import constants
@@ -44,7 +45,8 @@ def main() -> None:
parser = argparse.ArgumentParser()
parser.add_argument(f"--{ENTRYPOINT_CONFIG_SOURCE_OPTION}", required=True)
args, remaining_args = parser.parse_known_args()
-
+ if os.environ.get(constants.ENV_ZENML_CUSTOM_SOURCE_ROOT):
+ source_utils.set_custom_source_root(source_root=os.getcwd())
entrypoint_config_class = source_utils.load_and_validate_class(
args.entrypoint_config_source,
expected_class=BaseEntrypointConfiguration,
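The new environment-variable check in the entrypoint can be sketched in isolation (a stand-in helper, not ZenML's actual API):

```python
import os
from typing import Optional

ENV_ZENML_CUSTOM_SOURCE_ROOT = "ZENML_CUSTOM_SOURCE_ROOT"


def resolve_source_root() -> Optional[str]:
    # Mirrors the entrypoint logic above: if the env var is set to any
    # non-empty value, the current working directory is used as the
    # custom source root; otherwise the default resolution applies.
    if os.environ.get(ENV_ZENML_CUSTOM_SOURCE_ROOT):
        return os.getcwd()
    return None
```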
diff --git a/src/zenml/integrations/databricks/flavors/databricks_model_deployer_flavor.py b/src/zenml/integrations/databricks/flavors/databricks_model_deployer_flavor.py
index 97b5528f936..0ce3fe30c18 100644
--- a/src/zenml/integrations/databricks/flavors/databricks_model_deployer_flavor.py
+++ b/src/zenml/integrations/databricks/flavors/databricks_model_deployer_flavor.py
@@ -15,10 +15,6 @@
from typing import TYPE_CHECKING, Dict, Optional, Type
-from databricks.sdk.service.serving import (
- ServedModelInputWorkloadSize,
- ServedModelInputWorkloadType,
-)
from pydantic import BaseModel
from zenml.integrations.databricks import DATABRICKS_MODEL_DEPLOYER_FLAVOR
@@ -37,10 +33,10 @@
class DatabricksBaseConfig(BaseModel):
"""Databricks Inference Endpoint configuration."""
- workload_size: ServedModelInputWorkloadSize
+ workload_size: str
scale_to_zero_enabled: bool = False
env_vars: Optional[Dict[str, str]] = None
- workload_type: Optional[ServedModelInputWorkloadType] = None
+ workload_type: Optional[str] = None
endpoint_secret_name: Optional[str] = None
diff --git a/src/zenml/integrations/databricks/orchestrators/databricks_orchestrator.py b/src/zenml/integrations/databricks/orchestrators/databricks_orchestrator.py
index 1d2d4f6bb54..4588a9dd522 100644
--- a/src/zenml/integrations/databricks/orchestrators/databricks_orchestrator.py
+++ b/src/zenml/integrations/databricks/orchestrators/databricks_orchestrator.py
@@ -29,7 +29,10 @@
from databricks.sdk.service.jobs import Task as DatabricksTask
from zenml.client import Client
-from zenml.constants import METADATA_ORCHESTRATOR_URL
+from zenml.constants import (
+ ENV_ZENML_CUSTOM_SOURCE_ROOT,
+ METADATA_ORCHESTRATOR_URL,
+)
from zenml.integrations.databricks.flavors.databricks_orchestrator_flavor import (
DatabricksOrchestratorConfig,
DatabricksOrchestratorSettings,
@@ -66,6 +69,7 @@
DATABRICKS_CLUSTER_DEFAULT_NAME = "zenml-databricks-cluster"
DATABRICKS_SPARK_DEFAULT_VERSION = "15.3.x-scala2.12"
DATABRICKS_JOB_ID_PARAMETER_REFERENCE = "{{job.id}}"
+DATABRICKS_ZENML_DEFAULT_CUSTOM_REPOSITORY_PATH = "."
class DatabricksOrchestrator(WheeledOrchestrator):
@@ -367,6 +371,9 @@ def _construct_databricks_pipeline(
if spark_env_vars:
for key, value in spark_env_vars.items():
env_vars[key] = value
+ env_vars[ENV_ZENML_CUSTOM_SOURCE_ROOT] = (
+ DATABRICKS_ZENML_DEFAULT_CUSTOM_REPOSITORY_PATH
+ )
fileio.rmtree(repository_temp_dir)
diff --git a/src/zenml/integrations/databricks/orchestrators/databricks_orchestrator_entrypoint_config.py b/src/zenml/integrations/databricks/orchestrators/databricks_orchestrator_entrypoint_config.py
index 3039bf11268..b6f74f858ff 100644
--- a/src/zenml/integrations/databricks/orchestrators/databricks_orchestrator_entrypoint_config.py
+++ b/src/zenml/integrations/databricks/orchestrators/databricks_orchestrator_entrypoint_config.py
@@ -22,7 +22,6 @@
from zenml.entrypoints.step_entrypoint_configuration import (
StepEntrypointConfiguration,
)
-from zenml.utils import source_utils
WHEEL_PACKAGE_OPTION = "wheel_package"
DATABRICKS_JOB_ID_OPTION = "databricks_job_id"
@@ -39,15 +38,6 @@ class DatabricksEntrypointConfiguration(StepEntrypointConfiguration):
allowed for Databricks Processor steps from their individual components.
"""
- def __init__(self, arguments: List[str]):
- """Initializes the entrypoint configuration.
-
- Args:
- arguments: Command line arguments to configure this object.
- """
- source_utils.set_custom_source_root(source_root=os.getcwd())
- super().__init__(arguments)
-
@classmethod
def get_entrypoint_options(cls) -> Set[str]:
"""Gets all options required for running with this configuration.
diff --git a/src/zenml/integrations/databricks/services/databricks_deployment.py b/src/zenml/integrations/databricks/services/databricks_deployment.py
index ad968367681..71b4163d363 100644
--- a/src/zenml/integrations/databricks/services/databricks_deployment.py
+++ b/src/zenml/integrations/databricks/services/databricks_deployment.py
@@ -204,6 +204,11 @@ def prediction_url(self) -> Optional[str]:
def provision(self) -> None:
"""Provision or update remote Databricks deployment instance."""
+ from databricks.sdk.service.serving import (
+ ServedModelInputWorkloadSize,
+ ServedModelInputWorkloadType,
+ )
+
tags = []
for key, value in self._get_databricks_deployment_labels().items():
tags.append(EndpointTag(key=key, value=value))
@@ -212,8 +217,12 @@ def provision(self) -> None:
model_name=self.config.model_name,
model_version=self.config.model_version,
scale_to_zero_enabled=self.config.scale_to_zero_enabled,
- workload_type=self.config.workload_type,
- workload_size=self.config.workload_size,
+ workload_type=ServedModelInputWorkloadType(
+ self.config.workload_type
+ ),
+ workload_size=ServedModelInputWorkloadSize(
+ self.config.workload_size
+ ),
)
databricks_endpoint = (
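The string-to-enum conversion above follows a common pattern: store plain strings in the config so the flavor module stays importable without the Databricks SDK, and convert only when the SDK is actually needed. A standalone sketch with a stand-in enum (the real one lives in `databricks.sdk.service.serving`; member values are assumed to match these strings):

```python
from enum import Enum


class WorkloadSize(str, Enum):
    # Stand-in for ServedModelInputWorkloadSize.
    SMALL = "Small"
    MEDIUM = "Medium"
    LARGE = "Large"


def coerce_size(value: str) -> WorkloadSize:
    # Enum lookup by value raises ValueError for unknown strings,
    # so invalid config values fail fast at provision time.
    return WorkloadSize(value)
```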
diff --git a/src/zenml/integrations/databricks/utils/databricks_utils.py b/src/zenml/integrations/databricks/utils/databricks_utils.py
index 2bf1dd4bc01..58b371c909a 100644
--- a/src/zenml/integrations/databricks/utils/databricks_utils.py
+++ b/src/zenml/integrations/databricks/utils/databricks_utils.py
@@ -20,6 +20,8 @@
from databricks.sdk.service.jobs import PythonWheelTask, TaskDependency
from databricks.sdk.service.jobs import Task as DatabricksTask
+from zenml import __version__
+
def convert_step_to_task(
task_name: str,
@@ -49,13 +51,8 @@ def convert_step_to_task(
for library in libraries:
db_libraries.append(Library(pypi=PythonPyPiLibrary(library)))
db_libraries.append(Library(whl=zenml_project_wheel))
- # TODO: Remove this hardcoding
db_libraries.append(
- Library(
- pypi=PythonPyPiLibrary(
- "git+https://github.com/zenml-io/zenml.git@feature/databricks-integrations"
- )
- )
+ Library(pypi=PythonPyPiLibrary(f"zenml=={__version__}"))
)
return DatabricksTask(
task_key=task_name,
diff --git a/src/zenml/zen_server/deploy/helm/Chart.yaml b/src/zenml/zen_server/deploy/helm/Chart.yaml
index 26f50b21d9f..09dc33dd437 100644
--- a/src/zenml/zen_server/deploy/helm/Chart.yaml
+++ b/src/zenml/zen_server/deploy/helm/Chart.yaml
@@ -1,6 +1,6 @@
apiVersion: v2
name: zenml
-version: "0.61.0"
+version: "0.62.0"
description: Open source MLOps framework for portable production ready ML pipelines
keywords:
- mlops
diff --git a/src/zenml/zen_server/deploy/helm/README.md b/src/zenml/zen_server/deploy/helm/README.md
index 5cacf9ee3f5..a3149601e57 100644
--- a/src/zenml/zen_server/deploy/helm/README.md
+++ b/src/zenml/zen_server/deploy/helm/README.md
@@ -20,8 +20,8 @@ ZenML is an open-source MLOps framework designed to help you create robust, main
To install the ZenML chart directly from Amazon ECR, use the following command:
```bash
-# example command for version 0.61.0
-helm install my-zenml oci://public.ecr.aws/zenml/zenml --version 0.61.0
+# example command for version 0.62.0
+helm install my-zenml oci://public.ecr.aws/zenml/zenml --version 0.62.0
```
Note: Ensure you have OCI support enabled in your Helm client and that you are authenticated with Amazon ECR.
diff --git a/src/zenml/zen_stores/migrations/versions/0.62.0_release.py b/src/zenml/zen_stores/migrations/versions/0.62.0_release.py
new file mode 100644
index 00000000000..5eaabfd328c
--- /dev/null
+++ b/src/zenml/zen_stores/migrations/versions/0.62.0_release.py
@@ -0,0 +1,23 @@
+"""Release [0.62.0].
+
+Revision ID: 0.62.0
+Revises: b4fca5241eea
+Create Date: 2024-07-15 14:15:45.347033
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = "0.62.0"
+down_revision = "b4fca5241eea"
+branch_labels = None
+depends_on = None
+
+
+def upgrade() -> None:
+ """Upgrade database schema and/or data, creating a new revision."""
+ pass
+
+
+def downgrade() -> None:
+ """Downgrade database schema and/or data back to the previous revision."""
+ pass