Bump Versions 22.09 #361

Merged · 5 commits · Sep 16, 2022
README.md (4 changes: 2 additions & 2 deletions)
@@ -86,10 +86,10 @@ To run the built "release" container, use the following:
./docker/run_container_release.sh
```

- You can specify different Docker images and tags by passing the script the `DOCKER_IMAGE_TAG`, and `DOCKER_IMAGE_TAG` variables respectively. For example, to run version `v22.08.00a` use the following:
+ You can specify different Docker images and tags by passing the script the `DOCKER_IMAGE_TAG`, and `DOCKER_IMAGE_TAG` variables respectively. For example, to run version `v22.09.00a` use the following:

```bash
- DOCKER_IMAGE_TAG="v22.08.00a-runtime" ./docker/run_container_release.sh
+ DOCKER_IMAGE_TAG="v22.09.00a-runtime" ./docker/run_container_release.sh
```
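
As a side note on the usage shown above, the image name can be overridden together with the tag; a minimal sketch, assuming the companion variable is called `DOCKER_IMAGE_NAME` (an assumption, not confirmed by this diff):

```bash
# Hypothetical invocation: override both the image name and the tag for the release script.
# DOCKER_IMAGE_NAME is an assumed variable name; only DOCKER_IMAGE_TAG appears in the diff above.
DOCKER_IMAGE_NAME="nvcr.io/nvidia/morpheus/morpheus" \
DOCKER_IMAGE_TAG="v22.09.00a-runtime" \
./docker/run_container_release.sh
```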

### Build from Source
cmake/dependencies.cmake (6 changes: 3 additions & 3 deletions)
@@ -104,17 +104,17 @@ include(deps/Configure_rdkafka)

# SRF (Should come after all third party but before NVIDIA repos)
# =====
- set(SRF_VERSION 22.08 CACHE STRING "Which version of SRF to use")
+ set(SRF_VERSION 22.09 CACHE STRING "Which version of SRF to use")
include(deps/Configure_srf)

# CuDF
# =====
- set(CUDF_VERSION "${RAPIDS_VERSION}" CACHE STRING "Which version of cuDF to use")
+ set(CUDF_VERSION "${MORPHEUS_RAPIDS_VERSION}" CACHE STRING "Which version of cuDF to use")
include(deps/Configure_cudf)

# Triton-client
# =====
- set(TRITONCLIENT_VERSION "${RAPIDS_VERSION}" CACHE STRING "Which version of TritonClient to use")
+ set(TRITONCLIENT_VERSION "${MORPHEUS_RAPIDS_VERSION}" CACHE STRING "Which version of TritonClient to use")
include(deps/Configure_TritonClient)

list(POP_BACK CMAKE_MESSAGE_CONTEXT)
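
Since these versions are declared as CMake cache variables, they can also be overridden at configure time instead of editing this file; a minimal sketch (the source and build directory arguments are illustrative, not taken from this PR):

```bash
# Hypothetical configure step: -D overrides the CACHE STRING defaults set above,
# e.g. the SRF and RAPIDS versions, without modifying cmake/dependencies.cmake.
cmake -S . -B build \
  -DSRF_VERSION=22.09 \
  -DMORPHEUS_RAPIDS_VERSION=22.08
```
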
cmake/deps/Configure_srf.cmake (2 changes: 1 addition & 1 deletion)
@@ -41,7 +41,7 @@ function(find_and_configure_srf version)
"SRF_PYTHON_INPLACE_BUILD OFF"
"SRF_PYTHON_PERFORM_INSTALL ON"
"SRF_PYTHON_BUILD_STUBS ${MORPHEUS_BUILD_PYTHON_STUBS}"
- "RMM_VERSION ${RAPIDS_VERSION}"
+ "RMM_VERSION ${MORPHEUS_RAPIDS_VERSION}"
)

endfunction()
cmake/import-rapids-cmake.cmake (4 changes: 2 additions & 2 deletions)
@@ -14,8 +14,8 @@
# limitations under the License.


- set(RAPIDS_VERSION "22.08" CACHE STRING "Global default version for all Rapids project dependencies")
- set(RAPIDS_CMAKE_VERSION "${RAPIDS_VERSION}" CACHE STRING "Version of rapids-cmake to use")
+ set(MORPHEUS_RAPIDS_VERSION "22.08" CACHE STRING "Global default version for all Rapids project dependencies")
+ set(RAPIDS_CMAKE_VERSION "${MORPHEUS_RAPIDS_VERSION}" CACHE STRING "Version of rapids-cmake to use")

# Download and load the repo according to the rapids-cmake instructions if it does not exist
# NOTE: Use a different file than RAPIDS.cmake because MatX will just override it: https://github.com/NVIDIA/MatX/pull/192
docker/conda/environments/cuda11.5_dev.yml (2 changes: 1 addition & 1 deletion)
@@ -78,7 +78,7 @@ dependencies:
- scikit-build=0.13
- sphinx
- sphinx_rtd_theme
- - srf 22.08.*
+ - srf 22.09.*
- sysroot_linux-64=2.17
- tqdm=4
- typing_utils=0.1
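
When a pin such as the srf version changes, the development environment needs to be refreshed from this file; a minimal sketch, assuming an existing conda environment named `morpheus` (the environment name is an assumption, not taken from this diff):

```bash
# Hypothetical refresh of an existing dev environment after the srf pin bump.
# "morpheus" is an assumed environment name; adjust it to your local setup.
mamba env update -n morpheus -f docker/conda/environments/cuda11.5_dev.yml
```
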
docs/source/index.rst (4 changes: 2 additions & 2 deletions)
@@ -71,13 +71,13 @@ To get started, first pull the NGC container:

.. code-block:: console

- $ docker pull nvcr.io/nvidia/morpheus/morpheus:22.08-runtime
+ $ docker pull nvcr.io/nvidia/morpheus/morpheus:22.09-runtime

Launch an interactive container to start using Morpheus:

.. code-block:: console

- $ docker run --rm -ti --net=host --gpus=all nvcr.io/nvidia/morpheus/morpheus:22.08-runtime bash
+ $ docker run --rm -ti --net=host --gpus=all nvcr.io/nvidia/morpheus/morpheus:22.09-runtime bash
(morpheus) root@958a683a8a26:/workspace# morpheus --help
Usage: morpheus [OPTIONS] COMMAND [ARGS]...
Options:
--debug / --no-debug [default: False]
docs/source/morpheus_quickstart_guide.md (12 changes: 6 additions & 6 deletions)
@@ -165,7 +165,7 @@ The Morpheus AI Engine consists of the following components:
Follow the below steps to install Morpheus AI Engine:

```bash
- $ helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-ai-engine-22.08.tgz --username='$oauthtoken' --password=$API_KEY --untar
+ $ helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-ai-engine-22.09.tgz --username='$oauthtoken' --password=$API_KEY --untar
```
```bash
$ helm install --set ngc.apiKey="$API_KEY" \
```

@@ -207,7 +207,7 @@ replicaset.apps/zookeeper-87f9f4dd 1 1 1 54s
Run the following command to pull the Morpheus SDK Client chart on to your instance:

```bash
- $ helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-sdk-client-22.08.tgz --username='$oauthtoken' --password=$API_KEY --untar
+ $ helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-sdk-client-22.09.tgz --username='$oauthtoken' --password=$API_KEY --untar
```

#### Morpheus SDK Client in Sleep Mode
@@ -245,7 +245,7 @@ $ kubectl -n $NAMESPACE exec sdk-cli-helper -- cp -RL /workspace/models /common
The Morpheus MLflow Triton Plugin is used to deploy, update, and remove models from the Morpheus AI Engine. The MLflow server UI can be accessed using NodePort 30500. Follow the below steps to install the Morpheus MLflow Triton Plugin:

```bash
- $ helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-mlflow-22.08.tgz --username='$oauthtoken' --password=$API_KEY --untar
+ $ helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-mlflow-22.09.tgz --username='$oauthtoken' --password=$API_KEY --untar
```
```bash
$ helm install --set ngc.apiKey="$API_KEY" \
```

@@ -1146,9 +1146,9 @@ This section lists solutions to problems you might encounter with Morpheus or fr



- [Morpheus Pipeline Examples]: https://github.com/NVIDIA/Morpheus/tree/branch-22.08/examples
- [Morpheus Contribution]: https://github.com/NVIDIA/Morpheus/blob/branch-22.08/CONTRIBUTING.md
- [Morpheus Developer Guide]: https://github.com/NVIDIA/Morpheus/tree/branch-22.08/docs/source/developer_guide/guides
+ [Morpheus Pipeline Examples]: https://github.com/NVIDIA/Morpheus/tree/branch-22.09/examples
+ [Morpheus Contribution]: https://github.com/NVIDIA/Morpheus/blob/branch-22.09/CONTRIBUTING.md
+ [Morpheus Developer Guide]: https://github.com/NVIDIA/Morpheus/tree/branch-22.09/docs/source/developer_guide/guides
[Triton Inference Server Model Configuration]: https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md
[NVIDIA’s Cloud Native Core Stack]: https://github.com/NVIDIA/cloud-native-core
[NGC Registry CLI User Guide]: https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_4_1