diff --git a/modules/enabling-habana-gaudi-devices.adoc b/modules/enabling-habana-gaudi-devices.adoc index f663b209..51ba3cbf 100644 --- a/modules/enabling-habana-gaudi-devices.adoc +++ b/modules/enabling-habana-gaudi-devices.adoc @@ -59,4 +59,5 @@ The *Add toleration* dialog opens. [role='_additional-resources'] .Additional resources -* link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator for OpenShift]. +* link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.10 for OpenShift]. +* link:https://docs.habana.ai/en/v1.13.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.13 for OpenShift]. diff --git a/modules/habana-gaudi-integration.adoc b/modules/habana-gaudi-integration.adoc index a3f22947..acd66838 100644 --- a/modules/habana-gaudi-integration.adoc +++ b/modules/habana-gaudi-integration.adoc @@ -4,20 +4,22 @@ = Habana Gaudi integration [role='_abstract'] -To accelerate your high-performance deep learning (DL) models, you can integrate Habana Gaudi devices in {productname-short}. {productname-short} also includes the HabanaAI notebook image. This notebook image is pre-built and ready for your data scientists to use after you install or upgrade {productname-short}. +To accelerate your high-performance deep learning (DL) models, you can integrate Habana Gaudi devices in {productname-short}. {productname-short} also includes the HabanaAI workbench image, which is pre-built and ready for your data scientists to use after you install or upgrade {productname-short}. -Before you can successfully enable Habana Gaudi devices in {productname-short}, you must install the necessary dependencies and install the HabanaAI Operator. This allows your data scientists to use Habana libraries and software associated with Habana Gaudi devices from their notebooks. 
For more information about how to enable your OpenShift environment for Habana Gaudi devices, see link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator for OpenShift]. +Before you can enable Habana Gaudi devices in {productname-short}, you must install the necessary dependencies and the version of the HabanaAI Operator that matches the Habana version of the HabanaAI workbench image in your deployment. This allows your data scientists to use Habana libraries and software associated with Habana Gaudi devices from their workbenches. + +For more information about how to enable your OpenShift environment for Habana Gaudi devices, see link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.10 for OpenShift] and link:https://docs.habana.ai/en/v1.13.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.13 for OpenShift]. [IMPORTANT] ==== Currently, Habana Gaudi integration is only supported in OpenShift {ocp-minimum-version}. -You can use Habana Gaudi accelerators on {productname-short} with version 1.10.0 of the Habana Gaudi Operator. For information about the supported configurations for version 1.10 of the Habana Gaudi Operator, see link:https://docs.habana.ai/en/latest/Support_Matrix/Support_Matrix_v1.10.0.html#support-matrix-1-10-0[Support Matrix v1.10.0]. +You can use Habana Gaudi accelerators on {productname-short} with versions 1.10.0 and 1.13.0 of the Habana Gaudi Operator. The version of the HabanaAI Operator that you install must match the Habana version of the HabanaAI workbench image in your deployment. As a result, only one version of the HabanaAI workbench image is usable at a time. -In addition, the version of the HabanaAI Operator that you install must match the version of the HabanaAI notebook image in your deployment. 
+For information about the supported configurations for versions 1.10 and 1.13 of the Habana Gaudi Operator, see link:https://docs.habana.ai/en/latest/Support_Matrix/Support_Matrix_v1.10.0.html#support-matrix-1-10-0[Support Matrix v1.10.0] and link:https://docs.habana.ai/en/latest/Support_Matrix/Support_Matrix_v1.13.0.html#support-matrix-1-13-0[Support Matrix v1.13.0]. ==== -You can use Habana Gaudi devices in an Amazon EC2 DL1 instance on OpenShift. Therefore, your OpenShift platform must support EC2 DL1 instances. Habana Gaudi accelerators are available to your data scientists when they create a workbench, serve a model, and create a notebook. +You can use Habana Gaudi devices in an Amazon EC2 DL1 instance on OpenShift. Therefore, your OpenShift platform must support EC2 DL1 instances. Habana Gaudi accelerators are available to your data scientists when they create a workbench instance or serve a model. To identify the Habana Gaudi devices present in your deployment, use the `lspci` utility. For more information, see link:https://linux.die.net/man/8/lspci[lspci(8) - Linux man page]. 
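As an illustration of the `lspci` check described above (not part of the patch itself), the following sketch lists PCI devices and filters for Habana entries. The `grep` pattern is an assumption based on the vendor string that current Habana devices report; the exact device strings vary by Gaudi generation.

```shell
# Illustrative check: list PCI devices and keep only Habana entries.
# A case-insensitive match on the vendor name ("Habana Labs Ltd.") is
# assumed to be sufficient on current systems.
lspci | grep -i habana || echo "No Habana Gaudi devices detected."
```

On a machine with Gaudi cards, such as an EC2 DL1 instance, each card typically appears as its own `Processing accelerators` entry in the output.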
@@ -30,8 +32,10 @@ Before you can use your Habana Gaudi devices, you must enable them in your OpenS [role="_additional-resources"] .Additional resources -* link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator for OpenShift] +* link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.10 for OpenShift] +* link:https://docs.habana.ai/en/v1.13.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.13 for OpenShift] * link:https://linux.die.net/man/8/lspci[lspci(8) - Linux man page] * link:https://aws.amazon.com/ec2/instance-types/dl1/[Amazon EC2 DL1 Instances] * link:https://docs.habana.ai/en/latest/Support_Matrix/Support_Matrix_v1.10.0.html#support-matrix-1-10-0[Support Matrix v1.10.0] +* link:https://docs.habana.ai/en/latest/Support_Matrix/Support_Matrix_v1.13.0.html#support-matrix-1-13-0[Support Matrix v1.13.0] * link:https://access.redhat.com/solutions/4870701[What version of the Kubernetes API is included with each OpenShift 4.x release?] diff --git a/modules/options-for-notebook-server-environments.adoc b/modules/options-for-notebook-server-environments.adoc index ed04dff3..70064c9e 100644 --- a/modules/options-for-notebook-server-environments.adoc +++ b/modules/options-for-notebook-server-environments.adoc @@ -21,10 +21,9 @@ ifndef::upstream[] -- Notebook images are supported for a minimum of one year. Major updates to preconfigured notebook images occur about every six months. Therefore, two supported notebook image versions are typically available at any given time. Legacy notebook image versions, that is, not the two most recent versions, might still be available for selection. Legacy image versions include a label that indicates the image is out-of-date. -From {productname-short} 2.5, version 1.2 of notebook images is no longer supported. 
Notebooks that are already running on version 1.2 of an image will continue to work normally, but it is not available to select for new users or notebooks. +Versions 1.2 and 2023.1 of notebook images are no longer supported. Notebooks that are already running on versions 1.2 or 2023.1 of an image will continue to work normally, but they are not available to select for new users or notebooks. To use the latest package versions, {org-name} recommends that you use the most recently added notebook image. - -- endif::[] + @@ -43,43 +42,65 @@ endif::[] | Image name | Image version | Preinstalled packages .3+| CUDA -| 2023.2 (Recommended) +| 2024.1 (Recommended) +a| * CUDA 12.1 +* Python 3.9 +* JupyterLab 3.6 +* Notebook 6.5 + +| 2023.2 a| * CUDA 11.8 * Python 3.9 * JupyterLab 3.6 * Notebook 6.5 -| 2023.1 +| 2023.1 (Deprecated) a| * CUDA 11.8 * Python 3.9 * JupyterLab 3.5 * Notebook 6.5 -| 1.2 -a| * CUDA 11.4 -* Python 3.8 -* JupyterLab 3.2 -* Notebook 6.4 - .3+| Minimal Python (default) +| 2024.1 (Recommended) +a| * Python 3.9 +* JupyterLab 3.6 +* Notebook 6.5 -| 2023.2 (Recommended) +| 2023.2 a| * Python 3.9 * JupyterLab 3.6 * Notebook 6.5 -| 2023.1 +| 2023.1 (Deprecated) a| * Python 3.9 * JupyterLab 3.5 * Notebook 6.5 -| 1.2 -a| * Python 3.8 -* JupyterLab 3.2 -* Notebook 6.4 - .3+| PyTorch -| 2023.2 (Recommended) +| 2024.1 (Recommended) +a| * CUDA 12.1 +* Python 3.9 +* PyTorch 2.2 +* JupyterLab 3.6 +* Notebook 6.5 +* TensorBoard 2.16 +* Boto3 1.34 +* Kafka-Python 2.0 +* Kfp 2.7 +* Matplotlib 3.8 +* Numpy 1.26 +* Pandas 2.2 +* Scikit-learn 1.4 +* SciPy 1.12 +* ODH-Elyra 3.16 +* PyMongo 4.6 +* Pyodbc 5.1 +* Codeflare-SDK 0.14 +* Sklearn-onnx 1.16 +* Psycopg 3.1 +* MySQL Connector/Python 8.3 + +| 2023.2 a| * CUDA 11.8 * Python 3.9 * PyTorch 2.0 @@ -102,7 +123,7 @@ a| * CUDA 11.8 * Psycopg 3.1 * MySQL Connector/Python 8.0 -| 2023.1 +| 2023.1 (Deprecated) a| * CUDA 11.8 * Python 3.9 * PyTorch 1.13 @@ -119,23 +140,28 @@ a| * CUDA 11.8 * SciPy 1.10 * Elyra 3.15 -| 1.2 -a| * 
CUDA 11.4 -* Python 3.8 -* PyTorch 1.8 -* JupyterLab 3.2 -* Notebook 6.4 -* TensorBoard 2.6 -* Boto3 1.17 +.3+| Standard Data Science +| 2024.1 (Recommended) +a| * Python 3.9 +* JupyterLab 3.6 +* Notebook 6.5 +* Boto3 1.34 * Kafka-Python 2.0 -* Matplotlib 3.4 -* Numpy 1.19 -* Pandas 1.2 -* Scikit-learn 0.24 -* SciPy 1.6 +* Kfp 2.7 +* Matplotlib 3.8 +* Pandas 2.2 +* Numpy 1.26 +* Scikit-learn 1.4 +* SciPy 1.12 +* ODH-Elyra 3.16 +* PyMongo 4.6 +* Pyodbc 5.1 +* Codeflare-SDK 0.14 +* Sklearn-onnx 1.16 +* Psycopg 3.1 +* MySQL Connector/Python 8.3 -.3+| Standard Data Science -| 2023.2 (Recommended) +| 2023.2 a| * Python 3.9 * JupyterLab 3.6 * Notebook 6.5 @@ -155,7 +181,7 @@ a| * Python 3.9 * Psycopg 3.1 * MySQL Connector/Python 8.0 -| 2023.1 +| 2023.1 (Deprecated) a| * Python 3.9 * JupyterLab 3.5 * Notebook 6.5 @@ -169,20 +195,31 @@ a| * Python 3.9 * SciPy 1.10 * Elyra 3.15 -| 1.2 -a| * Python 3.8 -* JupyterLab 3.2 -* Notebook 6.4 -* Boto3 1.17 +.3+| TensorFlow +| 2024.1 (Recommended) +a| * CUDA 12.1 +* Python 3.9 +* JupyterLab 3.6 +* Notebook 6.5 +* TensorFlow 2.15 +* TensorBoard 2.15 +* Boto3 1.34 * Kafka-Python 2.0 -* Matplotlib 3.4 -* Pandas 1.2 -* Numpy 1.19 -* Scikit-learn 0.24 -* SciPy 1.6 +* Kfp 2.7 +* Matplotlib 3.8 +* Numpy 1.26 +* Pandas 2.2 +* Scikit-learn 1.4 +* SciPy 1.12 +* ODH-Elyra 3.16 +* PyMongo 4.6 +* Pyodbc 5.1 +* Codeflare-SDK 0.14 +* Sklearn-onnx 1.16 +* Psycopg 3.1 +* MySQL Connector/Python 8.3 -.3+| TensorFlow -| 2023.2 (Recommended) +| 2023.2 a| * CUDA 11.8 * Python 3.9 * JupyterLab 3.6 @@ -205,7 +242,7 @@ a| * CUDA 11.8 * Psycopg 3.1 * MySQL Connector/Python 8.0 -| 2023.1 +| 2023.1 (Deprecated) a| * CUDA 11.8 * Python 3.9 * JupyterLab 3.5 @@ -222,23 +259,29 @@ a| * CUDA 11.8 * SciPy 1.10 * Elyra 3.15 -| 1.2 -a| * CUDA 11.4 -* Python 3.8 -* JupyterLab 3.2 -* Notebook 6.4 -* TensorFlow 2.7 -* TensorBoard 2.6 -* Boto3 1.17 +.3+| TrustyAI +| 2024.1 (Recommended) +a| * Python 3.9 +* JupyterLab 3.6 +* Notebook 6.5 +* TrustyAI 0.5 +* Boto3 1.34 * 
Kafka-Python 2.0 -* Matplotlib 3.4 -* Numpy 1.19 -* Pandas 1.2 -* Scikit-learn 0.24 -* SciPy 1.6 - -.2+| TrustyAI -| 2023.2 (Recommended) +* Kfp 2.7 +* Matplotlib 3.6 +* Numpy 1.24 +* Pandas 1.5 +* Scikit-learn 1.4 +* SciPy 1.12 +* ODH-Elyra 3.16 +* PyMongo 4.6 +* Pyodbc 5.1 +* Codeflare-SDK 0.14 +* Sklearn-onnx 1.16 +* Psycopg 3.1 +* MySQL Connector/Python 8.3 + +| 2023.2 a| * Python 3.9 * JupyterLab 3.6 * Notebook 6.5 @@ -259,7 +302,7 @@ a| * Python 3.9 * Psycopg 3.1 * MySQL Connector/Python 8.0 -| 2023.1 +| 2023.1 (Deprecated) a| * Python 3.9 * JupyterLab 3.5 * Notebook 6.5 @@ -274,8 +317,25 @@ a| * Python 3.9 * SciPy 1.10 * Elyra 3.15 -| HabanaAI -| 2023.2 (Recommended) +.2+| HabanaAI +| 2024.1 (Recommended) +a|* Python 3.8 +* Habana 1.13 +* JupyterLab 3.6 +* Boto3 1.34 +* Kafka-Python 2.0 +* Kfp 2.7 +* Matplotlib 3.7 +* Numpy 1.23 +* Pandas 2.0 +* Scikit-learn 1.3 +* Scipy 1.10 +* TensorFlow 2.13 +* PyTorch 2.1 +* ODH-Elyra v3.16 + + +| 2023.2 a| * Python 3.8 * Habana 1.10 * JupyterLab 3.5 @@ -292,12 +352,27 @@ a| * Python 3.8 * Elyra 3.15 ifndef::upstream[] -| code-server (Technology Preview) +.2+| code-server (Technology Preview) endif::[] ifdef::upstream[] -| code-server +.2+| code-server endif::[] -| 2023.2 (Recommended) +| 2024.1 (Recommended) +a| * Python 3.9 +* Boto3 1.29 +* Kafka-Python 2.0 +* Matplotlib 3.8 +* Numpy 1.26 +* Pandas 2.1 +* Plotly 5.18 +* Scikit-learn 1.3 +* Scipy 1.11 +* Sklearn-onnx 1.15 +* Ipykernel 6.26 +* (code-server plugin) Python 2024.2.1 +* (code-server plugin) Jupyter 2023.9.100 + +| 2023.2 a| * Python 3.9 * Boto3 1.29 * Kafka-Python 2.0 @@ -314,7 +389,7 @@ a| * Python 3.9 ifdef::upstream[] | RStudio Server -| 2023.2 (Recommended) +| 2024.1 (Recommended) a| * Python 3.9 * R 4.3 endif::[] @@ -322,7 +397,7 @@ endif::[] ifndef::upstream[] ifdef::cloud-service[] | RStudio Server (Technology preview) -| 2023.2 (Recommended) +| 2024.1 (Recommended) a| * Python 3.9 * R 4.3 [IMPORTANT] @@ -335,19 +410,20 @@ endif::[] ifdef::upstream[] 
| CUDA - RStudio Server -| 2023.2 (Recommended) +| 2024.1 (Recommended) a| * Python 3.9 -* CUDA 11.8 +* CUDA 12.1 * R 4.3 endif::[] ifndef::upstream[] ifdef::cloud-service[] | CUDA - RStudio Server (Technology preview) -| 2023.2 (Recommended) +| 2024.1 (Recommended) a| * Python 3.9 -* CUDA 11.8 +* CUDA 12.1 * R 4.3 + [IMPORTANT] ==== *Disclaimer:* + diff --git a/modules/overview-of-accelerators.adoc b/modules/overview-of-accelerators.adoc index 448191ce..ed4a8be2 100644 --- a/modules/overview-of-accelerators.adoc +++ b/modules/overview-of-accelerators.adoc @@ -19,13 +19,14 @@ If you work with large data sets, you can use accelerators to optimize the perfo * Habana Gaudi devices (HPUs) ** Habana, an Intel company, provides hardware accelerators intended for deep learning workloads. You can use the Habana libraries and software associated with Habana Gaudi devices available from your notebook. -** Before you can successfully enable Habana Gaudi devices on {productname-short}, you must install the necessary dependencies and version 1.10 of the HabanaAI Operator. For more information about how to enable your OpenShift environment for Habana Gaudi devices, see link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator for OpenShift]. +** Before you can enable Habana Gaudi devices in {productname-short}, you must install the necessary dependencies and the version of the HabanaAI Operator that matches the Habana version of the HabanaAI workbench image in your deployment. For more information about how to enable your OpenShift environment for Habana Gaudi devices, see link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.10 for OpenShift] and link:https://docs.habana.ai/en/v1.13.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.13 for OpenShift]. ** You can enable Habana Gaudi devices on-premises or with AWS DL1 compute nodes on an AWS instance. 
Before you can use an accelerator in {productname-short}, your OpenShift instance must contain an associated accelerator profile. For accelerators that are new to your deployment, you must configure an accelerator profile for the accelerator in context. You can create an accelerator profile from the *Settings* -> *Accelerator profiles* page on the {productname-short} dashboard. If your deployment contains existing accelerators that had associated accelerator profiles already configured, an accelerator profile is automatically created after you upgrade to the latest version of {productname-short}. [role="_additional-resources"] .Additional resources -* link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator for OpenShift] +* link:https://docs.habana.ai/en/v1.10.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.10 for OpenShift] +* link:https://docs.habana.ai/en/v1.13.0/Orchestration/HabanaAI_Operator/index.html[HabanaAI Operator v1.13 for OpenShift] * link:https://habana.ai/[Habana, an Intel Company] * link:https://aws.amazon.com/ec2/instance-types/dl1/[Amazon EC2 DL1 Instances] * link:https://linux.die.net/man/8/lspci[lspci(8) - Linux man page]
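Because the installed HabanaAI Operator version must match the Habana version of the workbench image, a cluster administrator may want to verify which version is present before enabling the devices. The following is a sketch under the assumption that the Operator was installed through Operator Lifecycle Manager (OLM), so its version appears in a ClusterServiceVersion; the exact CSV name varies by installation, so it matches loosely on "habana" rather than guessing a name.

```shell
# Sketch, assuming an OLM-based install: list ClusterServiceVersions
# across all namespaces and filter for the HabanaAI Operator to confirm
# whether v1.10 or v1.13 is installed.
oc get csv --all-namespaces 2>/dev/null | grep -i habana || echo "No HabanaAI Operator CSV found."
```

The reported version should correspond to the Habana version listed for the HabanaAI workbench image in the table of supported notebook images.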