Adding README files for Intel® Data Center Flex Series GPUs #125

Merged: 1 commit, Mar 13, 2023
2 changes: 1 addition & 1 deletion README.md
@@ -1,6 +1,6 @@
# Model Zoo for Intel® Architecture

This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors and Intel® Data Center GPUs.

Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® Developer Catalog](https://software.intel.com/containers).

23 changes: 23 additions & 0 deletions docs/general/FLEX_DEVCATALOG.md
@@ -0,0 +1,23 @@
# Model Zoo for Intel® Architecture Workloads Optimized for the Intel® Data Center GPU Flex Series

This document provides links to step-by-step instructions on how to leverage Model Zoo docker containers to run optimized open-source Deep Learning inference workloads using Intel® Extension for PyTorch* and Intel® Extension for TensorFlow* on the [Intel® Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html).

## Base Containers

| AI Framework | Extension | Documentation |
| -----------------------------| ------------- | ----------------- |
| PyTorch | Intel® Extension for PyTorch* | [Intel® Extension for PyTorch Container](https://github.com/IntelAI/models/blob/master/quickstart/ipex-tool-container/gpu/devcatalog.md) |
| TensorFlow | Intel® Extension for TensorFlow* | [Intel® Extension for TensorFlow Container](https://github.com/IntelAI/models/blob/master/quickstart/tf-tool-container/gpu/devcatalog.md)|
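
For example, the PyTorch base container listed above can be pulled directly; the tag below is the one referenced in the linked PyTorch container documentation (check the TensorFlow documentation for its corresponding image name):

```
docker pull intel/intel-extension-for-pytorch:xpu-flex
```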

## Optimized Workloads

The table below provides links to run each workload in a docker container. The containers are optimized for Linux*.


| Model | Framework | Mode | Documentation | Dataset |
| ----------------------------| ---------- | ----------| ------------------- | ------------ |
| [ResNet 50 v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | TensorFlow | Inference| [INT8](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/devcatalog.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
| [ResNet 50 v1.5](https://arxiv.org/pdf/1512.03385.pdf) | PyTorch | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/devcatalog.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
| [SSD-MobileNet v1](https://arxiv.org/pdf/1704.04861.pdf) | PyTorch | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/devcatalog.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
| [YOLO v4](https://arxiv.org/pdf/1704.04861.pdf) | PyTorch | Inference |[INT8](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/yolov4/inference/gpu/devcatalog.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
| [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | TensorFlow | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/devcatalog.md)| [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) |
@@ -0,0 +1,102 @@
# Running ResNet50 v1.5 Inference with Int8 on Intel® Data Center GPU Flex Series using Intel® Extension for PyTorch*


## Overview

This document has instructions for running ResNet50 v1.5 inference using Intel® Extension for PyTorch* on the Intel® Data Center GPU Flex Series.

## Requirements
| Item | Detail |
| ------ | ------- |
| Host machine | Intel® Data Center GPU Flex Series |
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
| Software | Docker* Installed |
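
As an optional host-side sanity check (not part of the requirements table above), you can confirm that the GPU device nodes the container will use are present and that Docker* is available:

```
# The container is later started with --device=/dev/dri, so these nodes must exist
ls /dev/dri
# Confirm Docker is installed
docker --version
```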

## Get Started

### Download Datasets

The [ImageNet](http://www.image-net.org/) validation dataset is used.

Download and extract the ImageNet 2012 validation dataset from http://www.image-net.org/, then move the validation images into labeled subfolders using [the valprep.sh shell script](https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh).
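
The commands below are a minimal sketch of that preparation, assuming the validation tarball `ILSVRC2012_img_val.tar` has already been downloaded into `imagenet/val`:

```
# Extract the validation images and sort them into per-synset subfolders
cd imagenet/val
tar -xf ILSVRC2012_img_val.tar
wget https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
bash valprep.sh
```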

After running the data prep script, your folder structure should look something like this:

```
imagenet
└── val
├── ILSVRC2012_img_val.tar
├── n01440764
│ ├── ILSVRC2012_val_00000293.JPEG
│ ├── ILSVRC2012_val_00002138.JPEG
│ ├── ILSVRC2012_val_00003014.JPEG
│ ├── ILSVRC2012_val_00006697.JPEG
│ └── ...
└── ...
```
The folder that contains the `val` directory should be set as the
`DATASET_DIR`
(for example: `export DATASET_DIR=/home/<user>/imagenet`).

## Quick Start Scripts

| Script name | Description |
|-------------|-------------|
| `inference_block_format.sh` | Runs ResNet50 inference (block format) for the specified precision (int8) |

## Run Using Docker

### Set up Docker Image

```
docker pull intel/image-recognition:pytorch-flex-gpu-resnet50v1-5-inference
```
### Run Docker Image
The ResNet50 v1.5 inference container includes the scripts, model, and libraries needed to run INT8 inference. To run the `inference_block_format.sh` quickstart script using this container, you'll need to provide volume mounts for the ImageNet dataset and an output directory where log files will be written.

```
export PRECISION=int8
export OUTPUT_DIR=<path to output directory>
export DATASET_DIR=<path to the preprocessed imagenet dataset>
export SCRIPT=quickstart/inference_block_format.sh

DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
IMAGE_NAME=intel/image-recognition:pytorch-flex-gpu-resnet50v1-5-inference


VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')

test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"

docker run \
-v <your-local-dir>:/workspace \
--group-add ${VIDEO} \
${RENDER_GROUP} \
--device=/dev/dri \
--ipc=host \
--env PRECISION=${PRECISION} \
--env OUTPUT_DIR=${OUTPUT_DIR} \
--env DATASET_DIR=${DATASET_DIR} \
--env http_proxy=${http_proxy} \
--env https_proxy=${https_proxy} \
--env no_proxy=${no_proxy} \
--volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
--volume ${DATASET_DIR}:${DATASET_DIR} \
${DOCKER_ARGS} \
${IMAGE_NAME} \
/bin/bash $SCRIPT
```
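
`DOCKER_ARGS` defaults to an interactive run (`--rm -it`). If you prefer to run the workload in the background, you can override it before invoking the command above (an optional variation, not part of the published instructions):

```
# Run the container detached instead of interactively
export DOCKER_ARGS="--rm -d"
```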

## Documentation and Sources

[GitHub* Repository](https://github.com/IntelAI/models/tree/master/dockerfiles/model_containers)

## Support
Support for Intel® Extension for PyTorch* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for PyTorch* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.

## License Agreement

LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the [license file](https://github.com/IntelAI/models/tree/master/third_party) for additional details.
@@ -1,56 +1,65 @@
# ResNet50 v1.5 Inference
# Running ResNet50 v1.5 Inference with Int8 on Intel® Data Center GPU Flex Series using Intel® Extension for TensorFlow*

## Description
## Overview

This document has instructions for running ResNet50 v1.5 inference using Intel(R) Extension for TensorFlow* with Intel(R) Data Center GPU Flex Series.

## Datasets

## Requirements
| Item | Detail |
| ------ | ------- |
| Host machine | Intel® Data Center GPU Flex Series |
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
| Software | Docker* Installed |

## Get Started

### Download Datasets

Download and preprocess the ImageNet dataset using the [instructions here](https://github.com/IntelAI/models/blob/master/datasets/imagenet/README.md).
After running the conversion script you should have a directory with the
ImageNet dataset in the TF records format.

Set the `DATASET_DIR` to point to the TF records directory when running ResNet50 v1.5.
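
For example (an illustrative path; substitute the directory where your converted dataset actually lives):

```
export DATASET_DIR=/home/<user>/imagenet_tfrecords
```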

## Quick Start Scripts
### Quick Start Scripts

| Script name | Description |
|:-------------:|:-------------:|
| `online_inference` | Runs online inference for int8 precision |
| `batch_inference` | Runs batch inference for int8 precision |
| `accuracy` | Measures the model accuracy for int8 precision |

## Docker

Requirements:
* Host machine has Intel(R) Data Center GPU Flex Series
* Follow instructions to install GPU-compatible driver [419.40](https://dgpu-docs.intel.com/releases/stable_419_40_20220914.html)
* Docker
## Run Using Docker

### Docker pull command:
### Set up Docker Image

```
docker pull intel/image-recognition:tf-atsm-gpu-resnet50v1-5-inference
docker pull intel/image-recognition:tf-flex-gpu-resnet50v1-5-inference
```

### Run Docker Image
The ResNet50 v1.5 inference container includes the scripts, model, and libraries needed to run INT8 inference. To run one of the inference quickstart scripts using this container, you'll need to provide volume mounts for the ImageNet dataset when running the `accuracy.sh` script; for `online_inference.sh` and `batch_inference.sh`, a dummy dataset is used. You will also need to provide an output directory where log files will be written.

```
export PRECISION=int8
export OUTPUT_DIR=<path to output directory>
export DATASET_DIR=<path to the preprocessed imagenet dataset>
IMAGE_NAME=intel/image-recognition:tf-atsm-gpu-resnet50v1-5-inference
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
IMAGE_NAME=intel/image-recognition:tf-flex-gpu-resnet50v1-5-inference

VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')

test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"

docker run \
-v <your-local-dir>:/workspace \
--group-add ${VIDEO} \
${RENDER_GROUP} \
--device=/dev/dri \
--ipc=host \
--privileged \
--env PRECISION=${PRECISION} \
--env OUTPUT_DIR=${OUTPUT_DIR} \
--env DATASET_DIR=${DATASET_DIR} \
@@ -59,16 +68,22 @@ docker run \
--env no_proxy=${no_proxy} \
--volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
--volume ${DATASET_DIR}:${DATASET_DIR} \
--rm -it \
$IMAGE_NAME \
${DOCKER_ARGS} \
${IMAGE_NAME} \
/bin/bash quickstart/<script name>.sh
```
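
For example, to measure accuracy you would pass the corresponding script from the Quick Start Scripts table as the final argument (assuming the `accuracy` entry maps to `accuracy.sh`, as the dataset note above suggests):

```
/bin/bash quickstart/accuracy.sh
```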

## Documentation and Sources

**Get Started**
[GitHub* Repository](https://github.com/IntelAI/models/tree/master/dockerfiles/model_containers)

## Summary and Next Steps

Now you are inside the container with Python 3.9 and TensorFlow 2.10.0 preinstalled. You can run your own scripts on the Intel GPU.

[Docker* Repository](https://hub.docker.com/r/intel/image-recognition)
## Support
Support for Intel® Extension for TensorFlow* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for TensorFlow* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-tensorflow/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.

## License Agreement

87 changes: 87 additions & 0 deletions quickstart/ipex-tool-container/gpu/devcatalog.md
@@ -0,0 +1,87 @@
# Optimizations for Intel® Data Center GPU Flex Series using Intel® Extension for PyTorch*

## Overview

This document has instructions for running the Intel® Extension for PyTorch* (IPEX) GPU container.

## Requirements
| Item | Detail |
| ------ | ------- |
| Host machine | Intel® Data Center GPU Flex Series |
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
| Software | Docker* Installed |

## Get Started

### Installing the Intel® Extension for PyTorch*
#### Docker pull command:

`docker pull intel/intel-extension-for-pytorch:xpu-flex`

### Running container:

Run the following commands to start the IPEX GPU tools container. You can use the `-v` option to mount your local directory into the container; the `-v` argument can be omitted if you do not need access to a local directory from inside the container. Pass the video and render groups to your Docker container so that the GPU is accessible.
```
IMAGE_NAME=intel/intel-extension-for-pytorch:xpu-flex
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}

VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')

test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"

docker run --rm \
-v <your-local-dir>:/workspace \
--group-add ${VIDEO} \
${RENDER_GROUP} \
--device=/dev/dri \
--ipc=host \
-e http_proxy=$http_proxy \
-e https_proxy=$https_proxy \
-e no_proxy=$no_proxy \
${DOCKER_ARGS} \
${IMAGE_NAME} \
bash
```

#### Verify that the XPU is accessible from PyTorch:
You are now inside the container. Run the following command to verify that the XPU is visible to PyTorch:
```
python -c "import torch;print(torch.device('xpu'))"
```
The sample output looks like this:
```
xpu
```
Then, verify that the XPU device is available to IPEX:
```
python -c "import intel_extension_for_pytorch as ipex;print(ipex.xpu.is_available())"
```
The sample output looks like this:
```
True
```
Finally, use the following command to check whether oneMKL support is enabled by default:
```
python -c "import intel_extension_for_pytorch as ipex;print(ipex.xpu.has_onemkl())"
```
The sample output looks like this:
```
True
```
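
As an additional, unofficial smoke test, you can run a small tensor operation on the XPU device (this assumes the checks above reported the device as available):

```
python -c "import torch; import intel_extension_for_pytorch as ipex; print((torch.ones(2, 2, device='xpu') * 2).cpu())"
```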

## Summary and Next Steps
Now you are inside the container with Python 3.9, PyTorch, and IPEX preinstalled. You can run your own scripts on the Intel GPU.

## Documentation and Sources

[GitHub* Repository](https://github.com/intel/intel-extension-for-pytorch/tree/master/docker)


## Support
Support for Intel® Extension for PyTorch* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for PyTorch* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.