UNet FP32 inference

Description

This document has instructions for running UNet FP32 inference using Intel Optimized TensorFlow.

Quick Start Scripts

| Script name | Description |
|-------------|-------------|
| `fp32_inference.sh` | Runs inference with a batch size of 1 using a pretrained model |

Run the model

Set up your environment using the instructions below, depending on whether you are using AI Kit:

Setup using AI Kit

AI Kit does not currently support TF 1.15.2 models.

Setup without AI Kit

To run without AI Kit, you will need the following (a setup sketch follows this list):

  • Python 3
  • intel-tensorflow==1.15.2
  • numactl
  • numpy==1.16.3
  • Pillow>=9.3.0
  • matplotlib
  • click
  • wget
  • A clone of the Model Zoo repo
    git clone https://github.com/IntelAI/models.git
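Since the Python package versions above are pinned, one way to install them is inside an isolated virtual environment. A minimal setup sketch (the environment name unet_venv is illustrative; numactl is a system utility rather than a pip package, shown here with a Debian/Ubuntu install command):

# Create and activate a virtual environment (name is illustrative)
python3 -m venv unet_venv
source unet_venv/bin/activate

# Install the pinned Python dependencies
pip install intel-tensorflow==1.15.2 numpy==1.16.3 'Pillow>=9.3.0' \
    matplotlib click wget

# numactl is a system package, not a pip package (Debian/Ubuntu example)
sudo apt-get install -y numactl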

Running UNet also requires a clone of the tf_unet repository with PR #276, which adds CPU optimizations. Set the TF_UNET_DIR env var to the path of your clone.

git clone https://github.com/jakeret/tf_unet.git
cd tf_unet/
git fetch origin pull/276/head:cpu_optimized
git checkout cpu_optimized
export TF_UNET_DIR=$(pwd)
cd ..
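As an optional sanity check, you can confirm that the cpu_optimized branch is checked out:

# Should print "cpu_optimized"
git -C "$TF_UNET_DIR" rev-parse --abbrev-ref HEAD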

Download and extract the pretrained model, and set the PRETRAINED_MODEL env var to the extracted path.

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/unet_fp32_pretrained_model.tar.gz
tar -xvf unet_fp32_pretrained_model.tar.gz
export PRETRAINED_MODEL=$(pwd)/unet_trained
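Optionally, list the extracted directory to confirm the download; it should contain the TensorFlow checkpoint files for the trained model:

# Verify the extracted pretrained model files
ls "$PRETRAINED_MODEL"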

After your environment is set up, set the OUTPUT_DIR environment variable to the directory where log files will be written. Ensure that the TF_UNET_DIR and PRETRAINED_MODEL paths are still set from the previous commands. Once all of the environment variables are set, you can run a quickstart script.

# cd to your model zoo directory
cd models

export OUTPUT_DIR=<path to the directory where log files will be written>
export TF_UNET_DIR=<path to the TF UNet directory tf_unet>
export PRETRAINED_MODEL=<path to the pretrained model>
# For a custom batch size, set the BATCH_SIZE env var; otherwise the script runs with a default value.
export BATCH_SIZE=<customized batch size value>

./quickstart/image_segmentation/tensorflow/unet/inference/cpu/fp32/fp32_inference.sh
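For example, a filled-in invocation might look like the following (the paths are illustrative; adjust them to your own environment):

# Example with illustrative paths
export OUTPUT_DIR=$HOME/unet_output
export TF_UNET_DIR=$HOME/tf_unet
export PRETRAINED_MODEL=$HOME/unet_trained
export BATCH_SIZE=1  # optional; omit to use the script's default

./quickstart/image_segmentation/tensorflow/unet/inference/cpu/fp32/fp32_inference.sh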

Additional Resources