Release 1.4.0

@jitendra42 released this 03 Jul 20:13 · commit 3a8206d

New scripts:

  • lm-1b FP32 inference
  • MobileNet V1 Int8 inference
  • DenseNet 169 FP32 inference
  • SSD-VGG16 FP32 and Int8 inference
  • SSD-ResNet34 Int8 inference
  • ResNet50 v1.5 FP32 and Int8 inference
  • Inception V3 FP32 inference using TensorFlow Serving

Other script changes and bug fixes:

  • Updated SSD-MobileNet accuracy script to take a full path to the coco_val.records, rather than a directory
  • Added a deprecation warning for using checkpoint files
  • Changed Inception ResNet V2 FP32 to use a frozen graph rather than checkpoints
  • Added support for custom volume mounts when running with docker
  • Moved model default env var configs to config.json files
  • Added support for dummy data with MobileNet V1 FP32
  • Added support for TCMalloc (enabled by default for Int8 models)
  • Updated model zoo unit tests to use JSON files for model parameters
  • Made the reference file optional for Transformer LT performance testing
  • Added iteration time to accuracy scripts
  • Updated Transformer LT Official to support num_inter and num_intra threads
  • Fixed path to the calibration script for ResNet101 Int8
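The custom volume mount support might be used along these lines. This is a hypothetical invocation: the flag name `--volume`, the model name, and the docker image tag are assumptions, not taken from this release; check the launch script's `--help` for the actual interface.

```shell
# Hypothetical example of mounting a custom host directory into the model
# container in addition to the default mounts. The --volume flag name and
# all argument values below are illustrative assumptions.
python launch_benchmark.py \
    --model-name resnet50v1_5 \
    --precision fp32 \
    --mode inference \
    --docker-image intel/intel-optimized-tensorflow:1.14.0 \
    --volume /home/user/custom_data:/custom_data
```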

New tutorials:

  • Transformer LT inference using TensorFlow
  • Transformer LT inference using TensorFlow Serving
  • ResNet50 Int8 inference using TensorFlow Serving
  • SSD-MobileNet inference using TensorFlow Serving

Documentation updates:

  • Added Contribute.md doc with instructions on adding new models
  • Added note about setting environment variables when running on bare metal
  • Updated model README files to use TensorFlow 1.14.0 docker images (except for Wide and Deep Int8)
  • Updated the Faster R-CNN Int8 README file to clarify that performance testing uses raw images
  • Fixed docker build command in the TensorFlow Serving Installation Guide
  • Updated the NCF documentation to remove a line of code that caused an error
  • Updated mlperf/inference branch and paths in README file

Known issues:

  • RFCN FP32 accuracy is not working with the gcr.io/deeplearning-platform-release/tf-cpu.1-14 docker image
  • The TensorFlow Serving Installation Guide still shows example commands that build version 1.13. This will be updated to 1.14 when the official TensorFlow Serving release tag exists. To build version 1.14 now, you can use one of the following values for TF_SERVING_VERSION_GIT_BRANCH in your multi-stage docker build: "1.14.0-rc0" or "r1.14".
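The workaround above might look like the following. This is a sketch, not a tested command: the repository clone step, the Dockerfile path, and the image tag are assumptions based on the upstream tensorflow/serving layout; only the `TF_SERVING_VERSION_GIT_BRANCH` values come from this release note.

```shell
# Sketch: build TensorFlow Serving 1.14 before the official release tag exists,
# by overriding TF_SERVING_VERSION_GIT_BRANCH in the multi-stage docker build.
# The Dockerfile location is an assumption; adjust to your checkout.
git clone https://github.com/tensorflow/serving.git
cd serving
docker build \
    --build-arg TF_SERVING_VERSION_GIT_BRANCH="r1.14" \
    -t tensorflow/serving:1.14 \
    -f tensorflow_serving/tools/docker/Dockerfile .
```

Substituting `"1.14.0-rc0"` for `"r1.14"` pins the release candidate instead of the release branch.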