This repository has been archived by the owner on May 11, 2024. It is now read-only.

Releases: intel/tools

Release 1.1.0

05 Aug 16:43

This release of Intel® AI Quantization Tools for TensorFlow* 1.1.0 is released under the v1.1.0 tag (https://github.com/IntelAI/tools/tree/v1.1.0). Please note that Intel® AI Quantization Tools for TensorFlow* requires Intel® Optimizations for TensorFlow. Intel® AI Quantization Tools for TensorFlow* 1.1.0 is the last official release; the code has been migrated to the Intel® Low Precision Inference Tool (iLiT) as one of its backend framework engines. This revision contains the following features and fixes:

New functionality:

  • Add experimental KL Divergence and Moving Average algorithms to the tool

  • Add support for using the Intel® AI Quantization Tool for TensorFlow* Python Programming APIs for the following out-of-box models:

    • ResNet-50v1.0

    • ResNet-50v1.5

    • ResNet-101

    • SSD-ResNet34

    • MobileNetv1

    • SSD-MobileNet

    • Faster-RCNN

    • R-FCN

    • Inception_v3

    • inception_v1

    • inception_v2

    • inception_v4

    • vgg_16

    • vgg_19

    • mobilenet_v2

    • mobilenet_v1

    • resnet_v1_152

    • resnet_v1_50

    • mask_rcnn_resnet50 (not for ITF2.1)

    • mask_rcnn_resnet101 (not for ITF2.1)

    • mask_rcnn_inception_v2

    • mask_rcnn_inception_resnet_v2 (not for ITF2.1)

    • faster_rcnn_resnet101

    • rfcn_resnet101

    • faster_rcnn_resnet50

    • faster_rcnn_inception_v2

    • ssd_resnet50_v1

    • ssd_mobilenet_v1

    • resnet_v2_101

    • resnet_v2_152

    • resnet_v2_50

    • inception_resnet_v2

  • Add Convolution/FusedBatchNormV3 fusion support
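
The experimental KL Divergence and Moving Average algorithms mentioned above both choose quantization ranges from calibration statistics. As a rough illustration only (plain Python; the function names and details are invented here and are not the tool's actual implementation), a KL-based threshold search and a moving-average range update might look like:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) over matching histogram bins; bins where P is empty are skipped."""
    p_sum, q_sum = sum(p), sum(q)
    kl = 0.0
    for pi, qi in zip(p, q):
        if pi > 0:
            if qi == 0:
                return float("inf")
            kl += (pi / p_sum) * math.log((pi / p_sum) / (qi / q_sum))
    return kl

def kl_threshold(hist, bin_width, num_quant_bins=128):
    """Search for the clipping threshold whose quantized histogram is closest
    (in KL divergence) to the reference activation histogram."""
    best_i, best_kl = len(hist), float("inf")
    for i in range(num_quant_bins, len(hist) + 1):
        p = list(hist[:i])
        p[-1] += sum(hist[i:])            # fold clipped outliers into the last bin
        # Collapse the i reference bins into num_quant_bins coarse bins.
        q = [0.0] * num_quant_bins
        counts = [0] * num_quant_bins
        for j in range(i):
            k = j * num_quant_bins // i
            q[k] += p[j]
            counts[k] += 1
        # Expand back to i bins, spreading each coarse bin's mass uniformly.
        expanded = [q[j * num_quant_bins // i] / counts[j * num_quant_bins // i]
                    for j in range(i)]
        kl = kl_divergence(p, expanded)
        if kl < best_kl:
            best_kl, best_i = kl, i
    return best_i * bin_width

def update_range(ema_max, observed_max, momentum=0.9):
    """Moving-average calibration: smooth each batch's observed max into a running range."""
    return momentum * ema_max + (1.0 - momentum) * observed_max
```

The idea in both cases is to trade a little clipping of outliers for finer resolution over the bulk of the activation distribution.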

Bug fixes:

  • Fix several bugs introduced by quantize_graph refactor.

  • Fix the flake8 errors.

Release 1.0.0

22 Apr 00:44

Intel® AI Quantization Tools for TensorFlow* 1.0 supports Intel® Optimized TensorFlow v1.15.0, v2.0.0, and v2.1.0, and adds support for using the Intel® AI Quantization Tool for TensorFlow* Python Programming APIs for out-of-box models. For TensorFlow 2.1.0, s8 support is also added.

New functionality:

  • Add Convolution s8 support for TensorFlow 2.1.
  • Add optimize_for_inference() equivalent logic to the tool.
  • Add support for using the Intel® AI Quantization Tool for TensorFlow* Python Programming APIs for the following out-of-box models:
    • inception_v1
    • inception_v2
    • inception_v4
    • vgg_16
    • vgg_19
    • resnet_v1_152
    • resnet_v1_50
  • Add support for TensorFlow 2.1.
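
Here s8 refers to signed 8-bit integer quantization. A minimal symmetric s8 sketch (plain Python, purely illustrative; not the tool's actual code) of how a tensor maps into the signed range:

```python
def quantize_s8(values):
    """Symmetric signed-int8 quantization: one scale, zero point fixed at 0,
    so the real value 0.0 maps exactly to the integer 0."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    return [max(-128, min(127, round(v / scale))) for v in values], scale

def dequantize_s8(quantized, scale):
    """Recover approximate real values from the s8 representation."""
    return [q * scale for q in quantized]
```

Keeping the zero point at 0 is what distinguishes this symmetric scheme from an unsigned u8 scheme with an offset, and it simplifies the integer arithmetic inside fused kernels.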

Bug fixes:

  • Fix bugs found in s8 test.
  • Fix several bugs introduced by quantize_graph refactor.
  • Fix the flake8 errors.

Release 1.0b

07 Feb 08:50
Pre-release

This release of Intel® AI Quantization Tools for TensorFlow* 1.0 Beta is released under the v1.0b tag (https://github.com/IntelAI/tools/tree/v1.0b). Please note that Intel® AI Quantization Tools for TensorFlow* requires Intel® Optimizations for TensorFlow. This revision contains the following features and fixes:

New functionality:

• Add .whl pip and conda installation support for Python 3.4/3.5/3.6/3.7, removing the TensorFlow source build dependency for Intel® Optimizations for TensorFlow 1.14.0, 1.15.0, and 2.0.0.
• Add three entry points to run quantization for specific models under api/examples/: a bash command for Model Zoo models, a bash command for custom models, and direct calls to the Python programming APIs.
• Add a Dockerfile for users to build the Docker container.
• Add debug mode and support for excluding ops and nodes in the Python programming APIs.
• Add the Bridge interface with Model Zoo for Intel® Architecture.
• Add a Python implementation of summarize_graph to remove the dependency on TensorFlow source.
• Add Python implementations of freeze_min/max, freeze_requantization_ranges, fuse_quantized_conv_and_requantize, rerange_quantized_concat, and insert_logging from transform_graph to remove the dependency on the TensorFlow source build.
• Add per-channel support.
• Add support for using the Intel® AI Quantization Tool for TensorFlow* Python Programming APIs for the following models:
  • ResNet50
  • ResNet50 v1.5
  • SSD-MobileNet
  • SSD-ResNet34
  • ResNet101
  • MobileNet
  • Inception V3
  • Faster-RCNN
  • RFCN
• Add new procedures to the README.
• Add support for TensorFlow 1.14.0.
• Add support for TensorFlow 1.15.0.
• Add support for TensorFlow 2.0.
• Add support for Model Zoo for Intel® Architecture 1.5.
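
Per-channel support means each output channel of a convolution gets its own quantization scale rather than sharing one tensor-wide scale, which protects channels with small weight magnitudes. A toy comparison (plain Python, illustrative only; not the tool's actual code):

```python
def per_tensor_scale(weights):
    """One s8 scale for the whole weight tensor (rows = output channels)."""
    return max(abs(v) for row in weights for v in row) / 127.0

def per_channel_scales(weights):
    """One s8 scale per output channel, so small channels keep their resolution."""
    return [max(abs(v) for v in row) / 127.0 for row in weights]
```

For example, with weights [[0.1, -0.2], [10.0, -5.0]], the per-tensor scale (10/127) would map the first channel's weights to just a few integer levels, while its per-channel scale (0.2/127) uses the full ±127 range.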

Bug fixes:

• Fix several bugs in the Python rewrite of transform_graph ops.
• Fix data type issues in optimize_for_inference.
• Fix the bug for MobileNet and ResNet101.
• Clean up the hardcode for Faster-RCNN and RFCN.
• Fix the pylint errors.

v0.4.0

06 Dec 00:52

This release enables the easy-to-use “Intel® AI Quantization Tool for TensorFlow* Python Programming APIs”, which automate the quantization process and improve the user experience. It supports the TensorFlow 1.14.0 and 1.15.0 releases.

New functionality:

  • Enable the easy-to-use “Intel® AI Quantization Tool for TensorFlow* Python Programming APIs”, automating the quantization process and improving the user experience.

  • Add support for using the Intel® AI Quantization Tool for TensorFlow* Python Programming APIs for the following models:

    • ResNet50

    • ResNet50 v1.5

    • SSD-MobileNet

    • SSD-ResNet34

  • Add the Intel® AI Quantization Tool for TensorFlow* quantization procedure document.

  • Add support for TensorFlow 1.14.0.

  • Add support for TensorFlow 1.15.0.

  • Add support for Model Zoo for Intel® Architecture 1.4.

Bug fixes:

  • Enhance the Pad op fusion for ResNet50 v1.5.

  • Fix the docker build.

  • Fix the bug for Wide & Deep.

  • Fix the bug for ResNet101.

  • Fix the pylint errors.

  • Fix the BUILD for graph_transform.

  • Fix re-quantize fusion for the non-per-channel case.

  • Fix re-quantize op fusion to be more generic.

  • Fix per-channel quantization.

  • Fix excluded ops and nodes.

  • Fix the bugs for integration test.

  • Clean up unused files.

v0.1.5

18 May 00:34

This release fixes requantize operator fusion issues for MiniGo. It also adds offline bias quantization when per-channel quantization of weights is used.

v0.1.0

03 Apr 03:20
90433ef

Release 0.1.0

Overview

  • Initial release of Intel AI Quantization Tools for TensorFlow
  • Dockerfile to build quantization tools
  • Script to launch quantization
  • Quantize graph python script
  • Test scripts for the quantization workflow

Graph Transforms

  • fold_convolutionwithbias_mul
  • fold_subdivmul_batch_norms
  • fuse_quantized_conv_and_requantize
  • mkl_fuse_pad_and_conv
  • rerange_quantized_concat

Test Scripts

  • faster_rcnn
  • inceptionv3
  • inceptionv4
  • inception_resnet_v2
  • rfcn
  • resnet101
  • resnet50
  • ssd_mobilenet