
TensorFlow MobilenetV2 1.4

quic-bharathr released this 29 Dec 20:54

General

Post-QAT checkpoint for mobilenetv2-1.4. Quantization was performed after Batch Norm folding, using the tf quant scheme for encodings and the default configuration file. Note that the Batch Norms in this checkpoint are already folded.

Quantized Accuracy: 74.11%

Quantizer Op Assumptions

The included evaluation script uses the default config file, which configures the quantizer ops with the following assumptions (a small numerical sketch of the asymmetric scheme follows this list):

  • Weight quantization: 8 bits, asymmetric quantization
  • Bias parameters are not quantized
  • Activation quantization: 8 bits, asymmetric quantization
  • Model inputs are not quantized
  • Operations that shuffle data, such as reshape or transpose, do not require additional quantizers

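For reference, the sketch below shows how an 8-bit asymmetric (min/max based) encoding of the kind described above can be computed and applied to a tensor. It is illustrative only; the function names and the use of NumPy are not part of the released checkpoint or evaluation script.

```python
import numpy as np

def asymmetric_encoding(x, bitwidth=8):
    """Compute an asymmetric encoding (scale, offset) from a tensor's min/max.

    Illustrative only -- mirrors the 8-bit asymmetric scheme described above,
    not the exact quantizer op implementation.
    """
    qmin, qmax = 0, 2 ** bitwidth - 1
    x_min = min(float(np.min(x)), 0.0)   # encoding range must include zero
    x_max = max(float(np.max(x)), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    offset = round(x_min / scale) if scale > 0 else 0
    return scale, offset

def quantize_dequantize(x, scale, offset, bitwidth=8):
    """Simulate quantization: map to the integer grid, clamp, and map back to float."""
    qmin, qmax = 0, 2 ** bitwidth - 1
    q = np.clip(np.round(x / scale) - offset, qmin, qmax)
    return (q + offset) * scale

# Example: quantize-dequantize a random weight tensor
w = np.random.randn(3, 3, 32, 64).astype(np.float32)
scale, offset = asymmetric_encoding(w)
w_q = quantize_dequantize(w, scale, offset)
print("max error:", np.abs(w - w_q).max())
```
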
Contents

The tarball contains the following files:

  • checkpoint – Text file for TensorFlow to find the latest checkpoint
  • model.data-00000-of-00001, model.index, model.meta – Model checkpoint data, index, and meta graph files
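
As a minimal sketch of how these files might be restored (assuming a TensorFlow 1.x-compatible API and that the tarball has been extracted to ./mobilenetv2_ckpt, a hypothetical path):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

ckpt_dir = "./mobilenetv2_ckpt"  # hypothetical extraction directory

with tf.Session() as sess:
    # Rebuild the graph from the meta file and restore weights from the data/index files.
    saver = tf.train.import_meta_graph(f"{ckpt_dir}/model.meta")
    # latest_checkpoint reads the 'checkpoint' text file to locate the restore prefix.
    saver.restore(sess, tf.train.latest_checkpoint(ckpt_dir))
    # The graph is now loaded; tensors can be looked up by name, e.g. (name is illustrative):
    # logits = sess.graph.get_tensor_by_name("MobilenetV2/Predictions/Reshape_1:0")
```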