Removed Optimization TFLite flag from quantize.py
Removed the TFLite optimization flag from the FP32 TFLite conversion, since it produced a hybrid model (a mix of INT8 and FP32 operations), which is not supported by TFLM.
fabrizioaymone authored Jun 8, 2024
1 parent 11d3beb commit 6ef0f97
Showing 1 changed file with 0 additions and 1 deletion.
1 change: 0 additions & 1 deletion benchmark/training/keyword_spotting/quantize.py
@@ -12,7 +12,6 @@
 print(f"Converting trained model {Flags.saved_model_path} to TFL model at {Flags.tfl_file_name}")
 model = tf.keras.models.load_model(Flags.saved_model_path)
 converter = tf.lite.TFLiteConverter.from_keras_model(model)
-converter.optimizations = [tf.lite.Optimize.DEFAULT]

 fp32_tfl_file_name = Flags.tfl_file_name[:Flags.tfl_file_name.rfind('.')] + '_float32.tflite'
 tflite_float_model = converter.convert()
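For context on the fix: setting `converter.optimizations = [tf.lite.Optimize.DEFAULT]` without a representative dataset applies dynamic-range ("hybrid") quantization, i.e. INT8 weights with FP32 activations, which TFLM does not support. Dropping the flag yields a pure FP32 model; full-integer INT8 requires the flag plus a representative dataset. A minimal sketch, assuming TensorFlow is installed (the tiny stand-in model below is illustrative only; the real script loads `Flags.saved_model_path`):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; quantize.py loads the trained keyword-spotting model instead.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

# FP32 path (what this commit restores): no optimizations flag, so every op
# stays float32 and the model is safe for TFLM.
fp32_converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_float_model = fp32_converter.convert()

# INT8 path: Optimize.DEFAULT alone would give a hybrid model; adding a
# representative dataset and constraining ops to TFLITE_BUILTINS_INT8
# forces full-integer quantization instead.
def representative_data():
    for _ in range(16):
        yield [np.random.rand(1, 8).astype(np.float32)]

int8_converter = tf.lite.TFLiteConverter.from_keras_model(model)
int8_converter.optimizations = [tf.lite.Optimize.DEFAULT]
int8_converter.representative_dataset = representative_data
int8_converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_int8_model = int8_converter.convert()
```

Both `convert()` calls return the serialized flatbuffer as `bytes`, ready to write to a `.tflite` file.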
