Update quantization-aware-training-pytorch.rst (openvinotoolkit#26414)
The suggested learning rate in this document seems wrong. It should be
1e-5 rather than 10e-5.

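For reference, 10e-5 is scientific notation for 10 × 10⁻⁵ = 1e-4, so the old value was ten times larger than the 1e-5 the guide intends. A quick check in a Python REPL:

>>> 10e-5
0.0001
>>> 1e-5
1e-05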
Huairui authored Sep 6, 2024
1 parent 1b98e2d commit 626571d
Showing 1 changed file with 1 addition and 1 deletion.
@@ -18,7 +18,7 @@ Quantize the model using the :doc:`Post-Training Quantization <../quantizing-mod
 2. Fine-tune the Model
 ########################
 
-This step assumes applying fine-tuning to the model the same way it is done for the baseline model. For QAT, it is required to train the model for a few epochs with a small learning rate, for example, 10e-5.
+This step assumes applying fine-tuning to the model the same way it is done for the baseline model. For QAT, it is required to train the model for a few epochs with a small learning rate, for example, 1e-5.
 Quantized models perform all computations in floating-point precision during fine-tuning by modeling quantization errors in both forward and backward passes.
 
 .. doxygensnippet:: docs/optimization_guide/nncf/code/qat_torch.py
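The referenced qat_torch.py snippet is not reproduced on this page, so here is a minimal sketch of the fine-tuning step it describes: a standard PyTorch training loop over the already-quantized model with the small 1e-5 learning rate. The model, loss, optimizer choice, data, and epoch count below are illustrative placeholders, not the contents of the actual snippet.

import torch
import torch.nn as nn

# Stand-in for the model returned by NNCF quantization; placeholder only.
quantized_model = nn.Linear(8, 2)
criterion = nn.CrossEntropyLoss()
# The small learning rate the doc now recommends: 1e-5 (not 10e-5).
optimizer = torch.optim.Adam(quantized_model.parameters(), lr=1e-5)

# Dummy data; in practice this is the original training set.
inputs = torch.randn(32, 8)
targets = torch.randint(0, 2, (32,))

# Fine-tune for a few epochs, as the updated text suggests.
for epoch in range(3):
    optimizer.zero_grad()
    loss = criterion(quantized_model(inputs), targets)
    # Quantization error is modeled in both the forward and backward passes.
    loss.backward()
    optimizer.step()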
