Fix FQ mask in 8da4w QAT
andrewor14 committed May 1, 2024
1 parent 6ae2c0b commit fec8510
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion torchao/quantization/prototype/qat.py
@@ -183,7 +183,7 @@ def forward(ctx, input, scales, zero_points, quant_min, quant_max):
     q = input.div(scales).add(zero_points).round()
     dq = q.clamp(quant_min, quant_max).sub(zero_points).mul(scales)
     # TODO: do we need this mask?
-    mask = torch.logical_and((q >= quant_min), (dq <= quant_max))
+    mask = torch.logical_and((q >= quant_min), (q <= quant_max))
     ctx.save_for_backward(mask)
     return dq

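The saved mask drives the backward pass as a straight-through estimator: gradients flow only where the integer value q landed inside [quant_min, quant_max], and are zeroed where clamping clipped it. The old expression compared dq, which has already been clamped and rescaled back to floating point, against the integer bound quant_max, so it could not detect clipping. Below is a minimal, self-contained sketch of the pattern; the commit only shows the forward, so the class name, the backward implementation, and the usage snippet are illustrative assumptions, with only the mask line taken from the fix.

    import torch

    class FakeQuantizeSketch(torch.autograd.Function):
        # Sketch of a fake-quantize op with a straight-through estimator.
        # Only the mask line mirrors the committed fix; the rest is
        # an assumed reconstruction for illustration.

        @staticmethod
        def forward(ctx, input, scales, zero_points, quant_min, quant_max):
            q = input.div(scales).add(zero_points).round()
            dq = q.clamp(quant_min, quant_max).sub(zero_points).mul(scales)
            # Fixed mask: compare the integer values q against the integer
            # bounds, not dq, which is already clamped and back in float range.
            mask = torch.logical_and(q >= quant_min, q <= quant_max)
            ctx.save_for_backward(mask)
            return dq

        @staticmethod
        def backward(ctx, grad_output):
            # Straight-through estimator: pass gradients where the value
            # quantized in range, zero them where it was clipped.
            (mask,) = ctx.saved_tensors
            return grad_output * mask, None, None, None, None

A quick usage check, with per-row scales and a 4-bit integer range:

    x = torch.randn(4, 8, requires_grad=True)
    scales = torch.full((4, 1), 0.1)
    zero_points = torch.zeros(4, 1)
    y = FakeQuantizeSketch.apply(x, scales, zero_points, -8, 7)
    y.sum().backward()  # x.grad is 1 where in range, 0 where clipped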
