Fix FQ mask in 8da4w QAT (#199)
Co-authored-by: Jerry Zhang <jerryzh168@gmail.com>
andrewor14 and jerryzh168 authored May 3, 2024
1 parent 5364de6 commit be30a7f
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion torchao/quantization/prototype/qat.py
@@ -183,7 +183,7 @@ def forward(ctx, input, scales, zero_points, quant_min, quant_max):
         q = input.div(scales).add(zero_points).round()
         dq = q.clamp(quant_min, quant_max).sub(zero_points).mul(scales)
         # TODO: do we need this mask?
-        mask = torch.logical_and((q >= quant_min), (dq <= quant_max))
+        mask = torch.logical_and((q >= quant_min), (q <= quant_max))
         ctx.save_for_backward(mask)
         return dq
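For context, the saved mask gates the gradient in the backward pass (straight-through estimator): elements that were clamped in the forward pass should receive no gradient. The old check compared dq, a dequantized floating-point value, against quant_max, an integer-domain bound; the fix compares q, which lives in the integer domain where the clamp actually happens. Below is a minimal, self-contained sketch of this pattern. The forward mirrors the lines shown in the diff, but the class name, the backward pass, and the usage example are illustrative assumptions, not the actual torchao implementation.

import torch

class FakeQuantizeSketch(torch.autograd.Function):
    # Illustrative sketch only: the forward mirrors the diff above; the
    # class name and backward pass are assumptions, not torchao's code.

    @staticmethod
    def forward(ctx, input, scales, zero_points, quant_min, quant_max):
        q = input.div(scales).add(zero_points).round()
        dq = q.clamp(quant_min, quant_max).sub(zero_points).mul(scales)
        # Mask in the integer domain: True exactly where no clamping occurred.
        mask = torch.logical_and((q >= quant_min), (q <= quant_max))
        ctx.save_for_backward(mask)
        return dq

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only through unclamped
        # elements; scales, zero_points, and bounds get no gradient here.
        return grad_output * mask, None, None, None, None

# Example: gradients are zeroed wherever the forward pass clamped.
x = torch.randn(4, 8, requires_grad=True)
scales = torch.full((4, 1), 0.05)
zero_points = torch.zeros(4, 1)
out = FakeQuantizeSketch.apply(x, scales, zero_points, -8, 7)
out.sum().backward()  # x.grad is 1.0 where q stayed in [-8, 7], else 0.0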
