fold quantize in convert
Differential Revision: D61814397

Pull Request resolved: pytorch#4889
mcr229 authored Aug 28, 2024
Commit 5395ae6 (parent: 69472e5)
1 changed file: examples/models/phi-3-mini/export_phi-3-mini.py (1 addition, 1 deletion)
@@ -69,7 +69,7 @@ def export(args) -> None:
 )
 model = prepare_pt2e(model, xnnpack_quantizer) # pyre-fixme[6]
 model(*example_inputs)
-model = convert_pt2e(model, fold_quantize=False)
+model = convert_pt2e(model)
 DuplicateDynamicQuantChainPass()(model)
 # TODO(lunwenh): update it to use export once
 # https://github.com/pytorch/pytorch/issues/128394 is resolved.
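For context, this is the PT2E quantization flow the exporter runs around the changed line; dropping fold_quantize=False means convert_pt2e now uses its default behavior of folding quantize ops on weights into quantized constants. Below is a minimal sketch of that flow with a toy module standing in for phi-3-mini; the toy model, the capture call, and the quantizer settings are illustrative assumptions, not taken from this diff.

# Minimal sketch of the PT2E quantization flow around the changed line.
# The toy module, capture call, and quantizer settings are assumptions;
# only the prepare_pt2e / convert_pt2e usage mirrors the diff.
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)


class TinyModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)


example_inputs = (torch.randn(1, 16),)

# Capture a pre-autograd graph (the TODO in the file tracks moving to torch.export).
model = capture_pre_autograd_graph(TinyModel().eval(), example_inputs)

# Dynamic, symmetric per-channel quantization is a common LLM config (assumption).
xnnpack_quantizer = XNNPACKQuantizer()
xnnpack_quantizer.set_global(
    get_symmetric_quantization_config(is_per_channel=True, is_dynamic=True)
)

model = prepare_pt2e(model, xnnpack_quantizer)  # insert observers
model(*example_inputs)                          # calibrate on sample inputs
model = convert_pt2e(model)                     # default fold_quantize=True now applies
# The exporter then runs DuplicateDynamicQuantChainPass()(model) before lowering.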
