Add torch-fuse-quantized-ops pass to the torch-to-iree pipeline (iree-org#17908)

The torch-to-iree pipeline currently does not run
`--torch-fuse-quantized-ops`, which causes significant discrepancies
between model tests compiled with iree-compile directly from Torch IR and
model tests that first lower to linalg with torch-mlir before compiling.
Alongside `--torch-fuse-quantized-ops`, the newer pass
`--torch-scalarize-shapes` is also added to the `torch-to-iree` pipeline to
keep it in line with
`--torch-backend-to-linalg-on-tensors-backend-pipeline`.
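To illustrate why fusing quantized ops matters, here is a conceptual sketch in plain NumPy (not the actual MLIR pass, and all function names are illustrative): a quantized matmul can be computed either by dequantizing both operands to float first, or by accumulating directly on the integer values and applying the combined scale once at the end. A fusion pass rewrites the first form into the second, which keeps the computation in the integer domain.

```python
import numpy as np

def dequantize(q, scale, zero_point):
    # Map int8 values back to float: (q - zero_point) * scale
    return (q.astype(np.float32) - zero_point) * scale

def unfused_path(qa, qb, sa, za, sb, zb):
    # Unfused form: dequantize both operands, then matmul in float.
    return dequantize(qa, sa, za) @ dequantize(qb, sb, zb)

def fused_path(qa, qb, sa, za, sb, zb):
    # Fused form: accumulate in int32, apply the combined scale once.
    acc = (qa.astype(np.int32) - za) @ (qb.astype(np.int32) - zb)
    return acc.astype(np.float32) * (sa * sb)

rng = np.random.default_rng(0)
qa = rng.integers(-128, 127, size=(4, 8), dtype=np.int8)
qb = rng.integers(-128, 127, size=(8, 3), dtype=np.int8)

unfused = unfused_path(qa, qb, 0.02, 3, 0.05, -1)
fused = fused_path(qa, qb, 0.02, 3, 0.05, -1)
assert np.allclose(unfused, fused)  # same math, different numerics/codegen
```

The two forms are mathematically equivalent, but they lower very differently: whether the fusion runs before conversion to linalg determines which form the backend sees, hence the discrepancies described above between the two compilation routes.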

---------

Signed-off-by: zjgarvey <zjgarvey@gmail.com>
Signed-off-by: Lubo Litchev <lubol@google.com>
zjgarvey authored and LLITCHEV committed Jul 30, 2024
1 parent 2f672aa commit 7feb0b2
Showing 1 changed file with 2 additions and 0 deletions.
compiler/plugins/input/Torch/InputConversion/Passes.cpp
@@ -49,6 +49,8 @@ void createTorchToIREEPipeline(
       mlir::torch::TorchConversion::createConvertCustomQuantOpPass());
   pm.addNestedPass<func::FuncOp>(
       torch::Torch::createDecomposeComplexOpsPass(emptyArrayRef));
+  pm.addNestedPass<func::FuncOp>(torch::Torch::createFuseQuantizedOpsPass());
+  pm.addNestedPass<func::FuncOp>(torch::Torch::createScalarizeShapesPass());
   pm.addNestedPass<func::FuncOp>(torch::createConvertTorchToTMTensorPass());
   pm.addNestedPass<func::FuncOp>(
       TorchInput::createConvertTMTensorToLinalgExtPass());
