bart-large model support #225
@bjacob In iree code, what's the difference between:

Here is the link to download all the mlir files:
- bart-large.default.torch-onnx.mlir
- bart-large.default.pytorch.torch.mlir
- bart-large.default.pytorch.linalg.mlir
Sorry, I'm unfamiliar with torch-mlir and probably not the best person to help here, but here is my best attempt, looking into each error specifically.

Run without --torchtolinalg, we get iree issue 1:

```
bart-large.default.pytorch.torch.mlir:466:12: error: 'linalg.generic' op inferred input/output operand #1 has shape's dimension #1 to be 16, but found 1
  %378 = torch.aten.add.Tensor %377, %239, %int1_71 : !torch.vtensor<[1,16,9,9],f32>, !torch.vtensor<[?,?,9,9],f32>, !torch.int -> !torch.vtensor<[?,16,9,9],f32>
         ^
bart-large.default.pytorch.torch.mlir:466:12: note: see current operation:
%748 = "linalg.generic"(%746, %642, %747) <{indexing_maps = [affine_map<(d0, d1, d2, d3) -> (0, d1, d2, d3)>, affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>, affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>], iterator_types = [#linalg.iterator_type<parallel>, #linalg.iterator_type<parallel>, #linalg.iterator_type<parallel>, #linalg.iterator_type<parallel>], operandSegmentSizes = array<i32: 2, 1>}> ({
^bb0(%arg1: f32, %arg2: f32, %arg3: f32):
  %2708 = "arith.addf"(%arg1, %arg2) <{fastmath = #arith.fastmath<none>}> : (f32, f32) -> f32
  "linalg.yield"(%2708) : (f32) -> ()
}) : (tensor<1x16x9x9xf32>, tensor<1x1x9x9xf32>, tensor<1x16x9x9xf32>) -> tensor<1x16x9x9xf32>
```

This is noting that the value %642 : tensor<1x1x9x9xf32> is operand #1 of the linalg.generic, and that it uses the identity indexing map affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>, so its dimension #1 must equal the loop size d1 = 16 implied by the other operands, but it is actually 1.
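The shape check that fails here can be sketched in plain Python. This is a minimal, simplified model of linalg.generic's shape verifier, not torch-mlir or IREE API; `verify_generic` and the loop-dimension encoding are hypothetical illustrations. Each operand's indexing map says which loop dimension each tensor dimension reads, and every use of a loop dimension must agree on its size:

```python
# Hypothetical, simplified model of the linalg.generic shape verifier.
# An indexing map is encoded as a tuple of loop-dim indices, with None
# standing for a constant-0 affine expression (a broadcast dimension,
# which imposes no size constraint).

def verify_generic(operands):
    """operands: list of (shape, indexing_map) pairs, outputs included."""
    loop_sizes = {}  # loop dim -> size seen so far
    for op_idx, (shape, imap) in enumerate(operands):
        for dim_idx, loop_dim in enumerate(imap):
            if loop_dim is None:  # constant 0 in the map: broadcast, skip
                continue
            size = shape[dim_idx]
            if loop_dim in loop_sizes and loop_sizes[loop_dim] != size:
                return (f"operand #{op_idx} has shape's dimension #{dim_idx} "
                        f"to be {loop_sizes[loop_dim]}, but found {size}")
            loop_sizes[loop_dim] = size
    return "ok"

# The failing op: operand #0 uses map (0, d1, d2, d3); operand #1 (%642)
# and the output use the identity map (d0, d1, d2, d3).
ops = [((1, 16, 9, 9), (None, 1, 2, 3)),  # %746
       ((1, 1, 9, 9),  (0, 1, 2, 3)),     # %642: dim #1 is 1, loop wants 16
       ((1, 16, 9, 9), (0, 1, 2, 3))]     # %747 (output)
print(verify_generic(ops))
# -> operand #1 has shape's dimension #1 to be 16, but found 1
```

Giving operand #1 a broadcasting map such as (d0, 0, d2, d3), i.e. `(0, 1, 2, 3)` with the second entry None in this encoding, would make the check pass.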
So the size of the dimension indexed by d1 is inconsistent: the identity indexing map on operand #1 requires its dimension #1 to be 16, but its type tensor<1x1x9x9xf32> has size 1 there, so a broadcasting map (with a constant 0 on d1) would have been needed. Since this linalg.generic is the lowering of the torch.aten.add.Tensor above, the bug is presumably in that lowering.

Run with --torchtolinalg, we get iree issue 2:

```
bart-large.default.pytorch.linalg.mlir:333:11: error: 'tensor.expand_shape' op expected dimension 0 of collapsed type to be dynamic since one or more of the corresponding dimensions in the expanded type is dynamic
  %27 = linalg.generic {indexing_maps = [#map10, #map5], iterator_types = ["parallel", "parallel", "parallel"]} ins(%26 : tensor<?x9x1xf32>) outs(%24 : tensor<?x9x1xf32>) {
        ^
bart-large.default.pytorch.linalg.mlir:22:3: note: called from
func.func @main_graph(%arg0: tensor<1x9xi64>) -> (tensor<1x9x50265xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>, tensor<1x16x9x64xf32>) {
  ^
bart-large.default.pytorch.linalg.mlir:333:11: note: see current operation: %254 = "tensor.expand_shape"(%253) <{reassociation = [[0, 1]]}> : (tensor<9xf32>) -> tensor<?x9xf32>
```

In this error message, the dumped input is already linalg:

```
%27 = linalg.generic {indexing_maps = [#map10, #map5], iterator_types = ["parallel", "parallel", "parallel"]} ins(%26 : tensor<?x9x1xf32>) outs(%24 : tensor<?x9x1xf32>) {
```

But the error says that its lowering includes an ill-formed tensor.expand_shape:

```
bart-large.default.pytorch.linalg.mlir:333:11: error: 'tensor.expand_shape' op expected dimension 0 of collapsed type to be dynamic since one or more of the corresponding dimensions in the expanded type is dynamic
[...]
bart-large.default.pytorch.linalg.mlir:333:11: note: see current operation: %254 = "tensor.expand_shape"(%253) <{reassociation = [[0, 1]]}> : (tensor<9xf32>) -> tensor<?x9xf32>
```

Indeed, the 1D static shape tensor<9xf32> is being expanded into tensor<?x9xf32>, whose leading dimension is dynamic; the verifier requires that if any expanded dimension in a reassociation group is dynamic, the corresponding collapsed dimension must be dynamic too. You could find which pass created it by rerunning the compilation with IR dumping enabled (for example --mlir-print-ir-after-all).
@Shukla-Gaurav With the iree-compiler that bumps torch-mlir to the onnx.resize patch (iree-org/iree#17358):
torchtolinalg pipeline issue
Related iree issues: iree-org/iree#17021
The following two runs should generate the same issue, but it turns out they do not.

Option 1: Run without --torchtolinalg, get iree issue 1:

```
python ./run.py --torchmlirbuild ../../torch-mlir/build --tolerance 0.001 0.001 --cachedir ./huggingface_cache --ireebuild ../../iree-build -f pytorch -g models --mode onnx --report --tests pytorch/models/bart-large
```

Corresponding commands that run: the standalone torch-mlir-opt lowers the onnx dialect to the torch dialect, then iree-compile lowers the torch dialect to vm.
Option 2: Run with --torchtolinalg, get iree issue 2:

```
python ./run.py --torchmlirbuild ../../torch-mlir/build --tolerance 0.001 0.001 --cachedir ./huggingface_cache --ireebuild ../../iree-build -f pytorch -g models --mode onnx --report --tests pytorch/models/bart-large --torchtolinalg
```

Corresponding commands that run: the standalone torch-mlir-opt lowers the onnx dialect to the linalg dialect, then iree-compile lowers the linalg dialect to vm.
Here is the link to download all the mlir files:
- bart-large.default.torch-onnx.mlir
- bart-large.default.pytorch.torch.mlir
- bart-large.default.pytorch.linalg.mlir

https://onnxstorage.blob.core.windows.net/onnxstorage/bugcases/torchtolinalgpipelineissue.zip