
[Relay][Op] Trilu operator implementation #12124

Merged 7 commits into apache:main on Aug 2, 2022
Conversation

@jwfromm (Contributor) commented Jul 18, 2022

This PR adds a new operator that supports triangular masking, similar to np.triu and np.tril. The addition of relay.trilu conveniently lets us pass many of the remaining ONNX tests.
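For readers unfamiliar with the NumPy semantics being mirrored, a quick demonstration of the masking behavior (in the ONNX spec, Trilu unifies both operations behind an `upper` attribute):

```python
import numpy as np

# np.tril keeps elements on or below the k-th diagonal; np.triu keeps
# elements on or above it. Everything else is zeroed out.
x = np.arange(1, 10).reshape(3, 3)

lower = np.tril(x)       # main diagonal and below
upper = np.triu(x, k=1)  # strictly above the main diagonal

print(lower)
# [[1 0 0]
#  [4 5 0]
#  [7 8 9]]
print(upper)
# [[0 2 3]
#  [0 0 6]
#  [0 0 0]]
```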

@jwfromm (Contributor, Author) commented Jul 18, 2022

@sfvaroglu Can you take a look at this PR?

@sfvaroglu (Contributor)
LGTM! Thanks for doing this @jwfromm! cc @mikepapadim

@mikepapadim (Contributor)
Thanks @jwfromm! If this one is merged, then we should close the following as deprecated: #11761, #10873, and #9329.

@jwfromm force-pushed the trilu_op branch 2 times, most recently from 221026d to cc2864e on July 25, 2022
"test_tril_neg",
"test_tril_one_row_neg",
"test_tril_out_neg",
"test_tril_out_pos",
"test_tril_zero",
Contributor

I haven't looked at this op at all. How tricky would it be to support the zero case? Otherwise LGTM.

Contributor Author

It actually works on LLVM and CUDA. I was testing on my MacBook, and it seems like the Metal backend in general doesn't support empty tensors. I think for CI we could add these cases.
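For reference, the empty-tensor case under discussion is well defined at the NumPy level; the masked result simply keeps the empty shape:

```python
import numpy as np

# Triangular masking of a zero-size tensor is well defined in NumPy:
# the output has the same (empty) shape as the input.
empty = np.zeros((0, 5), dtype="float32")

print(np.tril(empty).shape)     # (0, 5)
print(np.triu(empty, 2).shape)  # (0, 5)
```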

@jwfromm (Contributor, Author) commented Jul 26, 2022

Seems like it also doesn't work with nvptx, due to the same issue with empty tensors. I'll add them here and see how it does in CI.

@shingjan (Contributor) left a comment

Overall LGTM. Thanks for sending this in! Our PyTorch frontend could use this new Trilu op as well. Just one nit.
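For context, torch.tril and torch.triu are the PyTorch ops such a frontend mapping would cover; both take a diagonal offset. A minimal NumPy sketch of a unified trilu(x, k, upper) reference follows — the function name and structure here are illustrative only, not the actual TOPI/Relay implementation:

```python
import numpy as np

def trilu_ref(x, k=0, upper=False):
    """Reference triangular masking: keep the upper or lower triangle
    relative to the k-th diagonal and zero out the rest.
    Illustrative sketch only; not the actual TOPI implementation."""
    rows, cols = x.shape[-2], x.shape[-1]
    i = np.arange(rows)[:, None]   # row indices, column vector
    j = np.arange(cols)[None, :]   # column indices, row vector
    mask = (j >= i + k) if upper else (j <= i + k)
    return np.where(mask, x, np.zeros_like(x))

x = np.arange(1, 10).reshape(3, 3)
assert np.array_equal(trilu_ref(x, upper=True), np.triu(x))
assert np.array_equal(trilu_ref(x, k=-1), np.tril(x, k=-1))
```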

python/tvm/relay/op/transform.py (comment resolved)
@jwfromm (Contributor, Author) commented Jul 27, 2022

I added PyTorch testing and integration. Thanks for the recommendation @shingjan.

@jwfromm (Contributor, Author) commented Jul 27, 2022

Unfortunately, the empty tensor tests still fail on CI GPUs. I'm not sure why; it doesn't seem directly related to this PR, so I'm going to re-enable skips for those tests.

@jwfromm (Contributor, Author) commented Aug 2, 2022

@mbrookhart I think this is ready to merge.

@mbrookhart merged commit b8893b5 into apache:main on Aug 2, 2022
@mbrookhart (Contributor)

Thanks @jwfromm @sfvaroglu @mikepapadim @shingjan

xinetzone pushed a commit to daobook/tvm that referenced this pull request Nov 25, 2022
* Added topi trilu implementation

* Implemented and tested full Trilu op.

* Fix test type.

* Add tril zero tests.

* Add pytorch trilu integration.

* Clean up torch integration.

* Readded skip for zero tests.
@jwfromm deleted the trilu_op branch on April 12, 2023
5 participants