
Paged attention fusion issues #19893

Open
4 tasks
IanWood1 opened this issue Feb 3, 2025 · 3 comments
Comments

IanWood1 (Contributor) commented Feb 3, 2025

We want to be able to fuse the IR in https://gist.github.com/Groverkss/36503c2f3163a23b2136c81170c9143c, but several reshape problems are blocking this.

  • (1) Fold reshape ops with hal.tensor.barrier, and don't bubble reshapes used by hal.tensor.export ops. If the inputs/outputs contain unit dims, those unit dims get propagated throughout the program. For example:
```mlir
%expanded = tensor.expand_shape %19 [[0, 1], [2], [3]] output_shape [1, 64, 10, 128] : tensor<64x10x128xf16> into tensor<1x64x10x128xf16>
%20 = hal.tensor.barrier join(%expanded : tensor<1x64x10x128xf16>) => %arg5 : !hal.fence
%21 = hal.tensor.export %20 : tensor<1x64x10x128xf16> -> !hal.buffer_view
%collapsed_0 = tensor.collapse_shape %13 [[0, 1], [2], [3, 4]] : tensor<16x4x128x128x32xf16> into tensor<64x128x4096xf16>
%expanded_1 = tensor.expand_shape %collapsed_0 [[0, 1], [2], [3]] output_shape [4, 16, 128, 4096] : tensor<64x128x4096xf16> into tensor<4x16x128x4096xf16>
```
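For illustration, here is one shape the fold in (1) could take. This is a hand-written sketch of the desired post-fold IR, not actual pass output: the barrier joins the un-expanded tensor, and the unit-dim expand_shape is kept directly at the export boundary so the leading 1 dim never propagates into the program body.

```mlir
// Sketch only: a possible result of folding the reshape with the barrier.
// The barrier now joins the original tensor<64x10x128xf16>, and the
// unit-dim expand_shape feeds the export op directly instead of being
// bubbled back through the program.
%20 = hal.tensor.barrier join(%19 : tensor<64x10x128xf16>) => %arg5 : !hal.fence
%expanded = tensor.expand_shape %20 [[0, 1], [2], [3]] output_shape [1, 64, 10, 128]
    : tensor<64x10x128xf16> into tensor<1x64x10x128xf16>
%21 = hal.tensor.export %expanded : tensor<1x64x10x128xf16> -> !hal.buffer_view
```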

@MaheshRavishankar @Groverkss

MaheshRavishankar (Contributor)

@Groverkss or @manupak could one of you look at issue 4 here?

IanWood1 (Author) commented Feb 11, 2025

8b3ba8d "fixes" the problem with (4), but I'm unsure if it's correct in general.

IanWood1 (Author)

(4) should be complete after #19838 and #19962
