
[TOPI] [Hexagon] Batch flatten slice op initial version #11522

Merged — 12 commits, merged Jun 30, 2022

Conversation

@abhikran-quic (Contributor) commented Jun 1, 2022:

This patch adds the initial Python implementation of the batch flatten slice op for Hexagon.

Slice ops are ops that make certain assumptions about the input and output dimensions, and they are expected to be called after the original op has been sliced according to those dimensions at the graph level.
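For context, here is a minimal, self-contained sketch of the compute this patch builds on (per the discussion below, it reuses the stock topi.nn.flatten compute; the shapes here are illustrative only, and the Hexagon-specific schedule and axis separators added by this patch are omitted):

import numpy as np
import tvm
from tvm import te, topi

# Input assumed to be already sliced to a fixed shape at the graph level.
inp = te.placeholder((1, 2, 4, 1024), "float32", name="A")

# topi.nn.flatten collapses every dimension after the batch dimension,
# so (1, 2, 4, 1024) becomes (1, 8192).
out = topi.nn.flatten(inp)

# Quick sanity check with the default lowering on LLVM; on device this
# step would instead use the Hexagon slice-op schedule.
func = te.create_prim_func([inp, out])
mod = tvm.build(func, target="llvm")
a = tvm.nd.array(np.random.rand(1, 2, 4, 1024).astype("float32"))
b = tvm.nd.array(np.zeros((1, 8192), dtype="float32"))
mod(a, b)
np.testing.assert_allclose(b.numpy(), a.numpy().reshape(1, 8192))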

cc @Lunderberg @cconvey @mehrdadh

@github-actions bot requested a review from Lunderberg (June 1, 2022 10:51)
@github-actions bot requested a review from mehrdadh (June 1, 2022 16:03)
@abhikran-quic (Contributor, Author) commented:

Hi @Lunderberg, @cconvey: Could you please review this PR?

@Lunderberg (Contributor) left a comment:

Overall looks good, just some general comments and nitpicks.

Resolved review threads (outdated): tests/python/contrib/test_hexagon/test_batch_flatten.py (×3), tests/python/contrib/test_hexagon/infrastructure.py
@abhikran-quic (Contributor, Author) left a comment:

Thank you @Lunderberg for your review. I have addressed the comments.

Right now CI is failing because v69 support is not available in LLVM. Hopefully #11539 will address this, and then the tests will pass in CI:

tests/python/contrib/test_hexagon/test_batch_flatten.py::TestBatchFlatten::test_batch_flatten[float16-input_shape3-n11c-1024c-1d-input_axis_sep0-nc-1d-output_axis_sep0] 'hexagonv69' is not a recognized processor for this target (ignoring processor)
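For reference, the CPU string in a Hexagon target is forwarded to LLVM's Hexagon backend, and an LLVM build without v69 support prints the warning above. A minimal sketch of constructing such a target (the tests in this PR go through their own helpers, so the exact call site may differ):

import tvm

# "v69" becomes LLVM's -mcpu=hexagonv69; LLVM builds that predate v69
# support emit "'hexagonv69' is not a recognized processor".
target = tvm.target.hexagon("v69")
print(target)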

Resolved review threads (outdated): tests/python/contrib/test_hexagon/test_batch_flatten.py (×3)
@abhikran-quic (Contributor, Author) commented:

CI is passing now.

@abhikran-quic (Contributor, Author) commented:

Hi @Lunderberg, @cconvey: Could you please review this PR for any more comments?

@cconvey (Contributor) commented Jun 17, 2022:

@abhikran-quic: Apologies for being slow to reply. I'm hoping to review this early next week, if that's helpful.

@abhikran-quic force-pushed the batch_flatten_slice branch 2 times, most recently from 199bd85 to e24bbd6 (June 21, 2022 07:11)
@abhikran-quic (Contributor, Author) commented:

Hi @Lunderberg, @cconvey, @mehrdadh,
Can you please review this PR? It's ready to be merged from my side.

@cconvey (Contributor) commented Jun 23, 2022:

> Can you please review this PR? It's ready to be merged from my side.

Reviewing now.

@Lunderberg (Contributor) left a comment:

I took one more read-through after the latest commits and found a couple more nitpicky changes.

@abhikran-quic (Contributor, Author) left a comment:

Thank you @Lunderberg and @jverma-quic for your comments. I've addressed them in the latest commit.

Resolved review threads (outdated): python/tvm/topi/hexagon/utils.py, tests/python/contrib/test_hexagon/infrastructure.py
@Lunderberg (Contributor) left a comment:

Thank you for making the changes, and LGTM!

@cconvey (Contributor) left a comment:

LGTM! One minor comment, but feel free to ignore it. We can revisit the issue later if necessary.


from tvm import te, tir  # imports assumed from the surrounding test module

batch_flatten_func = te.create_prim_func([inp, out])  # lower the TE compute to a TIR PrimFunc
sch = tir.Schedule(batch_flatten_func, debug_mask="all")  # enable all schedule verification checks
compute = sch.get_block("compute")  # look up the block by its default name
Contributor:

I'm a bit suspicious about assuming that there's a block named "compute", as I don't see any promises in the documentation about the name and what it represents. But making assumptions like this seems somewhat idiomatic within TVM, so IMHO it's okay enough.

Contributor:

I completely agree with you and don't like it either. But there doesn't seem to be a way to specify a different block name for topi-defined ops. Please correct me if I'm wrong.
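For illustration, a minimal sketch (assuming current te/tir APIs) of how te.compute's name argument becomes the TIR block name; with the default name, as with the topi compute discussed here, the block ends up named "compute":

import tvm
from tvm import te, tir

A = te.placeholder((4, 8), "float32", name="A")

# te.compute's `name` argument becomes the TIR block name after
# create_prim_func; te.compute's default name is "compute".
B = te.compute(A.shape, lambda i, j: A[i, j] * 2.0, name="scaled")

sch = tir.Schedule(te.create_prim_func([A, B]))
block = sch.get_block("scaled")  # with the default name this would be "compute"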

@abhikran-quic (Contributor, Author) commented:

Since I'm reusing the batch_flatten compute from topi, the block name "compute" comes up. Sharing the schedule below:

@main = primfn(var_A: handle, var_compute: handle) -> ()
  attr = {"global_symbol": "main", "tir.noalias": True}
  buffers = {A: Buffer(A_1: Pointer(global float16), float16, [1, 1, 1, 2, 1024], [], axis_separators=[4]),
             compute: Buffer(compute_1: Pointer(global float16), float16, [1, 2, 1024], [], axis_separators=[2])}
  buffer_map = {var_A: A, var_compute: compute} {
  block([], "root") {
    tir.reads([])
    tir.writes([])
    for (i0: int32, 0, 1) {
      for (i1_0_0: int32, 0, 1) {
        for (i1_0_1: int32, 0, 1) {
          for (i1_1_0: int32, 0, 2) {
            for (i1_1_1_0: int32, 0, 16) {
              for (i1_1_1_1: int32, 0, 64) "vectorized" {
                block([1, 2048], "compute") as [i, j] {
                  bind(i, 0)
                  bind(j, (((i1_1_0*1024) + (i1_1_1_0*64)) + i1_1_1_1))
                  tir.reads([A[0, 0, 0, (j / 1024), (j % 1024)]])
                  tir.writes([compute[0, (j / 1024), (j % 1024)]])
                  compute[0, (j / 1024), (j % 1024)] = A[0, 0, 0, (j / 1024), (j % 1024)]
                }
              }
            }
          }
        }
      }
    }
  }
  }

Contributor:

👍

@abhikran-quic (Contributor, Author) commented:

@Lunderberg, @mehrdadh, @cconvey: Could you please merge this PR? I've fixed the merge conflicts.

@abhikran-quic (Contributor, Author) commented:

Gentle reminder! Please help merge this PR; I want to raise another PR that depends on this one.

@mehrdadh (Member) commented:

@abhikran-quic, please resolve the conflict by rebasing on main and pushing to this branch.

@abhikran-quic (Contributor, Author) commented:

> @abhikran-quic, please resolve the conflict by rebasing on main and pushing to this branch.

@mehrdadh: I have resolved the merge conflicts. Could you please review this?

@kparzysz-quic merged commit 915c23b into apache:main (Jun 30, 2022)
@abhikran-quic deleted the batch_flatten_slice branch (June 30, 2022 14:38)
blackkker pushed a commit to blackkker/tvm that referenced this pull request Jul 7, 2022
* [TOPI] [Hexagon] Batch flatten slice op initial version

* Fix lint errors

* Fix more lint errors

* Fix lint warnings

* Fix review comments

* Update tests to use util functions

* Update __init__.py

* Fix review comments
masahi pushed a commit to masahi/tvm that referenced this pull request Jul 15, 2022 (same commit list as above).
mikeseven pushed a commit to mikeseven/tvm that referenced this pull request Sep 27, 2023 (same commit list as above).