
[Testing] Utility method to run TVM on remote device #15179

Merged
merged 1 commit into from
Jun 29, 2023

Conversation

junrushao
Member

@junrushao junrushao commented Jun 29, 2023

This PR introduces `tvm.testing.rpc_run`, a utility method that runs a `runtime.Module` on a remote device via TVM RPC.

Example:

```python
import numpy as np
import tvm
from tvm.script import tir as T
from tvm.testing import rpc_run

@T.prim_func
def cuda_kernel(
    A: T.Buffer((128,), "float32"),
    B: T.Buffer((128,), "float32"),
):
    # 4 blocks of 32 threads cover all 128 elements
    for bx in T.thread_binding(4, thread="blockIdx.x"):
        for tx in T.thread_binding(32, thread="threadIdx.x"):
            x = bx * 32 + tx
            B[x] = A[x] + 1.0

def main():
    np_a = np.random.randn(128).astype("float32")
    np_b = np_a + 1.0

    rt_mod = tvm.build(cuda_kernel, target="nvidia/geforce-rtx-3090-ti")
    tvm_a, tvm_b = rpc_run(
        rt_mod,
        "cuda",
        [np_a, np_b],
    )

    assert np.allclose(tvm_b, np_b)

if __name__ == "__main__":
    main()
```

Result:

```
Execution time summary:
 mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
   0.0023       0.0023       0.0023       0.0023       0.0000
```
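For readers without a GPU or an RPC setup, the kernel's semantics can be checked with a TVM-free sketch: each of the 4 blocks of 32 threads handles exactly one element via the flattened index `x = bx * 32 + tx`, covering indices 0..127 once each. The function name `cuda_kernel_reference` below is a hypothetical helper for illustration, not part of the PR.

```python
import numpy as np

def cuda_kernel_reference(a: np.ndarray) -> np.ndarray:
    """Pure-numpy reference for B[x] = A[x] + 1.0 over 4 blocks x 32 threads."""
    b = np.empty_like(a)
    for bx in range(4):           # blockIdx.x
        for tx in range(32):      # threadIdx.x
            x = bx * 32 + tx      # flattened element index, 0..127
            b[x] = a[x] + 1.0
    return b

a = np.random.randn(128).astype("float32")
assert np.allclose(cuda_kernel_reference(a), a + 1.0)
```

This mirrors the check in the example above: `rpc_run` is expected to return device buffers whose host copies match the same `a + 1.0` reference.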

@tvm-bot
Collaborator

tvm-bot commented Jun 29, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

  • No users to tag found in teams: `testing`. See #10317 for details.

Generated by tvm-bot

@junrushao junrushao marked this pull request as ready for review June 29, 2023 04:15
Contributor

@MasterJH5574 MasterJH5574 left a comment

Thanks!

@MasterJH5574 MasterJH5574 merged commit 9710d81 into apache:main Jun 29, 2023
6 checks passed