
[ONNX] Enable GPU in ONNX importer tests #7438

Merged: 17 commits, Mar 30, 2021 (diff shown from 8 of the commits)
14 changes: 12 additions & 2 deletions src/target/source/codegen_c.cc
@@ -525,7 +525,18 @@ void CodeGenC::VisitExpr_(const DivNode* op, std::ostream& os) {  // NOLINT(*)
   PrintBinaryExpr(op, "/", os, this);
 }
 void CodeGenC::VisitExpr_(const ModNode* op, std::ostream& os) {  // NOLINT(*)
-  PrintBinaryExpr(op, "%", os, this);
+  if (op->dtype.is_int() || op->dtype.is_uint()) {
+    PrintBinaryExpr(op, "%", os, this);
+  } else {
+    ICHECK(op->dtype.is_float()) << "non integer or floating point datatype in Mod";
+    if (op->dtype.bits() == 32) {
+      PrintBinaryExpr(op, "fmodf", os, this);
+    } else if (op->dtype.bits() == 64) {
+      PrintBinaryExpr(op, "fmod", os, this);
+    } else {
+      ICHECK(false) << "non single or double precision floating point in Mod";
+    }
+  }
 }
 void CodeGenC::VisitExpr_(const MinNode* op, std::ostream& os) {  // NOLINT(*)
   PrintBinaryExpr(op, "min", os, this);
@@ -728,7 +739,6 @@ void CodeGenC::VisitStmt_(const StoreNode* op) {
   ICHECK(is_one(op->predicate)) << "Predicated store is not supported";
   arith::PVar<PrimExpr> base;
-
   if (arith::ramp(base, 1, t.lanes()).Match(op->index)) {
     std::string value = this->PrintExpr(op->value);
     this->PrintVecStore(op->buffer_var.get(), t, base.Eval(), value);
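The new codegen branch emits `fmodf`/`fmod` for float dtypes because C's `%` operator is only defined for integer operands. As an aside (not part of the patch itself): C's `fmod` truncates toward zero, so the remainder takes the sign of the dividend; Python's `math.fmod` mirrors that behavior, which makes the distinction easy to see:

```python
import math

# C's fmod (and the fmodf/fmod calls the codegen now emits) truncates
# toward zero, so the remainder takes the sign of the dividend.
# Python's % operator, by contrast, follows the sign of the divisor.
print(math.fmod(7.5, 2.0))   # 1.5
print(math.fmod(-7.5, 2.0))  # -1.5
print(-7.5 % 2.0)            # 0.5
```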
5 changes: 2 additions & 3 deletions tests/python/frontend/onnx/test_forward.py
@@ -67,15 +67,13 @@ def get_tvm_output(
     graph_def, input_data, target, ctx, output_shape=None, output_dtype="float32", opset=None
 ):
     """ Generic function to execute and get tvm output"""
-    target = "llvm"
-
     input_names, shape_dict = get_input_data_shape_dict(graph_def, input_data)

     mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
     with tvm.transform.PassContext(opt_level=1):
         graph, lib, params = relay.build(mod, target, params=params)

-    ctx = tvm.cpu(0)
     m = graph_runtime.create(graph, lib, ctx)
     # set inputs
     if isinstance(input_data, list):
@@ -1459,7 +1457,8 @@ def verify_argreduce(input_dim, op_name, axis=None, keepdims=None):
     verify_with_ort_with_inputs(model, [a_np1])


-@tvm.testing.uses_gpu
+# TODO (mbrookhart, electriclilies) Fix argmin on GPU and enable this test
+# @tvm.testing.uses_gpu
 def test_forward_arg_min_max():
     """Verify argmin and argmax"""
     verify_argreduce([3, 4, 4], "ArgMin")
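With the hard-coded `target = "llvm"` and `ctx = tvm.cpu(0)` lines removed, `get_tvm_output` honors whatever target and context the caller passes. A minimal sketch of the calling pattern this enables (plain Python with the TVM calls stubbed out; `enabled_targets` here is a stand-in for `tvm.testing.enabled_targets()`):

```python
def enabled_targets():
    # Stand-in for tvm.testing.enabled_targets(), which yields
    # (target, context) pairs for every target enabled in the build.
    return [("llvm", "cpu(0)"), ("cuda", "gpu(0)")]

def get_output(graph_def, target, ctx):
    # Placeholder for relay.build + graph_runtime.create + module.run.
    return (graph_def, target, ctx)

# Because the helper no longer pins llvm/cpu, one test body can sweep
# every enabled target instead of silently running on CPU only.
results = [get_output("model.onnx", t, c) for t, c in enabled_targets()]
print(results)
```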
3 changes: 3 additions & 0 deletions tests/scripts/task_python_frontend.sh
@@ -31,6 +31,9 @@ find . -type f -path "*.pyc" | xargs rm -f
 # Rebuild cython
 make cython3

+# Only run GPU enabled tests on GPU
+export PYTEST_ADDOPTS="-m gpu $PYTEST_ADDOPTS"
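The exported option is pytest's marker filter: `-m gpu` deselects every test that does not carry a `gpu` marker. A small illustration of what the expression keys on (the `pytest.mark.gpu` spelling below is an assumption about how `@tvm.testing.uses_gpu` marks tests, not code from this PR):

```python
import pytest

# Sketch: with "pytest -m gpu", only tests carrying the gpu marker are
# collected; unmarked tests are deselected.
@pytest.mark.gpu
def test_gpu_only():
    assert True  # selected by: pytest -m gpu

def test_cpu_fallback():
    assert True  # deselected by: pytest -m gpu
```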
@masahi (Member) commented, Mar 18, 2021:

Are we sure we want this? I think this is skipping a lot of tests that were previously running.

It seems there are many tests that don't use @tvm.testing.uses_gpu, e.g.

def test_detection_models():
(Member):

Probably we should use the targets option in verify_with_ort_with_inputs for now. See for example:

verify_with_ort_with_inputs(
    model, [indata], targets=["llvm"], dtype="int64", use_vm=True, opset=9
)

(we should test this function on GPU, btw)

@electriclilies (Contributor, Author) commented, Mar 18, 2021:

Right now, all the tests are running on GPU, which causes the argmin/argmax ONNX test to fail. The ONNX test file uses @tvm.testing.uses_gpu to indicate whether we should run tests on GPU or not, but right now those decorators don't actually do anything, which is definitely not good. I think we should move toward enabling @tvm.testing.uses_gpu in the long run.

What if I set PYTEST_ADDOPTS before the ONNX tests and reset it after? Since the file uses @tvm.testing.uses_gpu a lot, I think we should either remove the decorators or let them actually do something.

(Contributor, Author):

Overall, I think that hardcoding targets is bad because it's not easy to see -- that's what caused none of the ONNX tests to be run on GPU in the first place.

@masahi (Member) commented, Mar 18, 2021:

OK, surprised to hear that uses_gpu doesn't do anything. Let's make it

PYTEST_ADDOPTS="-m gpu $PYTEST_ADDOPTS" run_pytest cython python-frontend-onnx tests/python/frontend/onnx

This should only modify env vars when running the ONNX test.

Also, can you go through the TODO(mbrookhart) comments in the ONNX frontend test and add uses_gpu? There are about 10-15 of them:

# TODO(mbrookhart): enable once VM supports heterogenous execution

(Contributor):

Doh! I thought I had removed all of those comments, but I must have only done it for the dynamic test files and missed the ONNX file.

(Contributor):

I can open a separate PR to remove these.

(Contributor, Author):

I already did it!


echo "Running relay MXNet frontend test..."
TVM_PYTHON_FFI_TYPES=cython run_pytest python-frontend-mxnet tests/python/frontend/mxnet
