void TVMBroadcastCompute(const nnvm::NodeAttrs& attrs,
                         const mxnet::OpContext& ctx,
                         const std::vector<TBlob>& inputs,
                         const std::vector<OpReqType>& req,
Why is req ignored?
Just an example to show how it works
I mean, does TVM have good support for all kinds of OpReqType, like inplace and addto?
Good question. I don't have a good answer to it for now.
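For context, MXNet's OpReqType tells a kernel how to combine its result with the output buffer, which is what the reviewer is asking about. A minimal pure-Python sketch of those semantics (the constants mirror MXNet's enum values, but `apply_req` is an illustrative helper, not MXNet code):

```python
# Illustrative sketch of MXNet's OpReqType semantics; not actual MXNet code.
# kNullOp: skip the write; kWriteTo/kWriteInplace: overwrite; kAddTo: accumulate.
K_NULL_OP, K_WRITE_TO, K_WRITE_INPLACE, K_ADD_TO = range(4)

def apply_req(out, result, req):
    """Combine a kernel's `result` into the output buffer `out` per `req`."""
    if req == K_NULL_OP:
        return out                      # nothing to write
    if req in (K_WRITE_TO, K_WRITE_INPLACE):
        out[:] = result                 # plain overwrite
    elif req == K_ADD_TO:
        for i, v in enumerate(result):  # accumulate into the existing buffer
            out[i] += v
    return out
```

A TVM-generated kernel that always overwrites its output would silently break kAddTo (used to accumulate gradients), which is why an ignored req argument is worth flagging.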
@@ -734,3 +734,8 @@ def write_all_str(module_file, module_all_list):

ctypes.pythonapi.PyCapsule_New.restype = ctypes.py_object
ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p

from .runtime import Features
if Features().is_enabled("TVM_OP"):
It seems TVM is only supported from Python because of this code. Shall we load libtvmop automatically along with libmxnet? Then we wouldn't need extra changes in each front-end language.
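The Python-side check being discussed boils down to: only attempt to load libtvmop when the binary was built with the TVM_OP feature. A standalone sketch of that optional-dependency pattern (the function name and library path are placeholders, not MXNet's real API):

```python
import ctypes
import os

def load_optional_lib(path, feature_enabled):
    """Load a shared library only when the corresponding build feature is on.

    Returns the ctypes handle, or None when the feature is disabled or the
    library file is absent, so callers can degrade gracefully.
    """
    if not feature_enabled or not os.path.exists(path):
        return None
    return ctypes.CDLL(path)
```

Loading the library from the C++ side at libmxnet load time, as suggested, would move this logic out of each language binding entirely.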
It should be doable, though it might need non-trivial surgery to both. cc @tqchen if you have insight.
Sorry Yizhi, but I'm still waiting for CMake support. We keep making more and more changes to the Makefile.
Thus, before we proceed, please add CMake support.
@marcoabreu CMake support added
src/operator/tvmop/op_module.cc
Outdated
  *module_ptr_ = module;
}

inline PackedFunc GetFunction(const std::shared_ptr<Module> &module,
Please, no inline.
Please review again @tqchen @marcoabreu @ZhennanQin @larroy
Approved regarding the build system.
contrib/tvmop/opdef.py
Outdated
----------
name : str
    function name
target : bool
bool -> str
@@ -185,6 +189,9 @@ enum : unsigned {
  SIGNAL_HANDLER,
  DEBUG,

  // TVM operator
  TVM_OP,
great!
LGTM % minor comments
LGTM as the first integration PR. We also need incremental PRs to address the issues below:
- Support OpReqType
- Support the Windows platform
- Support all front-end languages
- Propagate MXNet features to libtvmop
@mxnet-label-bot add [Backend, Operator, pr-awaiting-response]
LGTM
LGTM
.set_attr<mxnet::FCompute>("FCompute<cpu>", mxnet::op::TVMBroadcastCompute<func_vadd_cpu>)
#if MXNET_USE_CUDA
.set_attr<mxnet::FCompute>("FCompute<gpu>", mxnet::op::TVMBroadcastCompute<func_vadd_gpu>);
#endif  // MXNET_USE_CUDA
There will be a missing semicolon if MXNET_USE_CUDA is off.
We may change it to this:
#if MXNET_USE_CUDA
.set_attr<mxnet::FCompute>("FCompute<gpu>", mxnet::op::TVMBroadcastCompute<func_vadd_gpu>)
#endif // MXNET_USE_CUDA
.set_attr<mxnet::FCompute>("FCompute<cpu>", mxnet::op::TVMBroadcastCompute<func_vadd_cpu>);
* infra to use tvm to write op kernels
* add cmake support for tvm op
* fix header lint
* cleanup USE_TVM_OP logic in Makefile
* add doc, cmake def, etc.
* fix doc
* test rand shape
* add with_seed to test
* improve err msg. add #if
Description
This PR implements an infra that lets users write op kernels in pure Python. One broadcast operator example is included in this PR.
More background and discussion can be found here: #15465
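For readers unfamiliar with the example operator: a broadcast add repeats a size-1 operand to match the other operand's shape before adding, NumPy-style. A minimal pure-Python model of that behavior for 1-D inputs (illustrative only; the PR's actual kernel is generated by TVM):

```python
def broadcast_add_1d(a, b):
    """NumPy-style broadcast add for 1-D lists: a size-1 operand is repeated."""
    n = max(len(a), len(b))
    if len(a) not in (1, n) or len(b) not in (1, n):
        raise ValueError("shapes are not broadcastable")
    return [
        (a[0] if len(a) == 1 else a[i]) + (b[0] if len(b) == 1 else b[i])
        for i in range(n)
    ]
```

Writing this same computation as a TVM compute declaration is what lets the infra generate both the CPU and GPU kernels (func_vadd_cpu / func_vadd_gpu in the registration snippet above) from one Python definition.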