[TIR][REFACTOR][API-CHANGE] Change Call.name to Call.op(RelayExpr) (#5863)

* [TIR][REFACTOR][API-CHANGE] Change Call.name(string) to Call.op(tvm::Op/RelayExpr)

This PR brings a major refactor to the tir::Call structure.
The current Call structure uses a string field (name) to identify the
function/intrinsic being called. This approach is limited as we start
to expand TIR to be more structured. In particular, we are interested in
the following aspects:

- Type a function and perform better compile time type checking so that we
  can find errors early.
- Register additional properties about an operator, such as:
  - Whether an intrinsic can be vectorized
  - What the adjoint function of the intrinsic is (for tensor expression AD)
  - Whether the operator has side effects.
- Perform specific codegen about an intrinsic if necessary.
- Call into another function in the same module.

The refactor changes the Call.name field to Call.op.
The Call.op field has a RelayExpr type, and we can pass:

- A tvm::Op which represents the corresponding intrinsic.
- A tvm::GlobalVar for calling into another function in the IRModule (see the sketch below).
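
As a rough illustration of the two cases (not part of this commit), a Python-side construction could look like the sketch below; the dtype, variable names, and the `update` GlobalVar are placeholders, and depending on the TVM version `tir.Call` may also expect an extra `call_type` argument:

```python
import tvm
from tvm import tir

x = tir.Var("x", "float32")

# Case 1: Call.op is a tvm::Op fetched from the global registry,
# representing a typed intrinsic.
exp_op = tvm.ir.Op.get("tir.exp")
call_intrin_expr = tir.Call("float32", exp_op, [x])

# Case 2: Call.op is a GlobalVar, i.e. a call into another function
# that lives in the same IRModule ("update" is a hypothetical name).
update_gv = tvm.ir.GlobalVar("update")
call_func_expr = tir.Call("float32", update_gv, [x])
```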

All the current intrinsics are migrated by registering a tvm::Op.
Because the unified IR shares a single Op registry, we use the "tir"
namespace for TIR-related intrinsics; for example, bitwise and is now registered
under `tir.bitwise_and`.
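
For example, assuming a post-refactor build, the namespaced name can be resolved through the shared registry or used with the `tvm.tir.call_intrin` convenience helper; the exact helper signature shown here is a sketch and may differ slightly between versions:

```python
import tvm
from tvm import tir

a = tir.Var("a", "int32")
b = tir.Var("b", "int32")

# The unified Op registry resolves the namespaced intrinsic name.
bitwise_and_op = tvm.ir.Op.get("tir.bitwise_and")

# Convenience wrapper that builds the corresponding tir.Call.
expr = tir.call_intrin("int32", "tir.bitwise_and", a, b)
```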

To simplify the upgrade, we introduce a `tir.call_extern` intrinsic
that allows us to call into arbitrary external functions without type checking.
However, we should move towards more type checked variants in the system.
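
A minimal sketch of this escape hatch, assuming the Python helper `tvm.tir.call_extern(dtype, func_name, *args)`; `my_ext_func` is a hypothetical external symbol, not something shipped with TVM:

```python
import tvm
from tvm import tir

x = tir.Var("x", "float32")

# No type checking happens here: the result dtype is whatever we
# declare, and the symbol is resolved only at codegen/link time.
expr = tir.call_extern("float32", "my_ext_func", x, 1.0)
```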

Under the new op design, we should no longer try to pattern match all the
specific intrinsics. Instead, we should rely on the attrs of each Op to drive transformations.
For example, the vectorization pass depends on the TVectorizable property of the op,
which can be registered independently.

In this way, we can still grow the number of intrinsics when necessary
without having to change all the passes.
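
A sketch of what attribute-driven behavior looks like from Python, assuming `tvm.ir.Op.get_attr` and `tvm.ir.register_op_attr` are available in the build at hand; the `level=11` override is only there so the example does not clash with the default registration of `TVectorizable` on `tir.exp`:

```python
import tvm

# A pass such as vectorize queries the per-Op property instead of
# pattern matching a hard-coded list of intrinsic names.
exp_op = tvm.ir.Op.get("tir.exp")
print(exp_op.get_attr("TVectorizable"))

# Properties are registered independently of the passes that consume
# them, so new intrinsics can opt in without touching the pass.
tvm.ir.register_op_attr("tir.exp", "TVectorizable", True, level=11)
```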

The same rule applies to tensor expression AD. Currently we perform
AD by pattern matching on operators like exp, sin, cos. We should instead
switch to an adjoint registration mechanism like the one in Relay.
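
A purely hypothetical sketch of that direction, mirroring the style of Relay's FPrimalGradient registration; the `TAdjoint` key and the helper shown are illustrations only and do not exist in TVM:

```python
import tvm

# Hypothetical: the tensor expression AD pass would look up the
# adjoint per Op rather than matching on exp/sin/cos by name.
def exp_adjoint(call, output_grad):
    # d/dx exp(x) = exp(x), so the adjoint reuses the forward value.
    return [output_grad * call]

# "TAdjoint" is not a real TVM attribute key; shown for illustration.
tvm.ir.register_op_attr("tir.exp", "TAdjoint", exp_adjoint)
```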

Followup refactors need to be performed, including:
- Fold Call.call_type into the operator's attributes.
- Enrich the operator registry information.
- Refactor passes (e.g. AD, intrinsic lowering) to use attribute-based transformation.

* Fix nms

* Fix remaining testcase

* Address review comment
tqchen authored Jun 23, 2020
1 parent 6fea4bd commit 82d157f
Showing 120 changed files with 1,965 additions and 1,168 deletions.
2 changes: 1 addition & 1 deletion include/tvm/relay/expr.h
@@ -226,7 +226,7 @@ class CallNode : public ExprNode {
   /*!
    * \brief The operator(function) being invoked
    *
-   * - It can be relay::Op which corresponds to the primitive operators.
+   * - It can be tvm::Op which corresponds to the primitive operators.
    * - It can also be user defined functions (Function, GlobalVar, Var).
    */
   Expr op;
