Merge branch 'meta-project:main' into multi-comm

Tonny-Gu authored Mar 5, 2022
2 parents 5407151 + 279dba8 commit eefd9eb
Showing 35 changed files with 190 additions and 1,735 deletions.
19 changes: 14 additions & 5 deletions .github/workflows/ci_unit_test.yml
@@ -208,7 +208,8 @@ jobs:
update_ci_badge:
needs: [test_on_cpu, test_on_gpu, test_on_multi_gpu]
if: github.repository == 'meta-project/meta'
# Run this job regardless of whether the unit tests succeeded.
if: ${{ always() && github.repository == 'meta-project/meta' }}
runs-on: self-hosted
steps:
- uses: haya14busa/action-workflow_run-status@v1
@@ -223,9 +224,17 @@ jobs:
echo "No need to update badge for PR CI. Skip."
exit 0
fi
head_commit=$(git rev-parse --short HEAD)
echo "::set-output name=gist_id::630a36600930c8d68e6b15f16333b532"
echo "::set-output name=message::${head_commit}"
head_commit=$(git rev-parse --short HEAD)
if [[ "${{ needs.test_on_cpu.result }}" == "success" &&
"${{ needs.test_on_gpu.result }}" == "success" &&
"${{ needs.test_on_multi_gpu.result }}" == "success" ]]; then
echo "::set-output name=message::passing (${head_commit})"
echo "::set-output name=color::success"
else
echo "::set-output name=message::failing (${head_commit})"
echo "::set-output name=color::critical"
fi
- name: Update CI badge
# Intentionally fail this step with empty gist_id.
uses: schneegans/dynamic-badges-action@v1.1.0
@@ -234,6 +243,6 @@ jobs:
auth: ${{ secrets.DEPLOY_ACCESS_TOKEN }}
gistID: ${{ steps.badge.outputs.gist_id }}
filename: raf-ci-badge-last-pass.json
label: CI-Last-Success
label: CI-UnitTests
message: ${{ steps.badge.outputs.message }}
color: blue
color: ${{ steps.badge.outputs.color }}
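
The badge step above derives a message and a color from the three unit-test job results. As an illustration only, the same decision logic can be exercised locally with plain shell variables standing in for the `needs.*.result` expressions (the variable names and default values below are assumptions, not part of the workflow):

```bash
#!/usr/bin/env bash
# Stand-ins for ${{ needs.test_on_cpu.result }} and friends; set them to
# "success" or "failure" to see which badge message and color would be produced.
CPU_RESULT=${CPU_RESULT:-success}
GPU_RESULT=${GPU_RESULT:-success}
MULTI_GPU_RESULT=${MULTI_GPU_RESULT:-failure}

head_commit=$(git rev-parse --short HEAD)
if [[ "$CPU_RESULT" == "success" &&
      "$GPU_RESULT" == "success" &&
      "$MULTI_GPU_RESULT" == "success" ]]; then
  echo "badge message: passing (${head_commit})"
  echo "badge color:   success"
else
  echo "badge message: failing (${head_commit})"
  echo "badge color:   critical"
fi
```
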
13 changes: 8 additions & 5 deletions ci/batch/cli.sh
@@ -48,22 +48,25 @@ function config_cmake() {

# Compile for the given path.
function compile() {
set +e # Disable exit on error for this function.
BUILD_DIR=$1
PLATFORM=$2
JOB_TAG=$3

# Load ccache if available.
bash ./ci/batch/backup-ccache.sh download $PLATFORM $JOB_TAG || true
bash ./ci/batch/backup-ccache.sh download $PLATFORM $JOB_TAG

# Compile. Note that compilation errors will not result in crash in this function.
# We use return exit code to let the caller decide the action.
# Compile. Note that compilation errors will not result in a crash in this function,
# so that we can still upload the ccache. Instead, this function returns the exit code
# to let the caller handle the error.
bash ./ci/task_clean.sh $BUILD_DIR
bash ./ci/task_build.sh $BUILD_DIR -j$(($(nproc) - 1)) || true
bash ./ci/task_build.sh $BUILD_DIR -j$(($(nproc) - 1))
RET=$?
echo "[CLI] Compiled at $BUILD_DIR"

# Backup the ccache.
bash ./ci/batch/backup-ccache.sh upload $PLATFORM $JOB_TAG || true
bash ./ci/batch/backup-ccache.sh upload $PLATFORM $JOB_TAG
set -e
return $RET
}
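
Since compile() now returns the build's exit code instead of aborting, the caller decides how to react after the ccache backup. A minimal hypothetical caller (the build directory, platform, and job tag below are assumptions, not taken from cli.sh) might look like:

```bash
# Hypothetical usage of compile(); guard the call with `if !` so an outer
# `set -e` does not abort the script before we can react to the failure.
if ! compile ./build CPU unit-test; then
  echo "[CLI] Compilation failed; marking the CI job as failed"
  exit 1
fi
echo "[CLI] Compilation succeeded; proceeding to tests"
```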

4 changes: 2 additions & 2 deletions ci/task_lint.sh
@@ -24,8 +24,8 @@ echo "Check Python formats using black..."
./scripts/lint/git-black.sh HEAD~1
./scripts/lint/git-black.sh origin/main

echo "Running pylint on python/raf and scripts/op_def"
python3 -m pylint python/raf scripts/op_def --rcfile=./scripts/lint/pylintrc
echo "Running pylint on python/raf"
python3 -m pylint python/raf --rcfile=./scripts/lint/pylintrc

echo "Running pylint on tests/python"
python3 -m pylint tests/python --rcfile=./scripts/lint/pytestlintrc
8 changes: 4 additions & 4 deletions docs/wiki/3_dev_guide/Add-Operator.md
@@ -134,7 +134,7 @@ Now let’s declare the behavior of the operator. There are two major behaviors

1. The shape of the output (inferred from the inputs’ shapes).
2. The output value, or the method to produce the value,
1. If this operator is compute intensive (e.g., `softmax`) and has to be offloaded to a
backend (CUDA, CuBLAS, CuDNN, TVM, LLVM, etc.), then we only need to assign a tensor placeholder (with inferred shape, data type, and device context) to `CallValues->out` to let RAF know what its output should look like. Here is how we declare `softmax`, for example:

```c++
@@ -264,7 +264,7 @@ We now have a well-defined `softmax`. Let’s build RAF and run the operator:
ValueError: Cannot dispatch raf.op.softmax@cpu(0)
```

Oops, we still get an error... Recall that we attempted to offload this operator to a backend when implementing its declare, so we put a placeholder in its `call->out`. When RAF sees a placeholder in `call->out`, it tries to dispatch this operator to one of the available backends as the callee for execution. However, since we have not defined any backend for this operator, the dispatch failed.

Formally, the operator we just defined is called a **base operator** in RAF, which includes the backend-independent scheme and attributes. Meanwhile, the backend-specific operators are called **dialect operators**. One base operator can be associated with multiple dialect operators, each of which is in charge of execution on one backend. For example, the base operator `raf.op.softmax` has the following dialect operators in RAF:

@@ -282,7 +282,7 @@ Every base operator should have a Relay/TVM dialect operator implementation as t

##### The operator has an implementation in Relay

If an operator has a corresponding implementation in Relay, then we can simply add one line to [scripts/op_def/topi.py](https://github.com/meta-project/meta/blob/3977c035cd6571a4c2504be88701c39550b56d11/scripts/op_def/topi.py):
If an operator has a corresponding implementation in Relay, then we can simply add one line to [scripts/src_codegen/def_topi_map.py](https://github.com/meta-project/meta/blob/main/scripts/src_codegen/def_topi_map.py):

```python
OP_MAP = [
@@ -524,7 +524,7 @@ RAF_OP_FROM_RELAY("nn.softmax", "raf.op.softmax",
})
```
The first argument (`nn.softmax`) is the Relay op name; the second argument (`raf.op.softmax`) is the RAF op name. The third argument is a converter function, and its purpose is to map Relay arguments and attributes to the RAF arguments. In the case of `softmax`, we have:
* Relay
* Arguments: x.
2 changes: 1 addition & 1 deletion include/raf/device.h
@@ -156,7 +156,7 @@ class Device : public ObjectRef {
static Device Current(bool allow_default = true);

public:
RAF_NOTNULLABLE_OBJECT_REF(Device, ir::ObjectRef, DeviceObj);
RAF_MUTABLE_NOTNULLABLE_OBJECT_REF(Device, ir::ObjectRef, DeviceObj);

private:
inline DeviceObj* self() const {
2 changes: 1 addition & 1 deletion include/raf/dist_context.h
@@ -53,7 +53,7 @@ class DistContext : public ir::ObjectRef {
public:
static DistContext make();
static DistContext Global();
RAF_OBJECT_REF(DistContext, ir::ObjectRef, DistContextObj);
RAF_MUTABLE_OBJECT_REF(DistContext, ir::ObjectRef, DistContextObj);
};

} // namespace distributed
67 changes: 27 additions & 40 deletions include/raf/ir.h
@@ -195,57 +195,44 @@ constexpr const char* kPatternName = "PatternName";
} // namespace ir
} // namespace raf

#define RAF_BASE_OBJECT(TypeName, ParentType) \
#define RAF_BASE_OBJECT(TypeName, ParentType) TVM_DECLARE_BASE_OBJECT_INFO(TypeName, ParentType)

#define RAF_FINAL_OBJECT(TypeName, ParentType) TVM_DECLARE_FINAL_OBJECT_INFO(TypeName, ParentType)

#define RAF_FINAL_OBJECT_NOCHECK(TypeName, ParentType) \
static const constexpr bool _type_final = true; \
static const constexpr int _type_child_slots = 0; \
static uint32_t RuntimeTypeIndex() { \
static_assert(TypeName::_type_child_slots == 0 || ParentType::_type_child_slots == 0 || \
TypeName::_type_child_slots < ParentType::_type_child_slots, \
"Need to set _type_child_slots when parent specifies it."); \
if (TypeName::_type_index != ::tvm::runtime::TypeIndex::kDynamic) { \
return TypeName::_type_index; \
} \
return _GetOrAllocRuntimeTypeIndex(); \
} \
static uint32_t _GetOrAllocRuntimeTypeIndex() { \
static uint32_t tidx = GetOrAllocRuntimeTypeIndex( \
static uint32_t tindex = Object::GetOrAllocRuntimeTypeIndex( \
TypeName::_type_key, TypeName::_type_index, ParentType::_GetOrAllocRuntimeTypeIndex(), \
TypeName::_type_child_slots, TypeName::_type_child_slots_can_overflow); \
return tidx; \
return tindex; \
}

#define RAF_FINAL_OBJECT(TypeName, ParentType) \
static const constexpr bool _type_final = true; \
static const constexpr int _type_child_slots = 0; \
RAF_BASE_OBJECT(TypeName, ParentType)

#define RAF_OBJECT_REF(TypeName, ParentType, ObjectName) \
TypeName() { \
} \
explicit TypeName(::tvm::runtime::ObjectPtr<::tvm::runtime::Object> n) : ParentType(n) { \
} \
ObjectName* operator->() const { \
return static_cast<ObjectName*>(data_.get()); \
} \
using ContainerType = ObjectName;

#define RAF_NOTNULLABLE_OBJECT_REF(TypeName, ParentType, ObjectName) \
explicit TypeName(::tvm::runtime::ObjectPtr<::tvm::runtime::Object> n) : ParentType(n) { \
} \
ObjectName* operator->() const { \
return static_cast<ObjectName*>(data_.get()); \
} \
static constexpr bool _type_is_nullable = false; \
using ContainerType = ObjectName;

#define RAF_REGISTER_OBJECT_NO_REFLECT(TypeName) \
static DMLC_ATTRIBUTE_UNUSED uint32_t __make_Object_tidx##_##TypeName##__ = \
TypeName::_GetOrAllocRuntimeTypeIndex()

#define RAF_REGISTER_OBJECT_REFLECT(TypeName) \
RAF_REGISTER_OBJECT_NO_REFLECT(TypeName); \
static DMLC_ATTRIBUTE_UNUSED ::tvm::ReflectionVTable::Registry& __make_Node##_##TypeName##__ = \
::tvm::ReflectionVTable::Global() \
->Register<TypeName, ::tvm::detail::ReflectionTrait<TypeName>>() \
.set_creator( \
[](const std::string&) -> ::tvm::runtime::ObjectPtr<::tvm::runtime::Object> { \
return ::tvm::runtime::make_object<TypeName>(); \
})
#define RAF_OBJECT_REF(TypeName, ParentType, ObjectName) \
TVM_DEFINE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)

#define RAF_MUTABLE_OBJECT_REF(TypeName, ParentType, ObjectName) \
TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)

#define RAF_NOTNULLABLE_OBJECT_REF(TypeName, ParentType, ObjectName) \
TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)

#define RAF_MUTABLE_NOTNULLABLE_OBJECT_REF(TypeName, ParentType, ObjectName) \
TVM_DEFINE_MUTABLE_NOTNULLABLE_OBJECT_REF_METHODS(TypeName, ParentType, ObjectName)

#define RAF_REGISTER_OBJECT_NO_REFLECT(TypeName) TVM_REGISTER_OBJECT_TYPE(TypeName)

#define RAF_REGISTER_OBJECT_REFLECT(TypeName) TVM_REGISTER_NODE_TYPE(TypeName)

#include "./ir_ext.h"
#include "./dataflow_pattern.h"
8 changes: 7 additions & 1 deletion include/raf/op.h
@@ -41,7 +41,13 @@ class CallValuesNode : public ir::Object {
mutable value::Value out;
mutable Device device;

public:
void VisitAttrs(tvm::AttrVisitor* v) {
v->Visit("callee", &callee);
v->Visit("args", &args);
v->Visit("out", &out);
v->Visit("device", &device);
}
static constexpr const uint32_t _type_index = ir::TypeIndex::kDynamic;
static constexpr const char* _type_key = "raf.op.CallValues";
RAF_FINAL_OBJECT(CallValuesNode, ir::Object);
};
2 changes: 1 addition & 1 deletion include/raf/serialization.h
@@ -25,7 +25,7 @@ namespace serialization {
class ConstantNode : public ir::ConstantNode {
public:
static constexpr const char* _type_key = "raf.ir.serialization.Constant";
RAF_FINAL_OBJECT(ConstantNode, ir::ConstantNode);
RAF_FINAL_OBJECT_NOCHECK(ConstantNode, ir::ConstantNode);
};

/*!
4 changes: 2 additions & 2 deletions include/raf/type.h
@@ -64,7 +64,7 @@ class TypeInferenceNode : public tvm::TypeConstraintNode {
}

static constexpr const char* _type_key = "TypeInference";
TVM_DECLARE_FINAL_OBJECT_INFO(TypeInferenceNode, TypeConstraintNode);
RAF_FINAL_OBJECT(TypeInferenceNode, TypeConstraintNode);
};

/*!
@@ -74,7 +74,7 @@ class TypeInferenceNode : public tvm::TypeConstraintNode {
class TypeInference : public tvm::TypeConstraint {
public:
explicit TypeInference(TypeInferenceFn func);
TVM_DEFINE_OBJECT_REF_METHODS(TypeInference, TypeConstraint, TypeInferenceNode);
RAF_OBJECT_REF(TypeInference, TypeConstraint, TypeInferenceNode);
};

OpType MakeOpType(const std::string& op_name, const std::string& fn_name,
3 changes: 2 additions & 1 deletion include/raf/value.h
@@ -465,7 +465,8 @@ T GetScalarValueData(const Value& value) {
return ivo->value;
} else if (const auto* tvo = value.as<TensorValueObj>()) {
tensor::Tensor tensor = tvo->tensor;
CHECK_EQ(tensor->ndim, 0U) << "Value is not a scalar";
CHECK(tensor->ndim == 0U || (tensor->ndim == 1U && tensor->shape[0] == 1))
<< "Value is not a scalar";

DataType dtype = DataType(tensor->dtype);
NDArray nd_array;
2 changes: 1 addition & 1 deletion include/raf/vm/value.h
@@ -61,7 +61,7 @@ class StorageValue final : public Value {
public:
static StorageValue make(std::shared_ptr<memory_pool::Memory> buffer);

RAF_OBJECT_REF(StorageValue, Value, StorageValueObj);
RAF_MUTABLE_OBJECT_REF(StorageValue, Value, StorageValueObj);
};

} // namespace vm
4 changes: 2 additions & 2 deletions include/raf/vm/vm.h
@@ -152,7 +152,7 @@ class VMContextObj : public ValueObj {
}
static constexpr const uint32_t _type_index = ir::TypeIndex::kDynamic;
static constexpr const char* _type_key = "raf.vm.VMContext";
RAF_BASE_OBJECT(VMContextObj, ValueObj);
RAF_FINAL_OBJECT(VMContextObj, ValueObj);
};

/*!
@@ -207,7 +207,7 @@ class VMContext : public Value {
*/
inline Index PopFrame();

RAF_OBJECT_REF(VMContext, Value, VMContextObj);
RAF_MUTABLE_OBJECT_REF(VMContext, Value, VMContextObj);
};

using OpEnvCache = MetaCache<OpEnvPtr>;