
[CINN] Refactor pass api of group fusion in CINN #55090

Merged Jul 13, 2023 (14 commits)
24 changes: 24 additions & 0 deletions cmake/cinn/core.cmake
@@ -433,6 +433,28 @@ function(download_and_uncompress INSTALL_DIR URL FILENAME)
INSTALL_COMMAND "")
endfunction()

set(fusion_pass_file
${CMAKE_CURRENT_BINARY_DIR}/paddle/cinn/hlir/pass/use_general_pass.h
CACHE INTERNAL "use_general_pass.h file")
file(
WRITE ${fusion_pass_file}
"#include \"paddle/cinn/common/macros.h\" // Generated by the paddle/cinn/hlir/pass/CMakeLists.txt. DO NOT EDIT!\n\n"
)

function(find_fusion_pass_register FILENAME ADD_PATH PATTERN)
# set op_name to OUTPUT
file(READ ${FILENAME} CONTENT)
string(REGEX MATCHALL "${PATTERN}\\([a-zA-Z0-9_]*," fusion_pass_patterns
"${CONTENT}")
if(NOT fusion_pass_patterns STREQUAL "")
foreach(pass_pattern ${fusion_pass_patterns})
string(REPLACE "${PATTERN}(" "" pass_pattern "${pass_pattern}")
string(REPLACE "," "" pass_pattern "${pass_pattern}")
file(APPEND ${ADD_PATH} "USE_FUSION_PASS(${pass_pattern});\n")
endforeach()
endif()
endfunction()
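The `find_fusion_pass_register` function above scans each source file for `CINN_REGISTER_FUSION_PASS(<name>, ...)` occurrences and appends one `USE_FUSION_PASS(<name>);` line per match to the generated header. The same extraction logic can be sketched in C++ for illustration (the real build step runs in CMake; `CollectFusionPassRegistrations` and the pass name below are hypothetical):

```cpp
#include <regex>
#include <string>

// Sketch of what find_fusion_pass_register does: scan source text for
// CINN_REGISTER_FUSION_PASS(<name>, ...) and emit one
// USE_FUSION_PASS(<name>); line per registration found.
std::string CollectFusionPassRegistrations(const std::string& content) {
  std::regex pattern(R"(CINN_REGISTER_FUSION_PASS\(([A-Za-z0-9_]*),)");
  std::string out;
  for (std::sregex_iterator it(content.begin(), content.end(), pattern), end;
       it != end; ++it) {
    // Capture group 1 holds the pass name between '(' and ','.
    out += "USE_FUSION_PASS(" + (*it)[1].str() + ");\n";
  }
  return out;
}
```

The generated `use_general_pass.h` then forces the linker to keep each pass's registration object alive, which is why `gather_srcs` runs this scan over every collected source file.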

function(gather_srcs SRC_GROUP)
set(options)
set(oneValueArgs)
@@ -442,6 +464,8 @@ function(gather_srcs SRC_GROUP)
set(${SRC_GROUP}
"${${SRC_GROUP}};${CMAKE_CURRENT_SOURCE_DIR}/${cpp}"
CACHE INTERNAL "")
find_fusion_pass_register("${CMAKE_CURRENT_SOURCE_DIR}/${cpp}"
${fusion_pass_file} "CINN_REGISTER_FUSION_PASS")
endforeach()
endfunction()

1 change: 1 addition & 0 deletions paddle/cinn/CMakeLists.txt
@@ -2,6 +2,7 @@ if(WITH_TESTING)
cinn_cc_library(cinn_gtest_main SRCS gtest_main.cc DEPS gtest gflags)
endif()

add_subdirectory(api)
add_subdirectory(auto_schedule)
add_subdirectory(common)
add_subdirectory(utils)
5 changes: 5 additions & 0 deletions paddle/cinn/api/CMakeLists.txt
@@ -0,0 +1,5 @@
core_gather_headers()

gather_srcs(cinnapi_src SRCS op_node.cc tensor_node.cc)

message(STATUS "srcs: ${cinnapi_src}")
45 changes: 45 additions & 0 deletions paddle/cinn/api/README.md
@@ -0,0 +1,45 @@
The classes in this directory form the interface of the group fusion pass. You can use these APIs to build the strategy for group fusion.


The classes and APIs are as follows:

`OpGroup` : A set of op nodes, which is passed to the CINN backend for kernel code generation. Two groups can fuse together according to the merging rules written in the passes.

`OpNode` : Maps an op in the program.

`TensorNode` : Maps a tensor in the program.

`Shape` : The shape information of a tensor.

`FusePassCtx` : The context passed as a parameter to the pass; it holds all the data you need in the pass.

`FuseHelper` : Provides utility methods such as `DetectCycleIfFuse` to simplify pass development.

| Class | method | description |
| :--: | :--: | :--: |
| OpGroup | kind()| Get the Kind of group |
| | producers()| Get producer groups of current group |
| | consumers() | Get consumer groups of current group |
| | WalkOpNodes(const std::function<void(const OpNode&)>& VisitOpNode) | Visit the op_nodes in the group and execute the VisitOpNode function for each OpNode |
| | | |
| OpNode | kind() | Get the Kind of op_node |
| | inputs() | Get input tensors of op_node |
| | outputs() | Get output tensors of op_node |
| | GetAttr(const std::string& attr_name) | Get attribute of op_node by attr name |
| | | |
| TensorNode | shape() | Get shape of tensor |
| | producer() | Get the producer op_node of tensor |
| | consumers() | Get the consumer op_nodes of tensor |
| | | |
| Shape | numel() | Get total number of elements in the shape |
| | other methods are the same as those of std::vector<int64_t> | |
| | | |
| LightwareFusePassCtx | PickOpGroup() | Get the current group in the pass context |
| | void EnableFuse(const OpGroup& first, const OpGroup& second) | Mark two groups that can fuse together |
| | fuse_helper() | Get the fuse_helper provided by the pass context |
| | | |
| InputFusePassCtx | PickConsumersWithSameInputs() | Get all consumer groups of the graph's input tensors |
| | void EnableFuse(const OpGroup& first, const OpGroup& second) | Mark two groups that can fuse together |
| | fuse_helper() | Get the fuse_helper provided by the pass context |
| | | |
| FuseHelper | DetectCycleIfFuse(const OpGroup& first, const OpGroup& second) | Whether the graph would contain a cycle after fusing the two groups |
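The `WalkOpNodes` traversal described in the table can be sketched with a self-contained example. The stub `OpNode` and `OpGroup` types below only stand in for the real CINN classes, and `CountReduceOps` is a hypothetical helper, but the traversal pattern matches the documented API:

```cpp
#include <functional>
#include <vector>

enum class OpPatternKind { kElementWise, kReduction };

// Stand-in for api::OpNode: only exposes kind(), as in the real API.
struct OpNode {
  OpPatternKind kind_;
  OpPatternKind kind() const { return kind_; }
};

// Stand-in for api::OpGroup: exposes WalkOpNodes with the documented
// signature, visiting each op node in the group.
struct OpGroup {
  std::vector<OpNode> ops;
  void WalkOpNodes(const std::function<void(const OpNode&)>& visit) const {
    for (const OpNode& op : ops) visit(op);
  }
};

// Count the reduction ops in a group, mirroring the "collect reduction
// op_nodes" example from the OpGroup documentation.
int CountReduceOps(const OpGroup& group) {
  int count = 0;
  group.WalkOpNodes([&](const OpNode& op) {
    if (op.kind() == OpPatternKind::kReduction) ++count;
  });
  return count;
}
```

A pass built on this interface would combine such a traversal with `EnableFuse` calls on the groups it decides to merge.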
202 changes: 202 additions & 0 deletions paddle/cinn/api/op_group.h
@@ -0,0 +1,202 @@
// Copyright (c) 2023 CINN Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include <memory>

#include "paddle/cinn/api/op_node.h"

#include "paddle/cinn/hlir/framework/graph.h"
#include "paddle/cinn/hlir/pass/fusion_helper_base.h"

namespace cinn {
namespace api {

class OpGroup {
public:
explicit OpGroup(const std::shared_ptr<hlir::framework::Graph::Group>& group)
: group_(group) {}

OpGroup(const OpGroup& other) = default;

using Comparator = hlir::framework::Graph::Group::SharedGroupComparator;
using Hasher = hlir::framework::Graph::Group::SharedGroupHasher;

class OpGroupListIterator {
public:
OpGroupListIterator(
std::unordered_set<std::shared_ptr<hlir::framework::Graph::Group>,
Hasher,
Comparator>::const_iterator it)
: iter_(it) {}

OpGroupListIterator& operator++() {
++iter_;
return *this;
}

OpGroupListIterator operator++(int) {
OpGroupListIterator tmp = *this;
++iter_;
return tmp;
}

bool operator==(const OpGroupListIterator& other) const {
return iter_ == other.iter_;
}

bool operator!=(const OpGroupListIterator& other) const {
return !(*this == other);
}

OpGroup operator*() const { return OpGroup(*iter_); }
// Review comment (Contributor): Shouldn't the iterator also support the
// -> operator? Should we add it?
//
// Reply (Author): operator-> would have to return an object pointer, but
// the iterators here return temporarily constructed wrapper objects, so
// returning a pointer would make the object's destruction a problem.
// The iterators are currently only used to traverse the returned container
// objects; the missing -> interface does not affect that usage for now,
// and it can be added later when the data structures are upgraded.
private:
std::unordered_set<std::shared_ptr<hlir::framework::Graph::Group>,
Hasher,
Comparator>::const_iterator iter_;
};

class ProducerOpGroupListView {
public:
ProducerOpGroupListView(
const std::weak_ptr<hlir::framework::Graph::Group>& group)
: group_(group) {}

ProducerOpGroupListView(const ProducerOpGroupListView& other) = delete;
ProducerOpGroupListView(ProducerOpGroupListView&& other) = delete;

ProducerOpGroupListView& operator=(const ProducerOpGroupListView& other) =
delete;

using const_iterator = OpGroupListIterator;

size_t size() const {
CHECK(group_.lock());
return group_.lock()->producer_groups().size();
}

const_iterator begin() const {
// Review comment (Contributor): For consistency with the STL, wouldn't
// naming this cbegin be better?
//
// Reply (Author): In the STL, the const qualifier on begin() can also be
// used to distinguish the iterator type.
CHECK(group_.lock());
return const_iterator(group_.lock()->producer_groups().begin());
}

const_iterator end() const {
CHECK(group_.lock());
return const_iterator(group_.lock()->producer_groups().end());
}

private:
const std::weak_ptr<hlir::framework::Graph::Group> group_;
};

class ConsumerOpGroupListView {
public:
ConsumerOpGroupListView(
const std::weak_ptr<hlir::framework::Graph::Group>& group)
: group_(group) {}

ConsumerOpGroupListView(const ConsumerOpGroupListView& other) = delete;
ConsumerOpGroupListView(ConsumerOpGroupListView&& other) = delete;

ConsumerOpGroupListView& operator=(const ConsumerOpGroupListView& other) =
delete;

using const_iterator = OpGroupListIterator;

size_t size() const {
CHECK(group_.lock());
return group_.lock()->consumer_groups().size();
}

const_iterator begin() const {
CHECK(group_.lock());
return const_iterator(group_.lock()->consumer_groups().begin());
}

const_iterator end() const {
CHECK(group_.lock());
return const_iterator(group_.lock()->consumer_groups().end());
}

private:
const std::weak_ptr<hlir::framework::Graph::Group> group_;
};

const std::string& group_id() const { return group_.lock()->group_id; }

hlir::framework::OpPatternKind kind() const { return group_.lock()->kind(); }

// The WalkOpNodes function traverses the op_nodes in the group and
// executes the VisitOpNode function for each OpNode. It is equivalent to
// a for loop over the op_nodes in the graph.
//
// In order to avoid unnecessary memory copies, we provide the WalkOpNodes
// function instead of a function that returns all op_nodes directly.
//
// Example: Get all the Reduction op_nodes in the group.
// OpGroup group = ...;
// std::set<api::OpNode> reduce_op_set;
// // The lambda function passed as VisitOpNode to collect reduction op_nodes.
// auto get_reduce_op = [&reduce_op_set](const api::OpNode& op){
// if (op.kind() == OpPatternKind::kReduction) {
// reduce_op_set.insert(op);
// }
// };
// group.WalkOpNodes(get_reduce_op);
void WalkOpNodes(
const std::function<void(const OpNode&)>& VisitOpNode) const {
group_.lock()->WalkNodes([&](const hlir::framework::Node* node) {
VisitOpNode(OpNode(node, group_.lock()->graph_));
});
}

ProducerOpGroupListView producers() const {
return ProducerOpGroupListView(group_);
}

ConsumerOpGroupListView consumers() const {
return ConsumerOpGroupListView(group_);
}

std::shared_ptr<hlir::framework::Graph::Group> GetGroup() const {
return group_.lock();
}

bool operator==(const OpGroup& other) const {
return group_.lock().get() == other.group_.lock().get();
}

bool operator<(const OpGroup& other) const {
return group_.lock().get() < other.group_.lock().get();
}

private:
const std::weak_ptr<hlir::framework::Graph::Group> group_;
};

} // namespace api
} // namespace cinn

namespace std {

template <>
struct hash<cinn::api::OpGroup> {
size_t operator()(const cinn::api::OpGroup& obj) const {
return std::hash<size_t>()(reinterpret_cast<size_t>(obj.GetGroup().get()));
}
};

} // namespace std
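The `std::hash<cinn::api::OpGroup>` specialization above hashes a group by the address of its underlying `Group` object, matching the pointer-based `operator==`. The same pointer-identity pattern can be shown with a self-contained stand-in type (`Impl` and `Wrapper` below are hypothetical, not CINN classes):

```cpp
#include <cstddef>
#include <functional>
#include <memory>

struct Impl {};

// Wrapper compares equal when it refers to the same underlying Impl,
// just as OpGroup compares the locked Group pointers.
struct Wrapper {
  std::shared_ptr<Impl> impl;
  bool operator==(const Wrapper& other) const {
    return impl.get() == other.impl.get();
  }
};

namespace std {
template <>
struct hash<Wrapper> {
  size_t operator()(const Wrapper& w) const {
    // Identity hash: wrappers over the same Impl hash identically, so
    // they deduplicate in unordered containers.
    return std::hash<size_t>()(reinterpret_cast<size_t>(w.impl.get()));
  }
};
}  // namespace std
```

This is what lets `OpGroup` values be stored in `std::unordered_set` keyed on the identity of the wrapped group rather than on the wrapper object itself.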
35 changes: 35 additions & 0 deletions paddle/cinn/api/op_node.cc
Original file line number Diff line number Diff line change
@@ -0,0 +1,35 @@
// Copyright (c) 2023 CINN Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/cinn/api/op_node.h"

namespace cinn {
namespace api {

TensorNode OpNode::TensorListIterator::operator*() const {
return TensorNode(get_tensor_from_edge_(*iter_), graph_);
}

TensorNode OpNode::InputTensorListView::operator[](size_t index) const {
return TensorNode(
edges_[index]->source()->safe_as<hlir::framework::NodeData>(), graph_);
}

TensorNode OpNode::OutputTensorListView::operator[](size_t index) const {
return TensorNode(edges_[index]->sink()->safe_as<hlir::framework::NodeData>(),
graph_);
}

} // namespace api
} // namespace cinn