[CMake Governance] Move DDim etc. to common #59105

Merged 77 commits on Dec 4, 2023
Changes shown from 73 of 77 commits
c730efe
check
zhangbopd Nov 20, 2023
f27872f
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 20, 2023
6d454ca
fix conflict
zhangbopd Nov 20, 2023
f5f837f
exception
zhangbopd Nov 20, 2023
2e50483
bugfix
zhangbopd Nov 20, 2023
1f6daee
kunlun ci
zhangbopd Nov 20, 2023
4c6a2fd
WIN_CI
zhangbopd Nov 20, 2023
78b410a
WinCI
zhangbopd Nov 20, 2023
a1aa044
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 20, 2023
03d6a21
kunlunCI
zhangbopd Nov 20, 2023
8ff48c4
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 20, 2023
4d344e0
ci
zhangbopd Nov 20, 2023
aca1a44
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 20, 2023
34c44d7
bugfix
zhangbopd Nov 20, 2023
a1c1963
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 20, 2023
ad05f14
setup.py
zhangbopd Nov 21, 2023
0cee885
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 24, 2023
dbd548c
bug_fix
zhangbopd Nov 25, 2023
50cb6c2
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 25, 2023
2b21609
ci_fix
zhangbopd Nov 25, 2023
cbf9b3f
TEST_API friend
zhangbopd Nov 25, 2023
e10a198
hash
zhangbopd Nov 25, 2023
84afd25
auto_code_gen_WIN_CI
zhangbopd Nov 27, 2023
a3efe95
inference_CI
zhangbopd Nov 27, 2023
07cbd5f
use_common_enforce
zhangbopd Nov 27, 2023
4816427
delete pir_enforce
zhangbopd Nov 27, 2023
05ad23b
delete_error
zhangbopd Nov 27, 2023
78b5a1f
change_cmake
zhangbopd Nov 28, 2023
d327275
bug_fix
zhangbopd Nov 28, 2023
1b9f5b8
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 28, 2023
9c3075b
conflict
zhangbopd Nov 28, 2023
521e31d
temp
zhangbopd Nov 28, 2023
1942748
cmake
zhangbopd Nov 28, 2023
ae2a95f
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 28, 2023
cc4fb76
using_func_in_fun
zhangbopd Nov 28, 2023
b4c5dce
win_CI
zhangbopd Nov 29, 2023
2af8498
merge
zhangbopd Nov 29, 2023
8e43d56
merge
zhangbopd Nov 29, 2023
2b5eebe
mac_CI
zhangbopd Nov 29, 2023
58fd9f7
inference_copy
zhangbopd Nov 29, 2023
3adfd7d
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 29, 2023
6ac3b96
bug_fix
zhangbopd Nov 29, 2023
4e46d53
delete_pybind_common
zhangbopd Nov 29, 2023
062a81e
bugfix
zhangbopd Nov 30, 2023
b08e417
paddle_test
zhangbopd Nov 30, 2023
0c58fc4
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 30, 2023
4685472
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 30, 2023
ec8d3da
split ddim constructor
zhangbopd Nov 30, 2023
b454b37
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Nov 30, 2023
1ea307f
cc_test
zhangbopd Nov 30, 2023
41aba9d
infer
zhangbopd Nov 30, 2023
3a9d3b2
bug_fix
zhangbopd Nov 30, 2023
977e410
bug_fix
zhangbopd Nov 30, 2023
45fb83b
mac OS CI
zhangbopd Dec 1, 2023
661f4c0
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Dec 1, 2023
2cad7b5
infer
zhangbopd Dec 1, 2023
843e646
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Dec 1, 2023
9562510
use cinn::common
zhangbopd Dec 1, 2023
b1131f5
copy_infer
zhangbopd Dec 1, 2023
cb36846
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Dec 1, 2023
816a63f
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Dec 1, 2023
a44a9a5
[PIR] fix ci conflict, test=document_fix.
winter-wang Dec 1, 2023
0a32645
Merge commit 'refs/pull/59595/head' of https://github.com/PaddlePaddl…
zhangbopd Dec 1, 2023
075f909
delete_layer_test_new
zhangbopd Dec 1, 2023
9c0a42f
bug_fix
zhangbopd Dec 1, 2023
f1cc90a
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Dec 1, 2023
cb550e4
infer
zhangbopd Dec 2, 2023
0943e6f
fix inference bug
zhangbopd Dec 3, 2023
8135184
fix bug
zhangbopd Dec 3, 2023
2c857a9
fix bug
zhangbopd Dec 3, 2023
82422fd
fix bug
zhangbopd Dec 3, 2023
c3c03c0
fix bug
zhangbopd Dec 3, 2023
ce6524d
fix bug
zhangbopd Dec 4, 2023
a953360
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Dec 4, 2023
9b2e729
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Dec 4, 2023
68a52a1
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
zhangbopd Dec 4, 2023
ce6c8a1
conflict
zhangbopd Dec 4, 2023
2 changes: 1 addition & 1 deletion cmake/generic.cmake
@@ -622,7 +622,7 @@ function(paddle_test_build TARGET_NAME)
if(APPLE)
target_link_libraries(
${TARGET_NAME}
"-Wl,-rpath,$<TARGET_FILE_DIR:${paddle_lib}> -Wl,-rpath,$<TARGET_FILE_DIR:phi> -Wl,-rpath,$<TARGET_FILE_DIR:pir>"
"-Wl,-rpath,$<TARGET_FILE_DIR:${paddle_lib}> -Wl,-rpath,$<TARGET_FILE_DIR:phi> -Wl,-rpath,$<TARGET_FILE_DIR:pir> -Wl,-rpath,$<TARGET_FILE_DIR:common>"
)
endif()
common_link(${TARGET_NAME})
44 changes: 15 additions & 29 deletions cmake/inference_lib.cmake
@@ -286,6 +286,10 @@ copy(
include_directories(${CMAKE_BINARY_DIR}/../paddle/fluid/framework/io)

# copy api headers for phi & custom op
+copy(
+inference_lib_dist
+SRCS ${PADDLE_SOURCE_DIR}/paddle/common/*.h
+DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/common/)
copy(
inference_lib_dist
SRCS ${PADDLE_SOURCE_DIR}/paddle/phi/api/ext/*.h
@@ -304,8 +308,17 @@ copy(
DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/phi/common/)
copy(
inference_lib_dist
-SRCS ${PADDLE_SOURCE_DIR}/paddle/phi/core/macros.h
+SRCS ${PADDLE_SOURCE_DIR}/paddle/phi/core/enforce.h
DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/phi/core/)
+copy(
+inference_lib_dist
+SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/string/*.h
+DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/utils/string/)
+copy(
+inference_lib_dist
+SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/string/tinyformat/tinyformat.h
+DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/utils/string/tinyformat/
+)
copy(
inference_lib_dist
SRCS ${PADDLE_SOURCE_DIR}/paddle/phi/core/visit_type.h
@@ -320,40 +333,13 @@ copy(
DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/phi/)
copy(
inference_lib_dist
-SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/any.h
-DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/utils/)
-copy(
-inference_lib_dist
-SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/optional.h
-DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/utils/)
-copy(
-inference_lib_dist
-SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/none.h
-DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/utils/)
-copy(
-inference_lib_dist
-SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/flat_hash_map.h
-DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/utils/)
-copy(
-inference_lib_dist
-SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/flags.h
-DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/utils/)
-copy(
-inference_lib_dist
-SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/test_macros.h
+SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/*.h
DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/utils/)
copy(
inference_lib_dist
SRCS ${PADDLE_SOURCE_DIR}/paddle/extension.h
DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/)

-if(NOT WITH_GFLAGS)
-copy(
-inference_lib_dist
-SRCS ${PADDLE_SOURCE_DIR}/paddle/utils/flags_native.h
-DSTS ${PADDLE_INFERENCE_INSTALL_DIR}/paddle/include/paddle/utils/)
-endif()

# the include path of phi needs to be changed to adapt to inference api path
add_custom_command(
TARGET inference_lib_dist
18 changes: 10 additions & 8 deletions paddle/cinn/api/tensor_node.h
@@ -52,9 +52,10 @@ class TensorNode final {

class ConsumerOpListView {
public:
-ConsumerOpListView(const std::set<common::Shared<common::GraphEdge>,
-common::GraphEdgeCompare>& edges,
-const hlir::framework::Graph* graph)
+ConsumerOpListView(
+const std::set<cinn::common::Shared<cinn::common::GraphEdge>,
+cinn::common::GraphEdgeCompare>& edges,
+const hlir::framework::Graph* graph)
: edges_(edges), graph_(graph) {}

ConsumerOpListView(const ConsumerOpListView& other) = delete;
@@ -64,8 +65,8 @@

class Iterator {
public:
-Iterator(std::set<common::Shared<common::GraphEdge>,
-common::GraphEdgeCompare>::const_iterator it,
+Iterator(std::set<cinn::common::Shared<cinn::common::GraphEdge>,
+cinn::common::GraphEdgeCompare>::const_iterator it,
const hlir::framework::Graph* graph)
: iter_(it), graph_(graph) {}

@@ -89,8 +90,8 @@
OpNode operator*() const;

private:
-std::set<common::Shared<common::GraphEdge>,
-common::GraphEdgeCompare>::const_iterator iter_;
+std::set<cinn::common::Shared<cinn::common::GraphEdge>,
+cinn::common::GraphEdgeCompare>::const_iterator iter_;
const hlir::framework::Graph* graph_;
};

@@ -101,7 +102,8 @@
Iterator end() const { return Iterator(this->edges_.end(), graph_); }

private:
-const std::set<Shared<common::GraphEdge>, common::GraphEdgeCompare>& edges_;
+const std::set<Shared<cinn::common::GraphEdge>,
+cinn::common::GraphEdgeCompare>& edges_;
const hlir::framework::Graph* graph_;
};

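
Reviewer note: the recurring C++ change in this PR, visible in the hunk above and throughout the files below, is spelling out cinn::common:: where a bare common:: used to suffice. Once DDim and friends move into a new top-level ::common namespace, an unqualified common:: can become ambiguous in any scope where a using-directive makes both namespaces visible. A minimal, self-contained sketch of the failure mode — the type names are illustrative stand-ins, and the using-directive is an assumption about how the ambiguity arises, not a quote from the codebase:

    namespace common {              // new top-level namespace (DDim etc. move here)
    struct GraphEdge {};
    }

    namespace cinn {
    namespace common {              // CINN's pre-existing nested namespace
    struct GraphEdge {};
    }
    }

    using namespace cinn;           // assumed: some translation unit has this

    int main() {
      // common::GraphEdge e;       // error: 'common' is ambiguous here
      //                            // (::common vs. cinn::common via the using)
      cinn::common::GraphEdge e;    // the PR's fix: fully qualified, always unique
      (void)e;
      return 0;
    }

The same reasoning covers every common::Target, common::Shared, common::Type, and common::F32()-style reference rewritten below.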
6 changes: 3 additions & 3 deletions paddle/cinn/ast_gen_ius/ast_gen.cc
@@ -90,7 +90,7 @@ ir::Expr AstGen::Build(const ir::Tensor& tensor, TensorGroup* tensor_group) {
std::vector<ir::Expr> iter_values;
// reduce body and reduce init schedule block should have different objects
// for same axis so we re-create objects
-std::vector<Var> axis_vars = common::GenDefaultAxis(axis_len);
+std::vector<Var> axis_vars = cinn::common::GenDefaultAxis(axis_len);
for (int i = 0; i < shape.size(); ++i) {
block_vars.push_back(Var(Expr(0),
shape[i],
@@ -118,7 +118,7 @@ ir::Expr AstGen::Build(const ir::Tensor& tensor, TensorGroup* tensor_group) {
std::vector<ir::Expr> reduce_iter_values;
// reduce body and reduce init schedule block should have different objects
// for same axis so we re-create objects
-std::vector<Var> reduce_axis_vars = common::GenDefaultAxis(axis_len);
+std::vector<Var> reduce_axis_vars = cinn::common::GenDefaultAxis(axis_len);
for (int i = 0; i < shape.size(); ++i) {
reduce_block_vars.push_back(Var(Expr(0),
shape[i],
@@ -182,7 +182,7 @@ ir::Expr AstGen::Build(const ir::Tensor& tensor, TensorGroup* tensor_group) {
// create schedule block itervars, i0,i1...
std::vector<ir::Var> block_vars;
std::vector<ir::Expr> iter_values;
-std::vector<Var> axis_vars = common::GenDefaultAxis(axis_len);
+std::vector<Var> axis_vars = cinn::common::GenDefaultAxis(axis_len);
for (int i = 0; i < shape.size(); ++i) {
block_vars.push_back(Var(
Expr(0), shape[i], cinn::UniqName("i" + std::to_string(i)), false));
4 changes: 2 additions & 2 deletions paddle/cinn/auto_schedule/analysis/analyze_ir.cc
@@ -144,7 +144,7 @@ bool NeedsMultiLevelTiling(const ir::ScheduleBlockRealize& sche_block_realize) {
return total_unused_iter_vars >= 1;
}

-ir::LoweredFunc UpdateFuncWithNewBody(const common::Target& target,
+ir::LoweredFunc UpdateFuncWithNewBody(const cinn::common::Target& target,
const ir::LoweredFunc& old_func,
ir::Expr& body) { // NOLINT
ir::ModuleExpr mod_expr(std::vector<ir::Expr>({body}));
@@ -179,7 +179,7 @@ ir::LoweredFunc UpdateFuncWithNewBody(const common::Target& target,
ir::LoweredFunc new_func = ir::_LoweredFunc_::Make(
old_func->name, old_func->args, updated_body, new_temp_bufs);
#ifdef CINN_WITH_CUDA
-if (target == common::DefaultNVGPUTarget()) {
+if (target == cinn::common::DefaultNVGPUTarget()) {
new_func->PrepareCudaAxisInfoFromBody();
}
#endif
2 changes: 1 addition & 1 deletion paddle/cinn/auto_schedule/analysis/analyze_ir.h
@@ -44,7 +44,7 @@ bool NeedsMultiLevelTiling(const ir::ScheduleBlockRealize& sche_block_realize);
/**
* Update a LoweredFunc by regenerating related fields with a new function body
*/
-ir::LoweredFunc UpdateFuncWithNewBody(const common::Target& target,
+ir::LoweredFunc UpdateFuncWithNewBody(const cinn::common::Target& target,
const ir::LoweredFunc& old_func,
ir::Expr& body); // NOLINT

12 changes: 6 additions & 6 deletions paddle/cinn/auto_schedule/analysis/analyze_ir_test.cc
@@ -38,9 +38,9 @@ namespace auto_schedule {
TEST(AnalyzeIr, AnalyzeScheduleBlockReadWriteBuffer_SimpleAssign) {
Context::Global().ResetNameId();
#ifdef CINN_WITH_CUDA
-Target target = common::DefaultNVGPUTarget();
+Target target = cinn::common::DefaultNVGPUTarget();
#else
-Target target = common::DefaultHostTarget();
+Target target = cinn::common::DefaultHostTarget();
#endif

ir::Expr M(32);
@@ -102,9 +102,9 @@ TEST(AnalyzeIr, AnalyzeScheduleBlockReadWriteBuffer_SimpleAssign) {
TEST(AnalyzeIr, AnalyzeScheduleBlockReadWriteBuffer_AddDiffShape) {
Context::Global().ResetNameId();
#ifdef CINN_WITH_CUDA
-Target target = common::DefaultNVGPUTarget();
+Target target = cinn::common::DefaultNVGPUTarget();
#else
-Target target = common::DefaultHostTarget();
+Target target = cinn::common::DefaultHostTarget();
#endif

ir::Expr M(32);
@@ -158,9 +158,9 @@ TEST(AnalyzeIr, AnalyzeScheduleBlockReadWriteBuffer_AddDiffShape) {
TEST(AnalyzeIr, ContainsNodeType) {
Context::Global().ResetNameId();
#ifdef CINN_WITH_CUDA
-Target target = common::DefaultNVGPUTarget();
+Target target = cinn::common::DefaultNVGPUTarget();
#else
-Target target = common::DefaultHostTarget();
+Target target = cinn::common::DefaultHostTarget();
#endif

ir::Expr M(32);
4 changes: 2 additions & 2 deletions paddle/cinn/auto_schedule/auto_tuner.cc
@@ -38,7 +38,7 @@
namespace cinn {
namespace auto_schedule {

-AutoTuner::AutoTuner(const common::Target& target,
+AutoTuner::AutoTuner(const cinn::common::Target& target,
hlir::framework::Graph* graph)
: target_(target), graph_(graph) {}

@@ -58,7 +58,7 @@ void AutoTuner::Initialize(const Config& config,
tasks_ = task_creator.CreateTuneTaskOpLevel(graph_);

const auto& dtype_dict =
-graph_->GetAttrs<absl::flat_hash_map<std::string, common::Type>>(
+graph_->GetAttrs<absl::flat_hash_map<std::string, cinn::common::Type>>(
"inferdtype");
const auto& shape_dict = graph_->GetAttrs<
absl::flat_hash_map<std::string, hlir::framework::shape_t>>("infershape");
4 changes: 2 additions & 2 deletions paddle/cinn/auto_schedule/auto_tuner.h
@@ -46,7 +46,7 @@ class AutoTuner {
DatabaseConfig database_config;
};

-AutoTuner(const common::Target& target, hlir::framework::Graph* graph);
+AutoTuner(const cinn::common::Target& target, hlir::framework::Graph* graph);

// Initialize tuner with specific config and auxiliary objects.
void Initialize(const Config& config,
@@ -56,7 +56,7 @@
TuningResult Tune(const TuningOptions& options);

private:
-const common::Target& target_;
+const cinn::common::Target& target_;
hlir::framework::Graph* graph_;
std::unique_ptr<hlir::framework::OpLowerer<GroupPtr>> op_lowerer_;

4 changes: 2 additions & 2 deletions paddle/cinn/auto_schedule/auto_tuner_test.cc
@@ -48,9 +48,9 @@ using ::cinn::hlir::framework::Scope;
class TestAutoTuner : public ::testing::Test {
public:
#ifdef CINN_WITH_CUDA
-Target target = common::DefaultNVGPUTarget();
+Target target = cinn::common::DefaultNVGPUTarget();
#else
-Target target = common::DefaultHostTarget();
+Target target = cinn::common::DefaultHostTarget();
#endif

std::shared_ptr<Graph> graph;
6 changes: 3 additions & 3 deletions paddle/cinn/auto_schedule/cost_model/expr_cost_model.cc
@@ -29,7 +29,7 @@ namespace cinn {
namespace auto_schedule {

float ExprCostModel::Predict(const ir::ModuleExpr& sample,
-const common::Target& target) const {
+const cinn::common::Target& target) const {
if (trained_times_.load() == 0) {
return SearchState::NOT_INIT_COST;
}
@@ -42,7 +42,7 @@ float ExprCostModel::Predict(const ir::ModuleExpr& sample,

void ExprCostModel::Train(const std::vector<const ir::ModuleExpr*>& samples,
const std::vector<float>& labels,
-const common::Target& target) {
+const cinn::common::Target& target) {
trained_times_.store(1);
size_t total_size = samples.size();
CHECK_EQ(total_size, labels.size())
@@ -60,7 +60,7 @@ void ExprCostModel::Train(const std::vector<const ir::ModuleExpr*>& samples,

void ExprCostModel::Update(const std::vector<const ir::ModuleExpr*>& samples,
const std::vector<float>& labels,
-const common::Target& target) {
+const cinn::common::Target& target) {
++trained_times_;
size_t total_size = samples.size();
CHECK_EQ(total_size, labels.size())
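
Reviewer note: beyond the cinn::common::Target renames, the context in this file shows the cost model's initialization guard: Predict returns a sentinel until the model has been trained at least once, and an atomic counter separates the first Train from later Update calls. A stripped-down sketch of that pattern — the class name, sentinel value, and stubbed-out prediction are assumptions for illustration, not Paddle's actual implementation:

    #include <atomic>

    // Illustrative stand-in for ExprCostModel's trained-state bookkeeping.
    class CostModelSketch {
     public:
      float Predict() const {
        if (trained_times_.load() == 0) {
          return kNotInitCost;  // sentinel, like SearchState::NOT_INIT_COST above
        }
        return 0.0f;  // real feature extraction + model inference elided
      }
      void Train() { trained_times_.store(1); }  // a full (re)train resets the count
      void Update() { ++trained_times_; }        // incremental updates accumulate
     private:
      static constexpr float kNotInitCost = -1.0f;  // assumed sentinel value
      std::atomic<int> trained_times_{0};
    };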
6 changes: 3 additions & 3 deletions paddle/cinn/auto_schedule/cost_model/expr_cost_model.h
@@ -30,13 +30,13 @@ namespace auto_schedule {
class ExprCostModel : public XgbCostModel {
public:
virtual float Predict(const ir::ModuleExpr& sample,
-const common::Target& target) const;
+const cinn::common::Target& target) const;
void Train(const std::vector<const ir::ModuleExpr*>& samples,
const std::vector<float>& labels,
-const common::Target& target);
+const cinn::common::Target& target);
void Update(const std::vector<const ir::ModuleExpr*>& samples,
const std::vector<float>& labels,
-const common::Target& target);
+const cinn::common::Target& target);

private:
std::atomic<int> trained_times_{0};
6 changes: 3 additions & 3 deletions paddle/cinn/auto_schedule/cost_model/feature.cc
@@ -37,12 +37,12 @@ namespace cinn {
namespace auto_schedule {

Feature::Feature()
-: target_(common::UnkTarget()),
+: target_(cinn::common::UnkTarget()),
stack_encoded_feature_(1), // initialize a LoopBlockFeature as root block
current_loop_block_index_(0),
parent_indices_(1, -1) {}

-Feature::Feature(const common::Target& target)
+Feature::Feature(const cinn::common::Target& target)
: target_(target),
stack_encoded_feature_(1), // initialize a LoopBlockFeature as root block
current_loop_block_index_(0),
@@ -52,7 +52,7 @@ std::vector<float> Feature::ToFixedSizeVector() {
std::vector<float> ret(LoopBlockFeature::kTotalSize + 1,
0); // LoopBlockFeature::kTotalSize plus 1 for target

-if (target_ == common::DefaultNVGPUTarget()) {
+if (target_ == cinn::common::DefaultNVGPUTarget()) {
ret[0] = 1;
} // else 0 for other cases

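
Reviewer note: for context, ToFixedSizeVector above reserves one slot beyond LoopBlockFeature::kTotalSize and one-hot-encodes the target in slot 0: 1 for the NVGPU target, 0 otherwise. A minimal sketch with simplified stand-in types (the enum and the size constant are assumptions, not CINN's definitions):

    #include <vector>

    enum class Target { Host, NVGPU };  // stand-in for cinn::common::Target
    constexpr int kTotalSize = 16;      // stand-in for LoopBlockFeature::kTotalSize

    std::vector<float> ToFixedSizeVector(Target target) {
      std::vector<float> ret(kTotalSize + 1, 0.0f);  // plus one slot for the target
      if (target == Target::NVGPU) {
        ret[0] = 1.0f;  // one-hot flag; stays 0 for host and other targets
      }
      return ret;
    }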
4 changes: 2 additions & 2 deletions paddle/cinn/auto_schedule/cost_model/feature.h
@@ -134,7 +134,7 @@ class Feature {
public:
Feature();

-explicit Feature(const common::Target& target);
+explicit Feature(const cinn::common::Target& target);

// Convert the various-length loop block features to fixed-size vector
std::vector<float> ToFixedSizeVector();
@@ -182,7 +182,7 @@ class Feature {
int current_loop_block_index_;
std::vector<int> parent_indices_;

-common::Target target_;
+cinn::common::Target target_;
};

} // namespace auto_schedule
16 changes: 9 additions & 7 deletions paddle/cinn/auto_schedule/cost_model/feature_extractor.cc
@@ -50,7 +50,7 @@ void FeatureExtractor::Visit(const Expr *x) {
}

Feature FeatureExtractor::Extract(const ir::ModuleExpr &mod_expr,
-const common::Target &target) {
+const cinn::common::Target &target) {
feature_ = Feature(target);
for (const ir::Expr &e : mod_expr.GetExprs()) {
Visit(&e);
@@ -91,8 +91,9 @@ NotVisitExprFields(_Tensor_)

#define VisitForDtypePattern(NodeType, member) \
void FeatureExtractor::Visit(const NodeType *x) { \
-if (x->type() == common::F32() || x->type() == common::F16() || \
-x->type() == common::F64()) { \
+if (x->type() == cinn::common::F32() || \
+x->type() == cinn::common::F16() || \
+x->type() == cinn::common::F64()) { \
feature_.CurrentLoopBlock().float_##member += x->type().lanes(); \
} else { \
feature_.CurrentLoopBlock().int_##member += x->type().lanes(); \
@@ -125,8 +126,9 @@ VisitForDtypePattern(Let, other_call);

#define VisitForMultiOperandsDtypePattern(NodeType, member) \
void FeatureExtractor::Visit(const NodeType *x) { \
-if (x->type() == common::F32() || x->type() == common::F16() || \
-x->type() == common::F64()) { \
+if (x->type() == cinn::common::F32() || \
+x->type() == cinn::common::F16() || \
+x->type() == cinn::common::F64()) { \
feature_.CurrentLoopBlock().float_##member += \
(x->operands().size() - 1); \
} else { \
@@ -231,8 +233,8 @@ void FeatureExtractor::Visit(const PolyFor *x) {
/* Visit for Reduce and Broadcast */

void FeatureExtractor::Visit(const Reduce *x) {
-if (x->type() == common::F32() || x->type() == common::F16() ||
-x->type() == common::F64()) {
+if (x->type() == cinn::common::F32() || x->type() == cinn::common::F16() ||
+x->type() == cinn::common::F64()) {
switch (x->reduce_type) {
case Reduce::ReduceType::kSum:
feature_.CurrentLoopBlock().float_reduce_sum_or_sub +=
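
Reviewer note: the VisitForDtypePattern and VisitForMultiOperandsDtypePattern macros above both route a lane (or operand) count into either a float counter or an int counter based on the node's dtype. A macro-free sketch of that dispatch, using simplified stand-in types (the struct, enum, and function names here are illustrative, not CINN's):

    // Macro-free restatement of the dtype dispatch used by the macros above.
    struct LoopBlockFeature {
      int float_mul_op = 0;  // stands in for a float_##member counter
      int int_mul_op = 0;    // stands in for an int_##member counter
    };

    enum class Dtype { F16, F32, F64, I32, I64 };

    void CountOp(LoopBlockFeature& feat, Dtype type, int lanes) {
      // Mirrors: if (x->type() == F32 || F16 || F64) float_##member += lanes;
      if (type == Dtype::F16 || type == Dtype::F32 || type == Dtype::F64) {
        feat.float_mul_op += lanes;
      } else {
        feat.int_mul_op += lanes;  // all non-float dtypes count as int
      }
    }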