[IR]add kernel dialect #54428

Merged (4 commits) on Jun 8, 2023
62 changes: 62 additions & 0 deletions paddle/fluid/ir/dialect/pd_kernel_dialect.cc
@@ -0,0 +1,62 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
Contributor:
Should pd_kernel_dialect sit in the same directory as the existing pd_dialect, or should they be split into two directories?

//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/fluid/ir/dialect/pd_kernel_dialect.h"
#include "paddle/fluid/ir/dialect/pd_attribute.h"
#include "paddle/fluid/ir/dialect/pd_kernel_op.h"
// NOTE(zhangbo9674): File pd_op.h is generated by op_gen.py, see details in
// paddle/fluid/ir/dialect/CMakeLists.txt.
#include "paddle/fluid/framework/convert_utils.h"
#include "paddle/fluid/framework/data_type.h"
#include "paddle/fluid/ir/dialect/pd_kernel_type.h"
#include "paddle/fluid/ir/dialect/pd_kernel_type_storage.h"
#include "paddle/fluid/ir/dialect/pd_op.h"
#include "paddle/fluid/ir/dialect/utils.h"
#include "paddle/ir/core/dialect_interface.h"
#include "paddle/phi/core/dense_tensor.h"

namespace paddle {
namespace dialect {

PaddleKernelDialect::PaddleKernelDialect(ir::IrContext *context)
: ir::Dialect(name(), context, ir::TypeId::get<PaddleKernelDialect>()) {
initialize();
}

void PaddleKernelDialect::initialize() {
RegisterTypes<paddle::dialect::AllocatedDenseTensorType>();
RegisterOps<dialect::PhiKernelOp>();

// RegisterAttributes<paddle::dialect::IntArrayAttribute,
// paddle::dialect::DataTypeAttribute,
// paddle::dialect::PlaceAttribute,
// paddle::dialect::DataLayoutAttribute>();
}

void PaddleKernelDialect::PrintType(ir::Type type, std::ostream &os) {
AllocatedDenseTensorType tensor_type =
type.dyn_cast<AllocatedDenseTensorType>();

os << phi::AllocationTypeStr(tensor_type.place().GetType()) << "_";
os << "tensor<";
for (auto d : phi::vectorize(tensor_type.dims())) {
os << d;
os << "x";
}
tensor_type.dtype().Print(os);
os << ">";
}

} // namespace dialect
} // namespace paddle
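As a review aside, the format PrintType emits (place string, dims each followed by "x", dtype last inside the angle brackets) can be sketched standalone. In this sketch the place string and dtype printer are replaced with plain strings, so `PrintAllocatedTensorType` and its arguments are illustrative stand-ins, not the actual Paddle API:

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical stand-in for PaddleKernelDialect::PrintType: the real code
// reads place/dims/dtype from an AllocatedDenseTensorType; here they are
// passed in directly as plain values.
std::string PrintAllocatedTensorType(const std::string &place_str,
                                     const std::vector<int64_t> &dims,
                                     const std::string &dtype_str) {
  std::ostringstream os;
  os << place_str << "_";   // e.g. "cpu" or "gpu", as AllocationTypeStr would
  os << "tensor<";
  for (auto d : dims) {     // each dim followed by "x", matching the diff's loop
    os << d << "x";
  }
  os << dtype_str << ">";   // dtype printed last, inside the brackets
  return os.str();
}
```

For example, a CPU-placed 2x3 float tensor would print as `cpu_tensor<2x3xf32>`.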
37 changes: 37 additions & 0 deletions paddle/fluid/ir/dialect/pd_kernel_dialect.h
@@ -0,0 +1,37 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include "paddle/fluid/framework/variable.h"
#include "paddle/ir/core/dialect.h"
#include "paddle/ir/core/parameter.h"

namespace paddle {
namespace dialect {

class PaddleKernelDialect : public ir::Dialect {
Contributor:
I see that some of the data structures below are named with a Phi prefix and others with a Paddle prefix. What is the principle for distinguishing them, or would it also be fine to unify on a single word?

Collaborator (author):
phi is its own namespace that does not live under paddle; this can indeed be unified so that one word is used throughout.

public:
explicit PaddleKernelDialect(ir::IrContext* context);

static const char* name() { return "pd_kernel"; }

void PrintType(ir::Type type, std::ostream& os);

private:
void initialize();
};

} // namespace dialect
} // namespace paddle
35 changes: 35 additions & 0 deletions paddle/fluid/ir/dialect/pd_kernel_op.cc
@@ -0,0 +1,35 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/fluid/ir/dialect/pd_kernel_op.h"

namespace paddle {
namespace dialect {

const char *PhiKernelOp::attributes_name[attributes_num] = {
"base_op", "infermeta_fn", "kernel_fn"};

void PhiKernelOp::Verify(const std::vector<ir::OpResult> &inputs,
const std::vector<ir::Type> &outputs,
const ir::AttributeMap &attributes) {
VLOG(4) << "Verifying inputs, outputs and attributes for: SetParameterOp.";
Contributor:
Suggested change:
- VLOG(4) << "Verifying inputs, outputs and attributes for: SetParameterOp.";
+ VLOG(4) << "Verifying inputs, outputs and attributes for: PhiKernelOp.";
// Verify inputs type:

// Verify if attributes contain attribute name in attributes_name:
// if (!attributes.at("parameter_name").isa<StrAttribute>()) {
// throw("Type of attribute: parameter_name is not right.");
Contributor:
Does this default to performing no Verify at all? Is that the intended behavior?

Collaborator (author):
This is still under development; as data is added internally, the verify logic will be completed in one pass.

}

} // namespace dialect
} // namespace paddle
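The commented-out checks above hint at the eventual shape of Verify. A minimal standalone sketch of attribute-presence checking, with the AttributeMap approximated as a map of strings and the required names taken from attributes_name (the function and map type are invented for illustration):

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical approximation of the pending Verify logic: confirm that every
// required attribute name from attributes_name is present in the map.
using FakeAttributeMap = std::map<std::string, std::string>;

void VerifyRequiredAttributes(const FakeAttributeMap &attributes) {
  const char *required[] = {"base_op", "infermeta_fn", "kernel_fn"};
  for (const char *name : required) {
    if (attributes.count(name) == 0) {
      throw std::runtime_error(std::string("missing attribute: ") + name);
    }
  }
}
```

A fuller Verify would also check each attribute's type (as the commented-out `isa<StrAttribute>` line suggests), not just its presence.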
35 changes: 35 additions & 0 deletions paddle/fluid/ir/dialect/pd_kernel_op.h
@@ -0,0 +1,35 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include "paddle/ir/core/builder.h"
#include "paddle/ir/core/op_base.h"

namespace paddle {
namespace dialect {

class PhiKernelOp : public ir::Op<PhiKernelOp> {
public:
using Op::Op;
static const char *name() { return "phi.kernel"; }
static constexpr uint32_t attributes_num = 3;
static const char *attributes_name[attributes_num];
static void Verify(const std::vector<ir::OpResult> &inputs,
const std::vector<ir::Type> &outputs,
const ir::AttributeMap &attributes);
};

} // namespace dialect
} // namespace paddle
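`ir::Op<PhiKernelOp>` is a CRTP base: the base is parameterized on the derived op so it can read the derived class's static metadata (name, attributes_num). A minimal sketch of that pattern — `OpBase` and `DemoKernelOp` are invented for illustration and are not the actual ir::Op machinery:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Invented CRTP base for illustration: reads static members off ConcreteOp.
template <typename ConcreteOp>
struct OpBase {
  static const char *op_name() { return ConcreteOp::name(); }
  static uint32_t num_attributes() { return ConcreteOp::attributes_num; }
};

// Mirrors the shape of PhiKernelOp's static interface.
struct DemoKernelOp : OpBase<DemoKernelOp> {
  static const char *name() { return "phi.kernel"; }
  static constexpr uint32_t attributes_num = 3;
};
```

The dialect's RegisterOps can then query this static interface at registration time without instantiating the op.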
45 changes: 45 additions & 0 deletions paddle/fluid/ir/dialect/pd_kernel_type.cc
@@ -0,0 +1,45 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/fluid/ir/dialect/pd_kernel_type.h"

namespace paddle {
namespace dialect {

const phi::Place& AllocatedDenseTensorType::place() const {
return storage()->place_;
}

const ir::Type& AllocatedDenseTensorType::dtype() const {
return storage()->dense_tensor_type_.dtype();
}

const phi::DDim& AllocatedDenseTensorType::dims() const {
return storage()->dense_tensor_type_.dims();
}

const phi::DataLayout& AllocatedDenseTensorType::data_layout() const {
return storage()->dense_tensor_type_.data_layout();
}

const phi::LoD& AllocatedDenseTensorType::lod() const {
return storage()->dense_tensor_type_.lod();
}

const size_t& AllocatedDenseTensorType::offset() const {
return storage()->dense_tensor_type_.offset();
}

} // namespace dialect
} // namespace paddle
68 changes: 68 additions & 0 deletions paddle/fluid/ir/dialect/pd_kernel_type.h
@@ -0,0 +1,68 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include "paddle/fluid/ir/dialect/pd_kernel_type_storage.h"
#include "paddle/fluid/ir/dialect/pd_type.h"
#include "paddle/ir/core/type.h"

namespace paddle {
namespace dialect {
///
/// \brief Define built-in parametric types.
///
class AllocatedDenseTensorType : public ir::Type {
public:
using Type::Type;

DECLARE_TYPE_UTILITY_FUNCTOR(AllocatedDenseTensorType,
AllocatedDenseTensorTypeStorage);

static AllocatedDenseTensorType get(ir::IrContext *ctx,
phi::Place place,
Contributor:
Suggested change:
- phi::Place place,
+ const phi::Place& place,

dialect::DenseTensorType type) {
return ir::TypeManager::template get<AllocatedDenseTensorType>(
ctx, place, type);
}

static AllocatedDenseTensorType get(ir::IrContext *ctx,
phi::Place place,
ir::Type dtype,
phi::DDim dims,
Contributor:
Same as above — some of these parameters should probably be passed by const reference? Also, is there a preferred ordering for the parameters? This looks like a low-level data Type.

phi::DataLayout layout,
phi::LoD lod,
size_t offset) {
dialect::DenseTensorType dense_tensor_type =
dialect::DenseTensorType::get(ctx, dtype, dims, layout, lod, offset);

return ir::TypeManager::template get<AllocatedDenseTensorType>(
ctx, place, dense_tensor_type);
}

const phi::Place &place() const;

const ir::Type &dtype() const;

const phi::DDim &dims() const;

const phi::DataLayout &data_layout() const;

const phi::LoD &lod() const;

const size_t &offset() const;
};

} // namespace dialect
} // namespace paddle
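The `TypeManager::get` calls above intern parametric types: equal parameters must yield the same type instance, so equality can be checked by identity. A standalone sketch of that uniquing idea, with the key simplified to a (place, dtype) string pair — the manager and storage types here are invented for illustration:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <utility>

// Simplified stand-in for a TypeStorage: just the parameters themselves.
struct FakeTensorTypeStorage {
  std::string place;
  std::string dtype;
};

// Invented uniquing manager: one storage instance per distinct key, so
// pointer equality doubles as type equality.
class FakeTypeManager {
 public:
  const FakeTensorTypeStorage *get(const std::string &place,
                                   const std::string &dtype) {
    auto key = std::make_pair(place, dtype);
    auto it = pool_.find(key);
    if (it == pool_.end()) {
      it = pool_.emplace(key, std::make_unique<FakeTensorTypeStorage>(
                                  FakeTensorTypeStorage{place, dtype}))
               .first;
    }
    return it->second.get();
  }

 private:
  std::map<std::pair<std::string, std::string>,
           std::unique_ptr<FakeTensorTypeStorage>>
      pool_;
};
```

This is why the storage class below must supply ParamKey, Construct, HashValue, and operator== — they are exactly the hooks such a pool needs to find or create an entry.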
92 changes: 92 additions & 0 deletions paddle/fluid/ir/dialect/pd_kernel_type_storage.h
@@ -0,0 +1,92 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include <type_traits>

#include "paddle/fluid/ir/dialect/pd_type.h"
#include "paddle/ir/core/type.h"
#include "paddle/ir/core/utils.h"
#include "paddle/phi/core/tensor_meta.h"

namespace paddle {
namespace dialect {
///
/// \brief Define Parametric TypeStorage for AllocatedDenseTensorType.
///
/// NOTE(zhangbo9674): The derived TypeStorage class needs to implement the
/// following methods: (1)declare ParamKey, (2)define Construction method,
/// (3)define HashValue method, (4)overload operator==.
///
struct AllocatedDenseTensorTypeStorage : public ir::TypeStorage {
using Place = phi::Place;
///
/// \brief Declare ParamKey according to parameter type.
///
using ParamKey = std::tuple<phi::Place, dialect::DenseTensorType>;

AllocatedDenseTensorTypeStorage(phi::Place place,
dialect::DenseTensorType type)
: place_(place), dense_tensor_type_(type) {}

///
/// \brief Each derived TypeStorage must define a Construct method, which
/// StorageManager uses to construct a derived TypeStorage.
///
static AllocatedDenseTensorTypeStorage *Construct(ParamKey key) {
return new AllocatedDenseTensorTypeStorage(std::get<0>(key),
std::get<1>(key));
}

///
/// \brief Each derived TypeStorage must provide a HashValue method.
///
static std::size_t HashValue(const ParamKey &key) {
std::size_t hash_value = 0;
// hash place
hash_value = ir::hash_combine(hash_value, std::get<0>(key).HashValue());

// hash dtype
auto dense_tensor_type = std::get<1>(key);
hash_value = ir::hash_combine(hash_value,
dialect::DenseTensorTypeStorage::HashValue(
dialect::DenseTensorTypeStorage::ParamKey(
dense_tensor_type.dtype(),
dense_tensor_type.dims(),
dense_tensor_type.data_layout(),
dense_tensor_type.lod(),
dense_tensor_type.offset())));
return hash_value;
}

///
/// \brief Each derived TypeStorage needs to overload operator==.
///
bool operator==(const ParamKey &key) const {
return ParamKey(place_, dense_tensor_type_) == key;
}

ParamKey GetAsKey() const { return ParamKey(place_, dense_tensor_type_); }

///
/// \brief AllocatedDenseTensorTypeStorage includes two parameters: place
/// and DenseTensorType.
///
phi::Place place_;
dialect::DenseTensorType dense_tensor_type_;
};

} // namespace dialect
} // namespace paddle
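HashValue above folds the place hash and the DenseTensorType parameter hash into a single value via `ir::hash_combine`. A standalone sketch of how such a key hash might be assembled — the combiner below uses the common boost-style constant, which is an assumption about ir::hash_combine's actual definition, and the key is simplified to (place, dtype) strings:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Boost-style combiner (assumed to match the spirit of ir::hash_combine).
inline std::size_t HashCombine(std::size_t seed, std::size_t value) {
  return seed ^ (value + 0x9e3779b9 + (seed << 6) + (seed >> 2));
}

// Hash a simplified (place, dtype) key the way HashValue does:
// start from 0, then fold each component in turn.
inline std::size_t HashKey(const std::string &place,
                           const std::string &dtype) {
  std::size_t h = 0;
  h = HashCombine(h, std::hash<std::string>{}(place));
  h = HashCombine(h, std::hash<std::string>{}(dtype));
  return h;
}
```

Because the fold is order-dependent, equal keys always hash equal, which is all the storage pool requires; distinct keys merely hash unequal with high probability.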
9 changes: 9 additions & 0 deletions paddle/fluid/ir/dialect/pd_type_storage.h
@@ -112,6 +112,15 @@ struct DenseTensorTypeStorage : public ir::TypeStorage {
return ParamKey(dtype_, dims_, layout_, lod_, offset_) == key;
}

bool operator==(const DenseTensorTypeStorage &storage) const {
Contributor:
Is this interface actually needed anywhere?

Collaborator (author):
This existed because AllocatedDenseTensorTypeStorage depended on DenseTensorTypeStorage; I have deleted it along with that dependency.

return ParamKey(dtype_, dims_, layout_, lod_, offset_) ==
ParamKey(storage.dtype_,
storage.dims_,
storage.layout_,
storage.lod_,
storage.offset_);
}

ParamKey GetAsKey() const {
return ParamKey(dtype_, dims_, layout_, lod_, offset_);
}