ShareDataWith and ShareBufferWith are prohibited in OP(English Version)
Contents
- Section 1 Background
- Section 2 Modification of ShareDataWith or ShareBufferWith
- Section 3 Relevant Description of CI
Additional note: during implementation, some specifications may not have been considered yet and will be continuously supplemented and improved. Your feedback is welcome.
Section 1 Background
At present, some RDs (research and development engineers) call ShareDataWith on an operator's input and output (Output = ShareDataWith(Input)). This operation may cause the following errors:
- This operation is equivalent to creating a hidden edge in the operator graph that links the input and the output. Such a hidden edge is hard to express in graph analysis and leads to errors in graph optimization.
- This operation is equivalent to performing an inplace operation inside the Op, which may cause problems when memory is released.
The framework automatically checks whether each Op can be executed inplace. Because this decision is made by the framework, calling ShareDataWith or ShareBufferWith on an Op's input/output is not allowed in internal Ops.
Section 2 Modification of ShareDataWith or ShareBufferWith
ShareDataWith is used in some op files, such as lod_reset_op.h. Only the code related to ShareDataWith is shown here:
```cpp
template <typename DeviceContext, typename T>
class LoDResetGradKernel : public framework::OpKernel<T> {
 public:
  void Compute(const framework::ExecutionContext& ctx) const {
    auto* d_out = ctx.Input<framework::Tensor>(framework::GradVarName("Out"));
    auto* d_x = ctx.Output<framework::Tensor>(framework::GradVarName("X"));
    d_x->ShareDataWith(*d_out);
  }
};
```
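In this kernel, d_x->ShareDataWith(*d_out) makes the gradient output alias the gradient input's buffer, which is exactly the hidden-edge / implicit-inplace pattern described in the background section.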
Since ShareDataWith and ShareBufferWith are not allowed on an Op's input/output in internal Ops, you can use framework::TensorCopy as a replacement. The code above can be changed as follows:
```cpp
template <typename DeviceContext, typename T>
class LoDResetGradKernel : public framework::OpKernel<T> {
 public:
  void Compute(const framework::ExecutionContext& ctx) const {
    auto* d_out = ctx.Input<framework::Tensor>(framework::GradVarName("Out"));
    auto* d_x = ctx.Output<framework::Tensor>(framework::GradVarName("X"));
    framework::TensorCopy(*d_out, d_out->place(), d_x);
  }
};
```
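With this change, d_x gets its own storage and the contents of d_out are copied into it, so the gradient output no longer aliases the gradient input and the implicit inplace dependency disappears. The extra copy has a small runtime cost, which is the price of keeping the operator graph explicit.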
Section 3 Relevant Description of CI
At present, this specification is checked in PR_CI_CPU_Py2. The check fails if ShareDataWith or ShareBufferWith is used in a modified op file, and an error message similar to the following appears in the build log:
```
**********************
Using ShareDataWith or ShareBufferWith is not recommended. You must have one RD's (zhhsplendid (Recommend), sneaxiy or luotao1 or lanxianghit) approval to use these methods. For more information, please refer to https://github.com/PaddlePaddle/Paddle/wiki/ShareDataWith-is-prohibited-in-OP. The error lines are as follows:
paddle/fluid/operators/fill_op.h
+ tensor.ShareDataWith(out)
These are 1 approval errors.
**********************
```
Please modify the relevant code according to the error message, so that Tensor::ShareDataWith and Tensor::ShareBufferWith are not used in internal Ops. If you still want to merge the pull request (PR) with these methods, please contact one of the relevant approvers (the list of approvers is in the CI build log); at least one approval is required.
If you have any problems, please don't hesitate to contact @guofei.