
Convolution operator #4042

Merged: 22 commits, Sep 21, 2017

Conversation

hedaoyuan
Contributor

No description provided.

from paddle.v2.framework.op import Operator


class TestConv2dOp(unittest.TestCase):
Member

The operator Python test framework has been refactored; please merge the develop branch and update the unit test accordingly.

Contributor Author

Done.

using Tensor = framework::Tensor;

template <typename Place, typename T>
class GemmConvKernel : public framework::OpKernel {
Contributor

We'll write the 3D convolution later. Should we distinguish the names? GemmConvKernel->GemmConv2DKernel, GemmConvGradKernel -> GemmConv2dGradKernel, gemm_conv_op.h->gemm_conv2d_op.h, conv_op.cu->conv2d_op.cu

Contributor Author

Done.

namespace paddle {
namespace operators {

int outputSize(int input_size, int filter_size, int padding, int stride) {
Contributor

This function is also used in conv3d, pooling2d, pooling3d. Should it be written in one place?

Contributor Author

I think this can be fixed in the next PR. At the moment it is not clear where the best place for this function is.
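
For reference, the helper computes the standard convolution output size. A minimal sketch of what a shared version could look like (the name and the exact formula below are assumptions based on the usual definition, not necessarily the PR's code):

// Hypothetical shared helper: output size of a convolution/pooling window,
// assuming the usual formula with symmetric padding and integer division.
inline int OutputSize(int input_size, int filter_size, int padding,
                      int stride) {
  return (input_size - filter_size + 2 * padding) / stride + 1;
}
// Example: input 32, filter 3, padding 1, stride 1 -> (32 - 3 + 2) / 1 + 1 = 32.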

REGISTER_OP(conv2d, ops::Conv2DOp, ops::Conv2DOpMaker, conv2d_grad,
ops::Conv2DOpGrad);

REGISTER_OP_CPU_KERNEL(conv2d,
Contributor

The current build system requires that the filename match the registered operator name. Maybe rename them both to conv or conv2d.

Contributor Author

Done.

}
};

class Conv2DOpMaker : public framework::OpProtoAndCheckerMaker {
Contributor

Can we put Conv2DOpMaker and the CPU implementation in a base class like ConvBase, so that the CUDA GEMM implementation and the cuDNN implementation can reuse the code?

Contributor Author

I think we do not need to write a separate Conv2DOpMaker for CudnnConv; CudnnConv can also use the Conv2DOpMaker class.
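
A minimal sketch of that reuse, using the REGISTER_OP macro that already appears in this PR (the conv_cudnn operator name here is hypothetical):

// Hypothetical registration of a cuDNN-backed operator that reuses the same
// Conv2DOpMaker; only the kernel implementation would differ.
REGISTER_OP(conv_cudnn, ops::Conv2DOp, ops::Conv2DOpMaker, conv_cudnn_grad,
            ops::Conv2DOpGrad);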

auto t1 = framework::EigenVector<T>::Flatten(filter_grad);
t1.device(context.GetEigenDevice<Place>()) = t1.constant(static_cast<T>(0));
auto t2 = framework::EigenVector<T>::Flatten(*input_grad);
t2.device(context.GetEigenDevice<Place>()) = t2.constant(static_cast<T>(0));
Contributor

Should the gradient be cleared here? The weight input may be shared between different Ops.

Contributor Author

The weight input may be shared between different Ops.

If the weights are shared, the framework is responsible for merging the two parts of the gradient. The gradient tensor is also cleared in other ops, for example:
https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/lookup_table_op.h#L60

int input_channels = in->dims()[1];
int output_channels = filter->dims()[0];

PADDLE_ENFORCE_EQ(in->dims().size(), 4, "Conv2DOp intput should be 4-D.");
Contributor

intput -> input

Contributor Author

Done.

void InferShape(const framework::InferShapeContext &ctx) const override {
auto in = ctx.Input<Tensor>("Input");
auto filter = ctx.Input<Tensor>("Filter");
auto out = ctx.Output<Tensor>("Output");
Contributor

Output<Tensor> -> Output<LoDTensor>

Contributor Author

Done.
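
Applied to the snippet above, the requested change would look roughly like this (a sketch following the quoted variable names):

auto in = ctx.Input<Tensor>("Input");
auto filter = ctx.Input<Tensor>("Filter");
// The output is fetched as a LoDTensor instead of a plain Tensor.
auto out = ctx.Output<framework::LoDTensor>("Output");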

)DOC");
AddAttr<std::vector<int>>("strides", "strides of convolution operator.");
AddAttr<std::vector<int>>("paddings", "paddings of convolution operator.");
AddAttr<int>(
Contributor

Put the attr before doc.

Contributor Author

Done.

auto in = ctx.Input<Tensor>("Input");
auto filter = ctx.Input<Tensor>("Filter");
auto d_in = ctx.Output<Tensor>(framework::GradVarName("Input"));
auto d_filter = ctx.Output<Tensor>(framework::GradVarName("Filter"));
Contributor

Output<framework::LoDTensor>

Contributor Author

Done.

"when group=2, the first half of the filters are only connected to the "
"first half of the input channels, and the second half only connected "
"to the second half.")
.SetDefault(1);
Contributor

Put AddAttr before AddComment(R"DOC )DOC").

Contributor Author

Done.

context.Input<Tensor>(framework::GradVarName("Output"));
Tensor* input_grad =
context.Output<Tensor>(framework::GradVarName("Input"));
Tensor* filter_grad_ =
Contributor

filter_grad_ -> filter_grad

Contributor Author

This does not work; a filter_grad variable is defined later on.

Contributor Author

Done.

auto t1 = framework::EigenVector<T>::Flatten(filter_grad);
t1.device(context.GetEigenDevice<Place>()) = t1.constant(static_cast<T>(0));
auto t2 = framework::EigenVector<T>::Flatten(*input_grad);
t2.device(context.GetEigenDevice<Place>()) = t2.constant(static_cast<T>(0));
Contributor

Why do we need to zero the memory for input_grad?

Contributor Author

I remember discussing this problem with you, and your suggestion was that the operator should use an = operation rather than a += operation.
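
In other words, each gradient buffer is written from scratch by this operator: the buffer is zeroed once, and the per-group GEMM then accumulates into its slice (beta = 1.0 in the matmul quoted further below). A rough sketch of the pattern (illustrative, not the exact PR code):

// Zero the whole input_grad buffer once ...
auto t = framework::EigenVector<T>::Flatten(*input_grad);
t.device(context.GetEigenDevice<Place>()) = t.constant(static_cast<T>(0));
// ... so that each per-group GEMM can safely accumulate into its slice:
//   math::matmul<Place, T>(..., T(1.0), &grad_slice, T(1.0), device_context);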

Tensor filter_grad_slice =
filter_grad.Slice<T>(g * out_step, (g + 1) * out_step);
math::matmul<Place, T>(out_grad_slice, false, col_matrix, true, T(1.0),
&filter_grad_slice, T(1.0), device_context);
Contributor
@qingqing01 commented Sep 18, 2017


1. The backward computation of gradient w.r.t. input data differs from the usual implementations (old Paddle, Caffe); should this be explained?

2. Should the computation of `gradient w.r.t. weight` and `gradient w.r.t. input data` be factored out into two functions?

3. We need to consider the case where `gradient w.r.t. weight` or `gradient w.r.t. input data` is not computed, similar to mul_op: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/mul_op.h#L76

Contributor Author

  1. The backward computation of gradient w.r.t. input data differs from the usual implementations (old Paddle, Caffe); should this be explained?

What does this difference refer to?

  2. Should the computation of gradient w.r.t. weight and gradient w.r.t. input data be factored out into two functions?

For now, factoring them out into two functions does not seem useful; Caffe2 does not split them into two functions either.

Contributor
@qingqing01 commented Sep 18, 2017

What does this difference refer to?

I misread the comment above; please ignore it :)

Contributor Author

  1. ... Done.
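
For point 3, the mul_op-style check amounts to computing each gradient only when its output variable exists. A rough sketch (not the exact PR code):

// Only compute a gradient when the corresponding output variable is present.
Tensor* input_grad = context.Output<Tensor>(framework::GradVarName("Input"));
Tensor* filter_grad = context.Output<Tensor>(framework::GradVarName("Filter"));
if (input_grad != nullptr) {
  // ... compute gradient w.r.t. input data ...
}
if (filter_grad != nullptr) {
  // ... compute gradient w.r.t. weight ...
}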

frowid][fcolid]
output_value += input_value * filter_value
output[batchid][outchannelid][rowid][colid] = output_value
Contributor

Contributor Author

We could give that a try, but it does not seem to support groups.

Contributor Author

np.convolve cannot be used here:
'Returns the discrete, linear convolution of two one-dimensional sequences.'
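
Since np.convolve only handles one-dimensional sequences, the test needs its own naive reference computation, and that reference also has to respect groups. A minimal sketch of such a grouped reference (illustrative only; NCHW layout, no padding and stride 1 are assumptions made to keep it short):

#include <vector>

// Naive grouped 2-D convolution used only to illustrate the reference
// computation: each output channel only sees the input channels of its group.
std::vector<float> NaiveGroupConv2d(const std::vector<float>& input,
                                    const std::vector<float>& filter, int N,
                                    int C, int H, int W, int F, int KH, int KW,
                                    int groups) {
  const int OH = H - KH + 1, OW = W - KW + 1;
  const int Cg = C / groups;  // input channels per group
  const int Fg = F / groups;  // output channels per group
  std::vector<float> output(static_cast<size_t>(N) * F * OH * OW, 0.f);
  for (int n = 0; n < N; ++n)
    for (int f = 0; f < F; ++f) {
      const int g = f / Fg;  // group of this output channel
      for (int oh = 0; oh < OH; ++oh)
        for (int ow = 0; ow < OW; ++ow) {
          float val = 0.f;
          for (int c = 0; c < Cg; ++c)
            for (int kh = 0; kh < KH; ++kh)
              for (int kw = 0; kw < KW; ++kw) {
                const int ic = g * Cg + c;  // actual input channel index
                val += input[((n * C + ic) * H + oh + kh) * W + ow + kw] *
                       filter[((f * Cg + c) * KH + kh) * KW + kw];
              }
          output[((n * F + f) * OH + oh) * OW + ow] = val;
        }
    }
  return output;
}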

        self.check_output()

    def test_check_grad(self):
        self.check_grad(set(['Input', 'Filter']), 'Output')
Contributor

We need a check similar to the one in mul_op, covering the case where Input or Filter does not need a gradient: https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/framework/tests/test_mul_op.py#L49

Contributor Author

Done.

@qingqing01
Contributor

Also, does the convolution operator need to consider adding a bias?

@Xreki
Contributor

Xreki commented Sep 18, 2017

Paddle's Layers generally support multiple inputs and an activation; do the Ops need to support them as well?

@hedaoyuan
Contributor Author

For now we will not support bias, multiple inputs, activation, etc. in GemmConv2DKernel. Like GemmConv2DKernel, there will also be a CudnnConv2DKernel, which will likewise not support bias, multiple inputs, or activation. Later we can consider a higher-level ConvLayer op to support bias, multiple inputs, and so on.

@typhoonzero mentioned this pull request Sep 19, 2017
Contributor
@typhoonzero left a comment

LGTM++

@hedaoyuan merged commit 7a891a3 into PaddlePaddle:develop Sep 21, 2017
@chengduoZH
Contributor

fix #3691
