Add uniform random operator #3293

Merged
merged 10 commits on Aug 8, 2017
13 changes: 7 additions & 6 deletions paddle/framework/CMakeLists.txt
@@ -38,9 +38,10 @@ cc_test(backward_test SRCS backward_test.cc DEPS backward)
cc_library(paddle_pybind SHARED
SRCS pybind.cc
DEPS pybind python backward
fc_op
Collaborator Author:
Use spaces for indentation.

sgd_op
add_op
mean_op
cross_entropy_op
recurrent_op)
fc_op
sgd_op
add_op
mean_op
cross_entropy_op
recurrent_op
uniform_random_op)
1 change: 1 addition & 0 deletions paddle/framework/pybind.cc
@@ -41,6 +41,7 @@ USE_OP(sigmoid);
USE_OP(softmax);
USE_OP(rowwise_add);
USE_OP_WITHOUT_KERNEL(recurrent_op);
USE_OP(uniform_random);
namespace paddle {
namespace framework {
template <typename ClassType>
2 changes: 2 additions & 0 deletions paddle/operators/CMakeLists.txt
@@ -66,3 +66,5 @@ op_library(fc_op
op_library(recurrent_op SRCS recurrent_op.cc rnn/recurrent_op_utils.cc
DEPS op_desc tensor op_registry operator net_op)
cc_test(recurrent_op_test SRCS recurrent_op_test.cc DEPS recurrent_op gtest mul_op add_op)
op_library(uniform_random_op
SRCS uniform_random_op.cc uniform_random_op.cu)
53 changes: 53 additions & 0 deletions paddle/operators/uniform_random_op.cc
@@ -0,0 +1,53 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/operators/uniform_random_op.h"

namespace paddle {
namespace operators {
class RandomOp : public OperatorWithKernel {
Contributor:
RandomOp is too general; maybe renaming it to UniformRandomOp would be better? I have implemented a Gaussian random op.

Collaborator Author:
OK.

Collaborator Author:
Done.

protected:
void InferShape(const InferShapeContext &ctx) const override {
PADDLE_ENFORCE(GetAttr<float>("min") < GetAttr<float>("max"),
"uniform_random's min must less then max");
auto tensor = ctx.Output<Tensor>(0);
Contributor:
Rename auto to auto* for clarity.

Collaborator Author:
Done.
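For reference, a minimal sketch of the requested style change (behavior is unchanged; the pointer type is just made explicit):

auto *tensor = ctx.Output<Tensor>(0);  // auto* makes it obvious that Output returns a pointer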

auto dims = GetAttr<std::vector<int>>("dims");
tensor->Resize(framework::make_ddim(dims));
}
};

class RandomOpMaker : public OpProtoAndCheckerMaker {
public:
RandomOpMaker(OpProto *proto, OpAttrChecker *op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddOutput("Out", "The output tensor of uniform random op");
AddComment(R"DOC(Uniform random operator.

Used to initialize tensor with uniform random generator.
)DOC");
AddAttr<std::vector<int>>("dims", "the dimension of random tensor");
AddAttr<float>("min", "Minimum value of uniform random").SetDefault(-1.0f);
AddAttr<float>("max", "Maximun value of uniform random").SetDefault(1.0f);
AddAttr<int>("seed",
Contributor:
Can we bind the seed to the device context or the environment? In my view, a user sets a random seed once before running an experiment; they will not set a random seed in every random operator call.

Collaborator Author:
I think that logic should be handled at a higher level of Paddle.

In the normal case the user does not need to set a seed: leaving it at zero tells Paddle to generate one from std::random_device, roughly as in the sketch below.
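A minimal sketch of that fallback, for illustration only (not part of this diff); ResolveSeed is a hypothetical helper name:

#include <cstdint>
#include <random>

// Hypothetical helper: a seed attribute of 0 means "let the framework pick a
// seed"; any non-zero value is used as-is so runs stay reproducible.
uint64_t ResolveSeed(int seed_attr) {
  if (seed_attr == 0) {
    std::random_device rd;  // non-deterministic entropy source
    return static_cast<uint64_t>(rd());
  }
  return static_cast<uint64_t>(seed_attr);
}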

"Random seed of uniform random. "
Contributor:
Furthermore, each GPU stream needs a different seed.

Collaborator Author:
The seed for each GPU stream is decided outside the operator; a sketch of one possible scheme follows.
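For illustration only (this logic would live outside the operator and is not part of this PR); StreamSeed and stream_id are hypothetical names:

#include <cstdint>

// Hypothetical helper: give each GPU stream its own seed by striding away
// from a single base seed, so streams never share a generator state.
uint64_t StreamSeed(uint64_t base_seed, int stream_id) {
  return base_seed + static_cast<uint64_t>(stream_id) * 0x9E3779B97F4A7C15ULL;
}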

"0 means generate a seed by system")
.SetDefault(0);
}
};
} // namespace operators
} // namespace paddle

REGISTER_OP(uniform_random, ops::RandomOp, ops::RandomOpMaker);
REGISTER_OP_CPU_KERNEL(uniform_random,
ops::UniformRandomKernel<ops::CPUPlace, float>);
18 changes: 18 additions & 0 deletions paddle/operators/uniform_random_op.cu
@@ -0,0 +1,18 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/operators/uniform_random_op.h"

REGISTER_OP_GPU_KERNEL(uniform_random,
ops::UniformRandomKernel<ops::GPUPlace, float>);
39 changes: 39 additions & 0 deletions paddle/operators/uniform_random_op.h
@@ -0,0 +1,39 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once
#include "paddle/operators/type_alias.h"
namespace paddle {
namespace operators {

template <typename Place, typename T>
class UniformRandomKernel : public OpKernel {
public:
void Compute(const ExecutionContext &context) const override {
auto tensor = context.Output<Tensor>(0);
tensor->mutable_data<T>(context.GetPlace());

auto eigenTensor = EigenVector<T>::Flatten(*tensor);
auto dev = context.GetEigenDevice<Place>();
auto min = context.op_.GetAttr<float>("min");
auto max = context.op_.GetAttr<float>("max");
auto seed = static_cast<uint64_t>(context.op_.GetAttr<int>("seed"));
auto diff = max - min;
Eigen::internal::UniformRandomGenerator<T> gen(seed);
eigenTensor.device(dev) = eigenTensor.random(gen) * diff + min;
}
};

} // namespace operators
} // namespace paddle
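Note: assuming Eigen's internal::UniformRandomGenerator<T> produces samples u in [0, 1), the expression eigenTensor.random(gen) * diff + min maps them to [min, max), so the expected mean is (min + max) / 2. That is the 2.5 the Python test below checks for min = -5 and max = 10.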
3 changes: 2 additions & 1 deletion python/paddle/v2/framework/tests/CMakeLists.txt
@@ -14,4 +14,5 @@ add_python_test(test_framework
test_softmax_op.py
test_rowwise_add_op.py
test_network.py
gradient_checker.py)
gradient_checker.py
test_uniform_random_op.py)
35 changes: 35 additions & 0 deletions python/paddle/v2/framework/tests/test_uniform_random_op.py
@@ -0,0 +1,35 @@
import unittest
from paddle.v2.framework.op import Operator
import paddle.v2.framework.core as core
import numpy


class UniformRandomTest(unittest.TestCase):
def test_uniform_random_cpu(self):
self.uniform_random_test(place=core.CPUPlace())

def test_uniform_random_gpu(self):
if core.is_compile_gpu():
self.uniform_random_test(place=core.GPUPlace(0))

def uniform_random_test(self, place):
scope = core.Scope()
scope.new_var("X").get_tensor()

op = Operator(
"uniform_random",
Out="X",
dims=[1000, 784],
min=-5.0,
max=10.0,
seed=10)

op.infer_shape(scope)
ctx = core.DeviceContext.create(place)
op.run(scope, ctx)
tensor = numpy.array(scope.find_var("X").get_tensor())
self.assertAlmostEqual(tensor.mean(), 2.5, delta=0.1)


if __name__ == '__main__':
unittest.main()