Add uniform random operator #3293
Conversation
It can run on both CPU and GPU. The configurable attributes are:
- min: the minimum value of the uniform distribution
- max: the maximum value of the uniform distribution
- dims: the dimensions of the output tensor
- seed: the random seed. 0 means a new seed is generated each time.
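As a rough illustration (not the PR's verbatim code), a CPU-side fill routine driven by these attributes could look like this; the helper name FillUniform is hypothetical:

```cpp
#include <cstddef>
#include <random>

// Hypothetical helper: fill `size` elements of `data` with values drawn
// uniformly from [min, max), using the given seed.
template <typename T>
void FillUniform(T* data, size_t size, T min, T max, unsigned int seed) {
  std::minstd_rand engine(seed);                     // cheap linear-congruential engine
  std::uniform_real_distribution<T> dist(min, max);  // maps engine output into [min, max)
  for (size_t i = 0; i < size; ++i) {
    data[i] = dist(engine);
  }
}
```

The GPU path uses Thrust instead (see the includes quoted further down).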
Force-pushed from d946e18 to e376bda
@@ -38,9 +38,10 @@ cc_test(backward_test SRCS backward_test.cc DEPS backward)
cc_library(paddle_pybind SHARED
    SRCS pybind.cc
    DEPS pybind python backward
        fc_op
Use spaces for indentation.
Force-pushed from cfcb03b to 8a8f956
#include <thrust/iterator/counting_iterator.h>
#include <thrust/random.h>
#include <thrust/transform.h>
#include "paddle/operators/type_alias.h"
Maybe we need to remove the type_alias header inclusion from now on.
template <typename T>
struct UniformGenerator {
  T min_, max_;
@reyoung found that Eigen's built-in random generator crashes easily, and we cannot pinpoint the reason. Should we put a TODO here?
Not good news: we need to implement more generators in Thrust ourselves, such as random_gamma and random_gaussian. For comparison, TensorFlow provides:
- tf.random_normal
- tf.truncated_normal
- tf.random_uniform
- tf.random_shuffle
- tf.random_crop
- tf.multinomial
- tf.random_gamma
Done. TODO is added.
In Eigen, only the normal and uniform generators are supported, i.e., Eigen is not good for randomizing a tensor.
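For context, the UniformGenerator functor quoted in the diff above could be completed along these lines with Thrust (a sketch assuming thrust::minstd_rand and thrust::uniform_real_distribution; the PR's actual implementation may differ):

```cpp
#include <thrust/iterator/counting_iterator.h>
#include <thrust/random.h>
#include <thrust/transform.h>

template <typename T>
struct UniformGenerator {
  T min_, max_;
  unsigned int seed_;

  __host__ __device__ UniformGenerator(T min, T max, unsigned int seed)
      : min_(min), max_(max), seed_(seed) {}

  __host__ __device__ T operator()(const unsigned int n) const {
    thrust::minstd_rand rng(seed_);
    thrust::uniform_real_distribution<T> dist(min_, max_);
    rng.discard(n);  // advance the stream so each element index gets its own draw
    return dist(rng);
  }
};

// Usage sketch: fill `size` device elements pointed to by `data` with values in [min, max):
//   thrust::transform(thrust::counting_iterator<unsigned int>(0),
//                     thrust::counting_iterator<unsigned int>(size),
//                     thrust::device_ptr<T>(data),
//                     UniformGenerator<T>(min, max, seed));
```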
AddAttr<std::vector<int>>("dims", "the dimension of random tensor");
AddAttr<float>("min", "Minimum value of uniform random").SetDefault(-1.0f);
AddAttr<float>("max", "Maximum value of uniform random").SetDefault(1.0f);
AddAttr<int>("seed",
Can we bind the seed to the device context or environment? In my view, the user sets a random seed once before running an experiment; they will not set a random seed in every random operator call.
I think that logic should be handled at a higher level of Paddle. In the normal case the user does not need to set a seed; they just leave it at zero and Paddle generates a seed from std::random_device.
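A minimal sketch of that convention (the helper name ResolveSeed is hypothetical):

```cpp
#include <random>

// Hypothetical helper: a seed attribute of 0 means "draw a fresh seed from
// std::random_device each run"; any non-zero value is used as-is so runs are reproducible.
unsigned int ResolveSeed(int seed_attr) {
  if (seed_attr == 0) {
    std::random_device rd;  // non-deterministic entropy source
    return rd();
  }
  return static_cast<unsigned int>(seed_attr);
}
```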
AddAttr<float>("min", "Minimum value of uniform random").SetDefault(-1.0f); | ||
AddAttr<float>("max", "Maximun value of uniform random").SetDefault(1.0f); | ||
AddAttr<int>("seed", | ||
"Random seed of uniform random. " |
Furthermore, each GPU stream needs a different seed.
Each GPU stream's seed is decided outside the operator.
Force-pushed from 56fb666 to d7f0eb6
namespace paddle {
namespace operators {
class RandomOp : public OperatorWithKernel {
RandomOp is too general; maybe renaming it to UniformRandomOp is better? I have implemented a Gaussian random op.
OK.
Done.
void InferShape(const InferShapeContext &ctx) const override {
  PADDLE_ENFORCE(GetAttr<float>("min") < GetAttr<float>("max"),
                 "uniform_random's min must be less than max");
  auto tensor = ctx.Output<Tensor>(0);
Use `auto*` instead of `auto` for clarity.
Done.
LGTM
LGTM++
void InferShape(const framework::InferShapeContext& ctx) const override {
  PADDLE_ENFORCE(GetAttr<float>("min") < GetAttr<float>("max"),
                 "uniform_random's min must be less than max");
  auto tensor = ctx.Output<framework::Tensor>(0);
Use `auto*` instead of `auto` for clarity.
Done.
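For reference, the InferShape body quoted above presumably continues by resizing the output from the "dims" attribute; a sketch assuming the framework's make_ddim and Tensor::Resize APIs (not the PR's verbatim code):

```cpp
void InferShape(const framework::InferShapeContext& ctx) const override {
  PADDLE_ENFORCE(GetAttr<float>("min") < GetAttr<float>("max"),
                 "uniform_random's min must be less than max");
  auto* tensor = ctx.Output<framework::Tensor>(0);
  auto dims = GetAttr<std::vector<int>>("dims");
  tensor->Resize(framework::make_ddim(dims));  // output shape comes from the "dims" attribute
}
```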