[API2.0] Unify pooling function and add adaptive max pooling function #26483
Conversation
Thanks for your contribution!
LGTM
- double check `data_format`
- double check `use_cudnn`, and document the reasoning behind the `use_cudnn` default value choice
- also, for future reference, it would be nice to have some preliminary benchmark on 1d pooling in the PR description
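A minimal timing harness of the kind the reviewer asks for might look like the sketch below. The two pooling callables are pure-numpy placeholders standing in for the cuDNN and custom 1d pooling paths; all names here are illustrative, not Paddle APIs.

```python
import timeit

import numpy as np


def bench(fn, x, repeat=5, number=20):
    # Best wall-clock time (seconds) over `repeat` runs of `number` calls.
    return min(timeit.repeat(lambda: fn(x), repeat=repeat, number=number))


# Placeholder kernels: both compute a stride-2 1d average pool over rows.
def pool_a(x):
    return x.reshape(x.shape[0], -1, 2).mean(-1)


def pool_b(x):
    return (x[:, ::2] + x[:, 1::2]) / 2


x = np.random.rand(64, 1024)
print(f"pool_a: {bench(pool_a, x):.6f}s  pool_b: {bench(pool_b, x):.6f}s")
```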
'SAME' which is the padding algorithm. If pool padding size is a tuple or list,
it could be the following forms: `[pad_left, pad_right]`. If padding is non-zero,
then the input is implicitly zero-padded on both sides for padding number of points.
it must contain an integer.
integers?
1d is an integer
padding = update_padding1d(padding, "avg")
# Using 2d pooling to implement 1d requires expanding the padding in advance.
padding = _expand_low_nd_padding(padding)

if in_dygraph_mode():
    output = core.ops.pool2d(
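A hedged sketch of what `_expand_low_nd_padding` presumably does here (the exact layout is an assumption, not taken from the diff): since the 1d op is executed through `pool2d`, the 1d padding `[pad_left, pad_right]` must first gain a zero pad pair for the dummy height dimension.

```python
def expand_low_nd_padding(padding):
    # Prepend a zero pad pair for the inserted dummy dimension, so
    # [pad_left, pad_right] becomes [0, 0, pad_left, pad_right] and can
    # be fed to a 2d pooling op. (Illustrative; layout is an assumption.)
    return [0, 0] + list(padding)


print(expand_low_nd_padding([1, 2]))  # [0, 0, 1, 2]
```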
`use_cudnn` is different? Also, document the reasoning behind the `use_cudnn` default value.
It looks like a bug introduced by a merge (https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/nn/functional/pooling.py#L247): someone mixed up the `use_cudnn` and `exclusive` params. I have already corrected it in the next commit.
    ceil_mode=self.ceil_mode,
    count_include_pad=self.count_include_pad,
    divisor_override=self.divisor,
    data_format=self.data_format,
`data_format` works?
yes
@@ -146,99 +168,69 @@ def avg_pool1d(x,
               count_include_pad=True,
               ceil_mode=False,
               name=None):
Missing `data_format`?
It is supposed to be added in PR #26331; I suggest refining it in the next PR.
Check whether the custom pooling implementation is faster than cuDNN for adaptive/global pooling:
- if both are faster, set `use_cudnn` to false for adaptive and global pooling
- if only global is faster, set `global_pooling` to true and `use_cudnn` to false for adaptive pooling with output size 1
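The suggestion above relies on adaptive pooling with output size 1 reducing to global pooling. That equivalence can be checked with a small numpy sketch (an illustrative naive implementation, not Paddle's kernel):

```python
import numpy as np


def adaptive_max_pool1d_naive(x, output_size):
    # x: [N, C, L]; window i spans [i*L//out, ceil((i+1)*L/out)).
    n, c, l = x.shape
    out = np.empty((n, c, output_size), dtype=x.dtype)
    for i in range(output_size):
        start = i * l // output_size
        end = int(np.ceil((i + 1) * l / output_size))
        out[:, :, i] = x[:, :, start:end].max(axis=-1)
    return out


x = np.random.rand(2, 3, 7)
# output_size=1 -> a single window over the whole length == global max pooling
assert np.allclose(adaptive_max_pool1d_naive(x, 1)[:, :, 0], x.max(axis=-1))
```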
I will compare the performance of the two implementations. Please feel free to review it again while CI is passing.
Needs follow-up for `data_format` and cudnn; otherwise LGTM.
LG API
return int(np.ceil((index + 1) * input_size / output_size))


def avg_pool1D_forward_naive(x,
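The `ceil` expression in the fragment above computes the (exclusive) end index of the i-th adaptive window; paired with a floor-based start index it tiles the input so every element is covered. A quick check (the helper names follow the test file's apparent convention and are otherwise assumptions):

```python
import numpy as np


def adaptive_start_index(index, input_size, output_size):
    return index * input_size // output_size


def adaptive_end_index(index, input_size, output_size):
    return int(np.ceil((index + 1) * input_size / output_size))


# Windows for input_size=5, output_size=3 cover the whole input and
# overlap where 5 does not divide evenly by 3.
wins = [(adaptive_start_index(i, 5, 3), adaptive_end_index(i, 5, 3))
        for i in range(3)]
print(wins)  # [(0, 2), (1, 4), (3, 5)]
```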
1D -> 1d
LGTM for core.ops
PR types
New features

PR changes
APIs

Describe
This PR unifies the pooling functions and adds the adaptive max pooling function:
1. Extracts the common function `_update_padding_nd`, which is consistent with the `conv` function.
2. Adds `_expand_low_nd_padding` to implement the low-dimension functions by calling the high-dimension ones (unsqueeze nd to n+1 d).
3. Updates the `Shape` field in layer/pooling.py; uses `x` instead of `input` and `out` instead of `output`, consistent with the `conv` function.

Example:
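A minimal numpy sketch of the unsqueeze-nd-to-n+1-d trick described above (illustrative only; the real implementation lives in python/paddle/nn/functional/pooling.py and calls the `pool2d` op):

```python
import numpy as np


def avg_pool1d_via_2d(x, kernel):
    # x: [N, C, L]. Unsqueeze to [N, C, 1, L], apply a (1, kernel) 2d
    # average pool, then squeeze the dummy height dimension back out.
    x4 = x[:, :, None, :]
    n, c, h, l = x4.shape
    out_l = l // kernel
    out = x4[:, :, :, :out_l * kernel].reshape(n, c, h, out_l, kernel).mean(-1)
    return out[:, :, 0, :]


x = np.arange(8, dtype=np.float64).reshape(1, 1, 8)
print(avg_pool1d_via_2d(x, 2))  # windows (0,1), (2,3), (4,5), (6,7)
```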