[AMP OP&Test] Mean fp/bf 16 support #51114
Conversation
Your PR has been submitted successfully. Thank you for your contribution to the open-source project!
            return
        place = paddle.CUDAPlace(0)
        self.check_grad_with_place(
            place, ['X'], ['Out'], numeric_grad_delta=0.05
numeric_grad_delta does not need to be set here.
It does need to be set; otherwise the numeric gradient is wrong.
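For context, a minimal sketch of the test pattern under discussion, assuming Paddle's OpTest harness; the class name, input shapes, and import path are illustrative, not the PR's exact code:

```python
import unittest

import numpy as np
import paddle
from op_test import OpTest  # import path varies across Paddle versions
from paddle.fluid import core


class TestMeanFP16Op(OpTest):
    # Hypothetical fp16 case showing where the disputed setting lives.
    def setUp(self):
        self.op_type = 'mean'
        self.dtype = np.float16
        self.inputs = {'X': np.random.random((10, 10)).astype(self.dtype)}
        self.outputs = {'Out': np.mean(self.inputs['X'])}

    def test_check_grad(self):
        if not core.is_compiled_with_cuda():
            return
        place = paddle.CUDAPlace(0)
        # A wider finite-difference step keeps the numeric gradient
        # meaningful at fp16 resolution, which is the reply's rationale.
        self.check_grad_with_place(
            place, ['X'], ['Out'], numeric_grad_delta=0.05
        )


if __name__ == '__main__':
    unittest.main()
```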
@@ -212,6 +209,48 @@ def test_check_grad(self):
        np.testing.assert_array_equal(dx, dx_expected)


class TestReduceMeanBF16Op(OpTest):
There are too few bf16 test cases; their number should be aligned with the fp16 test cases.
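To illustrate the alignment being requested, a hypothetical, self-contained sketch (stand-in classes, not the PR's): every fp16 case gets a bf16 twin that reuses the same axis/keepdim configuration and changes only the dtype.

```python
import numpy as np


class MeanFP16Case:
    axis = None      # reduce over all axes
    keepdim = False
    dtype = np.float16

    def reference(self):
        x = np.random.random((3, 4, 5)).astype(np.float32)
        out = x.mean(axis=self.axis, keepdims=self.keepdim)
        return out.astype(self.dtype)


class MeanFP16CaseAxis1(MeanFP16Case):
    axis = 1


# bf16 twins: identical coverage, only the dtype changes. numpy has no
# native bfloat16, so float32 stands in for it in this sketch.
class MeanBF16Case(MeanFP16Case):
    dtype = np.float32


class MeanBF16CaseAxis1(MeanFP16CaseAxis1):
    dtype = np.float32
```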
@@ -325,6 +398,11 @@ def set_attrs(self):
        self.dtype = 'float16'


class TestReduceMeanOpReduceAllTrue2BF16(TestReduceMeanBF16Op):
True2 -> True
        pass

    def test_check_output(self):
        if not core.is_compiled_with_cuda():
Only one of this check and the skipIf at the top of the class needs to be kept.
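A sketch of the suggested cleanup, keeping only the class-level guard; the class body is illustrative, and the skip condition mirrors the decorator appearing later in the diff:

```python
import unittest

from paddle.fluid import core


@unittest.skipIf(
    not core.is_compiled_with_cuda()
    or not core.is_bfloat16_supported(core.CUDAPlace(0)),
    "core is not compiled with CUDA or place does not support bfloat16",
)
class TestReduceMeanBF16OpSketch(unittest.TestCase):
    def test_check_output(self):
        # No in-method `if not core.is_compiled_with_cuda(): return`
        # guard is needed; the decorator already skips the whole class.
        place = core.CUDAPlace(0)
        self.assertTrue(core.is_bfloat16_supported(place))


if __name__ == '__main__':
    unittest.main()
```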
@@ -92,7 +92,9 @@ def test_errors(self):


 @unittest.skipIf(
-    not core.is_compiled_with_cuda(), "core is not compiled with CUDA"
+    not core.is_compiled_with_cuda()
+    or not core.is_bfloat16_supported(core.CUDAPlace(0)),
This doesn't need to change; this isn't a reduce_mean unit test.
@@ -149,6 +151,9 @@ def ref_reduce_mean_grad(x, axis, dtype, reduce_all):
    return (1.0 / np.prod(shape) * np.ones(shape)).astype(dtype)


@unittest.skipIf(
No need to add this here either; the implementations below already check for fp16.
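For reference, a hedged reconstruction of the gradient helper around the hunk's return line: mean distributes weight 1/N uniformly over the N elements it averages, so the expected gradient is a constant tensor. Only the return line's shape arithmetic comes from the hunk; the axis handling here is an assumption.

```python
import numpy as np


def ref_reduce_mean_grad(x, axis, dtype, reduce_all):
    # Assumed reconstruction, not necessarily the PR's exact code.
    if reduce_all:
        axis = list(range(x.ndim))
    # N elements are averaged together; each receives gradient 1/N,
    # broadcast back over x's full shape.
    n = np.prod([x.shape[i] for i in axis])
    return (1.0 / n * np.ones(x.shape)).astype(dtype)


# Sanity check of the reduce_all path: every entry equals 1 / x.size.
x = np.random.random((2, 3)).astype('float32')
g = ref_reduce_mean_grad(x, axis=None, dtype='float32', reduce_all=True)
assert np.allclose(g, 1.0 / x.size)
```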
LGTM
PR types
New features
PR changes
Others
Describe
Add fp16/bf16 support for the mean op.
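A minimal usage sketch of what the PR enables, assuming a CUDA build with bf16 support; the exact dtype-casting spelling may differ across Paddle versions:

```python
import paddle

x = paddle.rand([4, 8])

# fp16 in, fp16 mean out
print(paddle.mean(x.astype('float16')))

# bf16 in, bf16 mean out
print(paddle.mean(x.astype('bfloat16')))
```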