
[Hackathon 5th No.3] Add masked_fill API to Paddle #57355

Merged Nov 2, 2023
Changes from 31 commits
Commits (34)
4925ae4
add masked_fill for paddle
AndSonder Sep 15, 2023
fbec78a
update doc
AndSonder Sep 15, 2023
3cfde41
update some test case
AndSonder Sep 15, 2023
37dd891
remove full_like
AndSonder Sep 15, 2023
793fcdc
update test codes
AndSonder Sep 15, 2023
0c1e8ce
update test cases
AndSonder Sep 16, 2023
faa459a
recover codes
AndSonder Sep 16, 2023
0a4d8d4
update test codes
AndSonder Sep 19, 2023
746a818
fix gradients error
AndSonder Sep 20, 2023
d48d73d
update test codes
AndSonder Sep 20, 2023
7a0ef26
fix
AndSonder Sep 21, 2023
de8cd75
add bf16 test cases
AndSonder Sep 21, 2023
fb1a2ea
update code-block
AndSonder Sep 25, 2023
810575f
Merge branch 'masked_fill' of https://github.com/AndSonder/Paddle int…
AndSonder Sep 25, 2023
1b213bf
Merge branch 'develop' into masked_fill
AndSonder Sep 25, 2023
df6013e
update code-block
AndSonder Sep 25, 2023
82bb307
Merge branch 'masked_fill' of https://github.com/AndSonder/Paddle int…
AndSonder Sep 25, 2023
1349206
update test codes
AndSonder Sep 26, 2023
9ef08e4
Merge branch 'develop' into masked_fill
AndSonder Oct 3, 2023
5f463ad
Update __init__.py
AndSonder Oct 3, 2023
fe74eff
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
AndSonder Oct 3, 2023
4d394a0
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
AndSonder Oct 3, 2023
211d655
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
AndSonder Oct 8, 2023
0703cd0
fix
AndSonder Oct 8, 2023
1c0abb8
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
AndSonder Oct 11, 2023
f9bdfa0
fix code style and recover third_party
AndSonder Oct 11, 2023
f5db8f6
add v grad check
AndSonder Oct 11, 2023
0f48c13
add scalar value case
AndSonder Oct 11, 2023
c8040ce
fix test case
AndSonder Oct 12, 2023
ccb535c
use logical_not
AndSonder Oct 18, 2023
2317773
Merge branch 'develop' into masked_fill
AndSonder Oct 18, 2023
f678e39
fix doc style
AndSonder Oct 20, 2023
d6b7bd7
Update manipulation.py
AndSonder Oct 23, 2023
bccd1f6
Merge branch 'develop' into masked_fill
AndSonder Oct 24, 2023
4 changes: 4 additions & 0 deletions python/paddle/__init__.py
@@ -252,6 +252,8 @@
view,
view_as,
unfold,
masked_fill,
masked_fill_,
)

from .tensor.math import ( # noqa: F401
@@ -906,6 +908,8 @@
'i1e',
'polygamma',
'polygamma_',
'masked_fill',
'masked_fill_',
'hypot',
'hypot_',
]
4 changes: 4 additions & 0 deletions python/paddle/tensor/__init__.py
@@ -165,6 +165,8 @@
from .manipulation import view # noqa: F401
from .manipulation import view_as # noqa: F401
from .manipulation import unfold # noqa: F401
from .manipulation import masked_fill # noqa: F401
from .manipulation import masked_fill_ # noqa: F401
from .math import abs # noqa: F401
from .math import abs_ # noqa: F401
from .math import acos # noqa: F401
@@ -694,6 +696,8 @@
'i1e',
'polygamma',
'polygamma_',
'masked_fill',
'masked_fill_',
'atan2',
'diagflat',
'multinomial',
68 changes: 68 additions & 0 deletions python/paddle/tensor/manipulation.py
@@ -4561,6 +4561,74 @@ def moveaxis(x, source, destination, name=None):
return out


def masked_fill(x, mask, value, name=None):
"""
Fills elements of the input tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the input tensor.

Args:
x (Tensor): The destination Tensor. Supported data types are float,
double, int, int64_t, float16 and bfloat16.
mask (Tensor): The boolean tensor that indicates the positions to be filled.
The data type of mask must be bool.
value (Scalar or 0-D Tensor): The value used to fill the target tensor.
Supported data types are float, double, int, int64_t, float16 and bfloat16.
name (str, optional): The default value is None. Normally there is no
need for the user to set this property. For more information, please
refer to :ref:`api_guide_Name`.
Contributor:
State the supported dtypes in the parameter descriptions, and keep them consistent with the design document; Scaler -> Scalar

Contributor Author:
done

Contributor:
Suggested change:
x (Tensor) : The Destination Tensor. Supported data types are float,
double, int, int64_t, float16 and bfloat16.
mask (Tensor): The boolean tensor indicate the position to be filled.
The data type of mask must be bool.
value (Scalar or 0-D Tensor): The value used to fill the target tensor.
Supported data types are float, double, int, int64_t, float16 and bfloat16.
name(str, optional): The default value is None. Normally there is no
need for user to set this property. For more information, please
refer to :ref:`api_guide_Name`.

Returns:
Tensor, same dimension and dtype as x.
Examples:
.. code-block:: python

>>> # doctest: +REQUIRES(env:GPU)
Contributor:
Add a blank line under the code-block directive (between ``.. code-block:: python`` and the first doctest line).

>>> import paddle
>>> x = paddle.ones((3, 3), dtype="float32")
>>> mask = paddle.to_tensor([[True, True, False]])
>>> print(mask)
Tensor(shape=[1, 3], dtype=bool, place=Place(gpu:0), stop_gradient=True,
[[True , True , False]])
>>> out = paddle.masked_fill(x, mask, 2)
>>> print(out)
Tensor(shape=[3, 3], dtype=float32, place=Place(gpu:0), stop_gradient=True,
[[2., 2., 1.],
[2., 2., 1.],
[2., 2., 1.]])
"""
if np.isscalar(value):
value = paddle.full([], value, x.dtype)

mask = paddle.logical_not(mask)
out = paddle.where(mask, x, value)
Contributor:
Could value and x be swapped here, to save a logical_not op?

Contributor Author:
They cannot be swapped here. When run in-place, paddle.where(cond, x, y) handles broadcasting as follows:

        zeros_like_x = paddle.zeros_like(x)
        zeros_like_y = paddle.zeros_like(y)
        zeros_like_condition = paddle.zeros_like(condition)
        zeros_like_condition = paddle.cast(zeros_like_condition, x.dtype)
        cast_cond = paddle.cast(condition, x.dtype)

        broadcast_zeros = paddle.add(zeros_like_x, zeros_like_y)
        broadcast_zeros = paddle.add(broadcast_zeros, zeros_like_condition)
        broadcast_x = x.add_(broadcast_zeros)
        broadcast_y = paddle.add(y, broadcast_zeros)
        broadcast_condition = paddle.add(cast_cond, broadcast_zeros)
        broadcast_condition = paddle.cast(broadcast_condition, 'bool')

Here broadcast_x = x.add_(broadcast_zeros) is an in-place operation, so swapping the arguments would make masked_fill and masked_fill_ produce inconsistent results, and broadcasting would also break. I initially wrote paddle.where(mask, value, x), but many unit tests failed, and the gradients of the inplace and non-inplace versions did not match.
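As a side note, the non-inplace semantics under discussion can be cross-checked with a small NumPy sketch (NumPy stands in for Paddle here, and the helper name is made up for illustration):

```python
import numpy as np

def masked_fill_ref(x, mask, value):
    # Reference semantics: out = value where mask is True, else x.
    # Mirrors the PR's where(~mask, x, value) formulation; mask broadcasts
    # against x, matching the broadcastable-mask requirement in the docs.
    return np.where(~mask, x, value)

x = np.ones((3, 3), dtype=np.float32)
mask = np.array([[True, True, False]])  # shape (1, 3), broadcast over rows
out = masked_fill_ref(x, mask, np.float32(2))
print(out)  # first two columns filled with 2, last column keeps 1
```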

return out


@inplace_apis_in_dygraph_only
def masked_fill_(x, mask, value, name=None):
"""
Inplace version of ``masked_fill`` API, the output Tensor will be inplaced with input ``x``.
Please refer to :ref:`api_paddle_masked_fill`.

Examples:
.. code-block:: python

>>> # doctest: +REQUIRES(env:GPU)
Contributor:
Suggested change: add a blank line after the ``.. code-block:: python`` directive.

Contributor Author:
all done

>>> import paddle
>>> x = paddle.ones((3, 3), dtype="float32")
>>> mask = paddle.to_tensor([[True, False, False]])
>>> out = paddle.masked_fill_(x, mask, 2)
>>> print(out)
Tensor(shape=[3, 3], dtype=float32, place=Place(gpu:0), stop_gradient=True,
[[2., 1., 1.],
[2., 1., 1.],
[2., 1., 1.]])
"""
if np.isscalar(value):
value = paddle.full([], value, x.dtype)

mask = paddle.logical_not(mask)
out = paddle.where_(mask, x, value)
Contributor:
Same as above: could x and value be swapped to avoid the extra not op?
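For intuition, the in-place variant can be mimicked in NumPy, where the destination of the write is necessarily x (a sketch of the semantics, not the PR's implementation):

```python
import numpy as np

x = np.ones((3, 3), dtype=np.float32)
mask = np.array([[True, False, False]])  # broadcastable mask, shape (1, 3)

# In-place fill: expand the mask to x's shape and assign into x directly.
# The buffer being mutated must be x, which is why the where_(mask, x, value)
# argument order cannot simply be flipped to drop the logical_not.
x[np.broadcast_to(mask, x.shape)] = 2.0
print(x)  # first column filled with 2 in place
```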

return out


def non_negative_axis(arr, axis):
ndim = len(arr.shape)
if axis >= 0:
66 changes: 66 additions & 0 deletions test/legacy_test/test_inplace.py
@@ -250,6 +250,72 @@ def test_backward_success_2(self):
np.testing.assert_array_equal(grad_var_a_inplace, grad_var_a)


class TestDygraphInplaceMaskedFill(TestDygraphInplace):
def non_inplace_api_processing(self, var):
return paddle.masked_fill(var, self.mask, self.value)

def inplace_api_processing(self, var):
return paddle.masked_fill_(var, self.mask, self.value)

def init_data(self):
self.dtype = "float32"
self.input_var_numpy = np.random.uniform(-5, 5, [30, 3])
self.value = np.random.uniform(-10, 10)
self.value = paddle.to_tensor(self.value, dtype=self.dtype)
self.mask = np.random.randint(0, 2, [30, 3]).astype('bool')
self.mask = paddle.to_tensor(self.mask, dtype='bool')

def test_forward_version(self):
with paddle.base.dygraph.guard():
var = paddle.to_tensor(self.input_var_numpy).astype(self.dtype)
self.assertEqual(var.inplace_version, 0)

inplace_var = self.inplace_api_processing(var)
self.assertEqual(var.inplace_version, 2)

inplace_var[0] = 2
self.assertEqual(var.inplace_version, 3)

inplace_var = self.inplace_api_processing(inplace_var)
self.assertEqual(var.inplace_version, 5)

def test_backward_error(self):
# It raises an error because the inplace operator will result
# in incorrect gradient computation.
with paddle.base.dygraph.guard():
var_a = paddle.to_tensor(self.input_var_numpy).astype(self.dtype)
var_a.stop_gradient = False

var_b = var_a**2

# Here, the gradient computation will use the value of var_b
var_c = var_b**2
self.inplace_api_processing(var_b)

loss = paddle.nn.functional.relu(var_c)
with self.assertRaisesRegex(
RuntimeError,
f"received tensor_version:{2} != wrapper_version_snapshot:{0}",
):
loss.backward()


class TestDygraphInplaceMaskedFill2(TestDygraphInplaceMaskedFill):
def non_inplace_api_processing(self, var):
return paddle.masked_fill(var, self.mask, self.value)

def inplace_api_processing(self, var):
return paddle.masked_fill_(var, self.mask, self.value)

def init_data(self):
self.dtype = "float32"
self.input_var_numpy = np.random.uniform(-5, 5, [30, 3])
self.value = np.random.uniform(-10, 10)
self.value = paddle.to_tensor(self.value, dtype=self.dtype)
self.mask = np.random.randint(0, 2, [30, 1]).astype('bool')
self.mask = paddle.to_tensor(self.mask, dtype='bool')
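The only difference from the first test class is the mask shape ([30, 1] instead of [30, 3]); the broadcast equivalence this exercises can be sketched in NumPy as:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, (30, 3)).astype(np.float32)
value = np.float32(2.5)

mask_b = rng.integers(0, 2, (30, 1)).astype(bool)  # broadcast mask, as in this test
mask_full = np.broadcast_to(mask_b, x.shape)       # explicitly expanded mask

# Filling with the broadcast mask must match filling with the expanded mask.
out_b = np.where(~mask_b, x, value)
out_full = np.where(~mask_full, x, value)
assert np.array_equal(out_b, out_full)
```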


class TestDygraphInplaceWithContinuous(TestDygraphInplace):
def init_data(self):
self.input_var_numpy = np.random.uniform(-5, 5, [10, 20, 1])