
add cross_entropy to nn/layer and nn/functional, test=develop #26478

Merged: 4 commits into PaddlePaddle:develop on Aug 21, 2020

Conversation

chajchaj (Contributor)

PR types

New features

PR changes

APIs

Describe

add cross_entropy to nn/layer and nn/functional
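For reference, a minimal usage sketch of the two entry points this PR adds. It follows paddle 2.x conventions (paddle.to_tensor, dygraph by default), and the argument defaults are taken from the docstring under review below rather than verified against the merged code.

```python
import paddle
import paddle.nn.functional as F

logits = paddle.to_tensor([[2.0, 0.5, 0.3], [0.1, 1.5, 0.2]])  # (N=2, C=3), float32
labels = paddle.to_tensor([0, 1])                               # (N,), int64

# functional form added under nn/functional
loss = F.cross_entropy(logits, labels, weight=None,
                       ignore_index=-100, reduction='mean')

# layer form added under nn/layer, same computation behind a Layer
loss_layer = paddle.nn.CrossEntropyLoss(reduction='mean')
loss2 = loss_layer(logits, labels)
print(loss.numpy(), loss2.numpy())
```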

@paddle-bot-old

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

\\text{loss}_j = -\\text{input}[\\text{class}_j] + \\log\\left(\\sum_{i=0}^{K}\\exp(\\text{input}_i)\\right), j = 1,..., K

Parameters:
input (Variable): Input tensor, the data type is float32, float64. Shape is
Contributor: Tensor

input (Variable): Input tensor, the data type is float32, float64. Shape is
(N, C), where C is number of classes, and if shape is more than 2D, this
is (N, C, D1, D2,..., Dk), k >= 1.
label (Variable): Label tensor, the data type is int64. Shape is (N), where each
Contributor: Tensor

label (Variable): Label tensor, the data type is int64. Shape is (N), where each
value is 0 <= label[i] <= C-1, and if shape is more than 2D, this is
(N, D1, D2,..., Dk), k >= 1.
weight (Variable, optional): Weight tensor, a manual rescaling weight given
Contributor: Tensor

Returns:
The tensor variable storing the cross_entropy_loss of input and label.

Return type: Variable.
Contributor: Tensor
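To pin down the semantics the docstring above describes, here is a minimal NumPy reference for the plain (N, C), unweighted, reduction='mean' case. The helper name and test values are illustrative only, not part of the PR.

```python
import numpy as np

def cross_entropy_reference(logits, labels):
    # per-sample loss_j = -input[class_j] + log(sum_i exp(input_i)),
    # computed with the usual max-shift for numerical stability
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_sum_exp = np.log(np.exp(shifted).sum(axis=1))
    per_sample = log_sum_exp - shifted[np.arange(len(labels)), labels]
    return per_sample.mean()  # reduction='mean'

logits = np.array([[2.0, 0.5, 0.3], [0.1, 1.5, 0.2]])
labels = np.array([0, 1])
print(cross_entropy_reference(logits, labels))
```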

@@ -22,7 +22,7 @@
from ...fluid.layers.nn import _elementwise_op_in_dygraph
from ...fluid.layers import bpr_loss #DEFINE_ALIAS
from ...fluid.layers import center_loss #DEFINE_ALIAS
-from ...fluid.layers import cross_entropy #DEFINE_ALIAS
+#from ...fluid.layers import cross_entropy #DEFINE_ALIAS
Contributor: delete

@@ -117,7 +117,7 @@ class CrossEntropyLoss(fluid.dygraph.Layer):
print(output.numpy())
"""

-def __init__(self, weight=None, reduction='mean', ignore_index=-100):
+def __init__(self, weight=None, ignore_index=-100, reduction='mean'):
Contributor: use functional version in forward()?
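Sketching what the question points at: the layer would delegate its forward() to the functional API so the loss math lives in one place. The class layout follows the fluid.dygraph style visible in the hunk above; the body is an assumption, not the merged code.

```python
import paddle.fluid as fluid
import paddle.nn.functional as F

class CrossEntropyLoss(fluid.dygraph.Layer):
    def __init__(self, weight=None, ignore_index=-100, reduction='mean'):
        super(CrossEntropyLoss, self).__init__()
        self.weight = weight
        self.ignore_index = ignore_index
        self.reduction = reduction

    def forward(self, input, label):
        # single source of truth: reuse the functional implementation
        return F.cross_entropy(input, label, weight=self.weight,
                               ignore_index=self.ignore_index,
                               reduction=self.reduction)
```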

@willthefrog (Contributor) left a comment:

LGTM

@willthefrog willthefrog merged commit 5407e32 into PaddlePaddle:develop Aug 21, 2020
@jzhang533 (Contributor)

Did we forget to add the name parameter to cross_entropy?
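For context, Paddle APIs conventionally end their signatures with a trailing name=None, used to name the operators they create. The follow-up this comment asks for would look roughly like the sketch below, which is an assumption rather than the actual fix.

```python
# hypothetical signature with the trailing name parameter added
def cross_entropy(input, label, weight=None, ignore_index=-100,
                  reduction='mean', name=None):
    ...
```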
