Addressing TAG review issue#133, 134, and 137 #144

Merged 2 commits on Feb 21, 2021
48 changes: 15 additions & 33 deletions index.bs
@@ -260,7 +260,7 @@ interface ModelBuilder {
</script>

### batchNormalization ### {#api-modelbuilder-batchnorm}
Normalize the tensor values of input features across the batch dimension using [[Batch-Normalization]]. For each input feature, the mean and variance values of that feature are computed across the batch dimension during training.
Normalize the tensor values of input features across the batch dimension using [[Batch-Normalization]]. For each input feature, the mean and variance values of that feature, supplied to this calculation as parameters, were previously computed across the batch dimension of the input during the model training phase.
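The per-feature calculation can be sketched in plain JavaScript as follows. This is illustrative only, not part of the WebNN API; `batchNormalize` is a hypothetical helper, and `mean` and `variance` stand for the statistics precomputed during training:

```js
// Sketch of the per-feature batch normalization formula:
//   y = (x - mean) / sqrt(variance + epsilon) * scale + bias
// `mean` and `variance` are the training-time statistics passed in as
// parameters; they are not computed from `x` here.
function batchNormalize(x, mean, variance, scale = 1, bias = 0, epsilon = 1e-5) {
  return x.map(v => ((v - mean) / Math.sqrt(variance + epsilon)) * scale + bias);
}
```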
<script type=idl>
dictionary BatchNormalizationOptions {
Operand scale;
@@ -508,6 +508,7 @@ partial interface ModelBuilder {
Operand floor(Operand x);
Operand log(Operand x);
Operand neg(Operand x);
Operand relu(Operand x);
Operand sigmoid(Operand x);
Operand sin(Operand x);
Operand tan(Operand x);
@@ -530,6 +531,16 @@ partial interface ModelBuilder {
- *floor*: Compute the floor of the input tensor, element-wise.
- *log*: Compute the natural logarithm of the input tensor, element-wise.
- *neg*: Compute the numerical negative value of the input tensor, element-wise.
- *relu*: Compute the [rectified linear function](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) of the input tensor, element-wise.
<div class="note">
The behavior of this operation can be generically emulated with other
operations as follows. However, user agents typically have a more
efficient implementation, so calling it directly is encouraged for
performance.
<pre highlight="js">
return builder.max(builder.constant(0), x);
</pre>
</div>
- *sigmoid*: Compute the sigmoid function of the input tensor, element-wise.
- *sin*: Compute the sine of the input tensor, element-wise.
- *tan*: Compute the tangent of the input tensor, element-wise.
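Each of the unary operations above maps a scalar function over every element of the input tensor while preserving its shape. A minimal plain-JavaScript sketch of that pattern (illustrative only, not the WebNN API; `elementWise` is a hypothetical helper):

```js
// Generic element-wise application: every unary op in the list above maps a
// scalar function over the input values, producing an output of the same shape.
function elementWise(fn, values) {
  return Float32Array.from(values, fn);
}

// For example, relu applied to a flat array of tensor values:
const relu = values => elementWise(v => Math.max(0, v), values);
```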
@@ -827,7 +838,7 @@ partial interface ModelBuilder {
</div>

### instanceNormalization ### {#api-modelbuilder-instancenorm}
Normalize the input features using [[Instance-Normalization]]. Unlike [[#api-modelbuilder-batchnorm]] where the mean and variance values are computed across the batch dimension during training, the mean and variance values of instance normalization are computed on the fly per input feature.
Normalize the input features using [[Instance-Normalization]]. Unlike [[#api-modelbuilder-batchnorm]], where the mean and variance values used in the calculation were previously computed across the batch dimension during the model training phase, the mean and variance values of instance normalization are computed internally on the fly per input feature.
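The contrast with batch normalization can be sketched in plain JavaScript as follows. This is illustrative only, not part of the WebNN API; `instanceNormalize` is a hypothetical helper operating on the flat values of one input feature:

```js
// Sketch: instance normalization computes mean and variance on the fly from
// the values of each input feature, rather than taking training-time
// statistics as parameters.
function instanceNormalize(feature, scale = 1, bias = 0, epsilon = 1e-5) {
  const mean = feature.reduce((a, b) => a + b, 0) / feature.length;
  const variance =
    feature.reduce((a, b) => a + (b - mean) ** 2, 0) / feature.length;
  return feature.map(v => ((v - mean) / Math.sqrt(variance + epsilon)) * scale + bias);
}
```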
<script type=idl>
dictionary InstanceNormalizationOptions {
Operand scale;
@@ -1128,35 +1139,6 @@ partial interface ModelBuilder {
- *SumSquare*: Compute the sum of the square of all the input values along the axes.
</div>

### relu ### {#api-modelbuilder-relu}
Calculate the [rectified linear](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) function on the input tensor element-wise. The calculation follows the expression `max(0, x)`.
<script type=idl>
partial interface ModelBuilder {
Operand relu(Operand x);
};
</script>
<div algorithm=relu>
**Arguments:**
- *x*: an {{Operand}}. The input tensor.

**Returns:** an {{Operand}}. The output tensor of the same shape as *x*.

Calculate the <a
href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)">rectified
linear function</a> on the input tensor element-wise. The calculation
follows the expression `max(0, x)`.

<div class="note">
The behavior of this operation can be generically emulated with other
operations as follows. However, user agents typically have a more
efficient implementation, so calling it directly is encouraged for
performance.
<pre highlight="js">
return builder.max(builder.constant(0), x);
</pre>
</div>
</div>

### resample ### {#api-modelbuilder-resample}
Resample the tensor values from the source to the destination dimensions according to the scaling factors.
<script type=idl>
@@ -1188,7 +1170,7 @@ partial interface ModelBuilder {
</div>

### reshape ### {#api-modelbuilder-reshape}
Reshapes a tensor to a given new shape.
Alter the shape of a tensor to a new shape. Reshape does not copy or change the content of the tensor; it only changes the tensor's logical dimensions for subsequent operations.
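Because only the logical dimensions change, the new shape must describe the same number of elements as the old one. A minimal plain-JavaScript sketch of that constraint (illustrative only, not the WebNN API; `reshapeDims` is a hypothetical helper):

```js
// Sketch: reshape only recomputes the logical dimensions. The element count
// must be preserved; the underlying data buffer is untouched.
function reshapeDims(oldShape, newShape) {
  const count = shape => shape.reduce((a, b) => a * b, 1);
  if (count(oldShape) !== count(newShape)) {
    throw new Error("reshape must preserve the number of elements");
  }
  return newShape;
}
```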
<script type=idl>
partial interface ModelBuilder {
Operand reshape(Operand input, sequence<long> newShape);
@@ -1307,7 +1289,7 @@ partial interface ModelBuilder {
</div>

### squeeze ### {#api-modelbuilder-squeeze}
Reduce the rank of a tensor without affecting its values by eliminating dimensions with size 1 of the tensor shape.
Reduce the rank of a tensor by eliminating dimensions of size 1 from the tensor shape. Squeeze only affects the tensor's logical dimensions; it does not copy or change the content of the tensor.
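The shape transformation can be sketched in plain JavaScript as follows. This is illustrative only, not the WebNN API; `squeezeShape` is a hypothetical helper, and `axes` mirrors the optional member of {{SqueezeOptions}}:

```js
// Sketch: squeeze drops size-1 dimensions from the logical shape; the data
// is not copied. When `axes` is given, only those positions are considered.
function squeezeShape(shape, axes) {
  return shape.filter((dim, i) =>
    axes ? !(axes.includes(i) && dim === 1) : dim !== 1
  );
}
```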
<script type=idl>
dictionary SqueezeOptions {
sequence<long> axes;