bulk change to unify **Required.** and **Optional.** (#6503)
pelszkow authored Jul 15, 2021
1 parent ebb3c85 commit 20f10a4
Showing 109 changed files with 401 additions and 403 deletions.
2 changes: 1 addition & 1 deletion docs/ops/activation/Clamp_1.md
@@ -36,7 +36,7 @@ clamp( x_{i} )=\min\big( \max\left( x_{i}, min\_value \right), max\_value \big)

**Inputs**:

-* **1**: A tensor of type *T* and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**:

2 changes: 1 addition & 1 deletion docs/ops/activation/Elu_1.md
@@ -33,7 +33,7 @@ where α corresponds to *alpha* attribute.

**Inputs**:

-* **1**: A tensor of type *T* and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**:

2 changes: 1 addition & 1 deletion docs/ops/activation/Exp_1.md
@@ -18,7 +18,7 @@ exp(x) = e^{x}

**Inputs**

-* **1**: A tensor of type *T* and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**

2 changes: 1 addition & 1 deletion docs/ops/activation/GELU_2.md
@@ -27,7 +27,7 @@ Additionally, *Gelu* function may be approximated as follows:

**Inputs**:

-* **1**: A tensor of type *T* and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**:

2 changes: 1 addition & 1 deletion docs/ops/activation/GELU_7.md
@@ -45,7 +45,7 @@ For `tanh` approximation mode, *Gelu* function is represented as:

**Inputs**:

-* **1**: A tensor of type *T* and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**:

4 changes: 2 additions & 2 deletions docs/ops/activation/HSigmoid_5.md
@@ -7,7 +7,7 @@
**Short description**: HSigmoid takes one input tensor and produces output tensor where the hard version of sigmoid function is applied to the tensor elementwise.

**Detailed description**: For each element from the input tensor calculates corresponding
-element in the output tensor with the following formula:
+element in the output tensor with the following formula:

\f[
HSigmoid(x) = \frac{min(max(x + 3, 0), 6)}{6}
@@ -44,4 +44,4 @@ The HSigmoid operation is introduced in the following [article](https://arxiv.or
</port>
</output>
</layer>
-```
+```
4 changes: 2 additions & 2 deletions docs/ops/activation/HSwish_4.md
@@ -7,7 +7,7 @@
**Short description**: HSwish takes one input tensor and produces output tensor where the hard version of swish function is applied to the tensor elementwise.

**Detailed description**: For each element from the input tensor calculates corresponding
-element in the output tensor with the following formula:
+element in the output tensor with the following formula:

\f[
HSwish(x) = x \frac{min(max(x + 3, 0), 6)}{6}
@@ -19,7 +19,7 @@ The HSwish operation is introduced in the following [article](https://arxiv.org/

**Inputs**:

-* **1**: Multidimensional input tensor of type *T*. **Required**.
+* **1**: Multidimensional input tensor of type *T*. **Required.**

**Outputs**:

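Both hard-activation formulas shown in the two hunks above reduce to the same clipped linear term divided by 6. A minimal NumPy sketch, assuming the formulas exactly as written in HSigmoid_5.md and HSwish_4.md (NumPy is only a convenient stand-in here, not the operation's actual implementation):

```python
import numpy as np

def hsigmoid(x):
    # HSigmoid(x) = min(max(x + 3, 0), 6) / 6
    return np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0

def hswish(x):
    # HSwish(x) = x * min(max(x + 3, 0), 6) / 6
    return x * hsigmoid(x)

x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(hsigmoid(x))  # approximately [0, 0.333, 0.5, 0.667, 1]
print(hswish(x))    # approximately [0, -0.333, 0, 0.667, 4]
```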
2 changes: 1 addition & 1 deletion docs/ops/activation/LogSoftmax_5.md
@@ -24,7 +24,7 @@ LogSoftmax(x, axis) = t - Log(ReduceSum(Exp(t), axis))

**Inputs**:

-* **1**: Input tensor *x* of type *T* with enough number of dimension to be compatible with *axis* attribute. Required.
+* **1**: Input tensor *x* of type *T* with enough number of dimension to be compatible with *axis* attribute. **Required.**

**Outputs**:

2 changes: 1 addition & 1 deletion docs/ops/activation/Mish_4.md
@@ -20,7 +20,7 @@ Mish(x) = x\cdot\tanh\big(SoftPlus(x)\big) = x\cdot\tanh\big(\ln(1+e^{x})\big)

**Inputs**:

-* **1**: A tensor of type *T* and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**:

4 changes: 2 additions & 2 deletions docs/ops/activation/PReLU_1.md
@@ -32,8 +32,8 @@ PReLU(x) = \max(0, x) + \alpha\cdot\min(0, x)

**Inputs**

-* **1**: `data`. A tensor of type *T* and arbitrary shape. **Required**.
-* **2**: `slope`. A tensor of type *T* and rank greater or equal to 1. Tensor with negative slope values. **Required**.
+* **1**: `data`. A tensor of type *T* and arbitrary shape. **Required.**
+* **2**: `slope`. A tensor of type *T* and rank greater or equal to 1. Tensor with negative slope values. **Required.**
* **Note**: Channels dimension corresponds to the second dimension of `data` input tensor. If `slope` input rank is 1 and its dimension is equal to the second dimension of `data` input, then per channel broadcast is applied. Otherwise `slope` input is broadcasted with numpy rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md).

**Outputs**
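The per-channel broadcast note in the PReLU hunk above (a rank-1 `slope` whose length equals the second dimension of `data` is applied channel-wise) can be sketched in a few lines. This is an illustrative NumPy example only; the tensor shapes and values are made up:

```python
import numpy as np

def prelu(data, slope):
    # PReLU(x) = max(0, x) + slope * min(0, x)
    # A rank-1 slope matching the channel (second) dimension of data is
    # reshaped so it broadcasts per channel; otherwise normal numpy
    # broadcasting rules apply.
    if slope.ndim == 1 and slope.shape[0] == data.shape[1]:
        slope = slope.reshape((1, -1) + (1,) * (data.ndim - 2))
    return np.maximum(0, data) + slope * np.minimum(0, data)

data = np.random.randn(2, 3, 4, 4).astype(np.float32)  # [N, C, Y, X]
slope = np.array([0.1, 0.2, 0.3], dtype=np.float32)    # one negative-slope value per channel
print(prelu(data, slope).shape)  # (2, 3, 4, 4)
```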
6 changes: 3 additions & 3 deletions docs/ops/activation/ReLU_1.md
@@ -20,11 +20,11 @@ For each element from the input tensor calculates corresponding

**Inputs**:

-* **1**: Multidimensional input tensor *x* of any supported numeric type. Required.
+* **1**: Multidimensional input tensor *x* of any supported numeric type. **Required.**

**Outputs**:

-* **1**: Result of ReLU function applied to the input tensor *x*. Tensor with shape and type matching the input tensor. Required.
+* **1**: Result of ReLU function applied to the input tensor *x*. Tensor with shape and type matching the input tensor.

**Example**

@@ -44,4 +44,4 @@ For each element from the input tensor calculates corresponding
</output>
</layer>

-```
+```
2 changes: 1 addition & 1 deletion docs/ops/activation/Sigmoid_1.md
@@ -20,7 +20,7 @@ sigmoid( x ) = \frac{1}{1+e^{-x}}

**Inputs**:

-* **1**: Input tensor *x* of any floating point type. Required.
+* **1**: Input tensor *x* of any floating point type. **Required.**

**Outputs**:

4 changes: 2 additions & 2 deletions docs/ops/activation/SoftMax_1.md
@@ -27,7 +27,7 @@ where \f$C\f$ is a size of tensor along *axis* dimension.

**Inputs**:

-* **1**: Input tensor with enough number of dimension to be compatible with *axis* attribute. Required.
+* **1**: Input tensor with enough number of dimension to be compatible with *axis* attribute. **Required.**

**Outputs**:

@@ -41,4 +41,4 @@ where \f$C\f$ is a size of tensor along *axis* dimension.
<input> ... </input>
<output> ... </output>
</layer>
-```
+```
2 changes: 1 addition & 1 deletion docs/ops/activation/SoftPlus_4.md
@@ -35,7 +35,7 @@ For example, if *T* is `fp32`, `threshold` should be `20` or if *T* is `fp16`, `

**Inputs**:

-* **1**: A tensor of type *T* and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**:

4 changes: 2 additions & 2 deletions docs/ops/activation/Swish_4.md
@@ -22,9 +22,9 @@ where β corresponds to `beta` scalar input.

**Inputs**:

-* **1**: `data`. A tensor of type *T* and arbitrary shape. **Required**.
+* **1**: `data`. A tensor of type *T* and arbitrary shape. **Required.**

-* **2**: `beta`. A non-negative scalar value of type *T*. Multiplication parameter for the sigmoid. Default value 1.0 is used. **Optional**.
+* **2**: `beta`. A non-negative scalar value of type *T*. Multiplication parameter for the sigmoid. Default value 1.0 is used. **Optional.**

**Outputs**:

4 changes: 2 additions & 2 deletions docs/ops/arithmetic/FloorMod_1.md
@@ -29,8 +29,8 @@ o_{i} = a_{i} % b_{i}

**Inputs**

-* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type *T* and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**

4 changes: 2 additions & 2 deletions docs/ops/arithmetic/Maximum_1.md
@@ -29,8 +29,8 @@ o_{i} = max(a_{i}, b_{i})

**Inputs**

-* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type *T* and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**

4 changes: 2 additions & 2 deletions docs/ops/arithmetic/Minimum_1.md
@@ -27,8 +27,8 @@ o_{i} = min(a_{i}, b_{i})

**Inputs**

-* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type *T* and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**

4 changes: 2 additions & 2 deletions docs/ops/arithmetic/Mod_1.md
@@ -30,8 +30,8 @@ o_{i} = a_{i} % b_{i}

**Inputs**

-* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type *T* and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**

4 changes: 2 additions & 2 deletions docs/ops/arithmetic/Power_1.md
@@ -27,8 +27,8 @@ o_{i} = {a_{i} ^ b_{i}}

**Inputs**

-* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type *T* and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**

8 changes: 4 additions & 4 deletions docs/ops/arithmetic/Round_5.md
@@ -2,16 +2,16 @@

**Versioned name**: *Round-5*

-**Category**: Arithmetic unary operation
+**Category**: Arithmetic unary operation

**Short description**: *Round* performs element-wise round operation with given tensor.

**Detailed description**: Operation takes one input tensor and rounds the values, element-wise, meaning it finds the nearest integer for each value. In case of halves, the rule is to round them to the nearest even integer if `mode` attribute is `half_to_even` or rounding in such a way that the result heads away from zero if `mode` attribute is `half_away_from_zero`.

Input = [-4.5, -1.9, -1.5, 0.5, 0.9, 1.5, 2.3, 2.5]

round(Input, mode = `half_to_even`) = [-4.0, -2.0, -2.0, 0.0, 1.0, 2.0, 2.0, 2.0]

round(Input, mode = `half_away_from_zero`) = [-5.0, -2.0, -2.0, 1.0, 1.0, 2.0, 2.0, 3.0]

**Attributes**:
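The two rounding modes contrasted by the example values in the Round_5.md hunk above differ only in how halves are resolved. A minimal NumPy sketch that reproduces those numbers (NumPy is just a convenient stand-in here, not the operation's implementation):

```python
import numpy as np

x = np.array([-4.5, -1.9, -1.5, 0.5, 0.9, 1.5, 2.3, 2.5])

# half_to_even: np.round already rounds ties to the nearest even integer
half_to_even = np.round(x)
print(half_to_even)            # [-4. -2. -2.  0.  1.  2.  2.  2.]

# half_away_from_zero: push halves away from zero, then truncate toward zero
half_away_from_zero = np.sign(x) * np.floor(np.abs(x) + 0.5)
print(half_away_from_zero)     # [-5. -2. -2.  1.  1.  2.  2.  3.]
```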
4 changes: 2 additions & 2 deletions docs/ops/arithmetic/SquaredDifference_1.md
@@ -27,8 +27,8 @@ o_{i} = (a_{i} - b_{i})^2

**Inputs**

-* **1**: A tensor of type *T* and arbitrary shape. Required.
-* **2**: A tensor of type *T* and arbitrary shape. Required.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**
+* **2**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**

8 changes: 4 additions & 4 deletions docs/ops/condition/Bucketize_3.md
@@ -18,7 +18,7 @@ For example, if the first input tensor is `[[3, 50], [10, -1]]` and the second i
* **Range of values**: "i64" or "i32"
* **Type**: string
* **Default value**: "i64"
-* **Required**: *No*
+* **Required**: *no*

* *with_right_bound*

@@ -27,13 +27,13 @@ For example, if the first input tensor is `[[3, 50], [10, -1]]` and the second i
* true - bucket includes the right interval edge
* false - bucket includes the left interval edge
* **Type**: `boolean`
-* **Default value**: true
+* **Default value**: true
* **Required**: *no*

**Inputs**:

-* **1**: N-D tensor of *T* type with elements for the bucketization. Required.
-* **2**: 1-D tensor of *T_BOUNDARIES* type with sorted unique boundaries for buckets. Required.
+* **1**: N-D tensor of *T* type with elements for the bucketization. **Required.**
+* **2**: 1-D tensor of *T_BOUNDARIES* type with sorted unique boundaries for buckets. **Required.**

**Outputs**:

6 changes: 3 additions & 3 deletions docs/ops/condition/NonZero_3.md
@@ -21,11 +21,11 @@
* **Range of values**: `i64` or `i32`
* **Type**: string
* **Default value**: "i64"
-* **Required**: *No*
+* **Required**: *no*

**Inputs**:

-* **1**: A tensor of type *T* and arbitrary shape. **Required**.
+* **1**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**:

@@ -57,4 +57,4 @@
</port>
</output>
</layer>
-```
+```
6 changes: 3 additions & 3 deletions docs/ops/convolution/BinaryConvolution_1.md
@@ -53,7 +53,7 @@ Computation algorithm for mode *xnor-popcount*:
* *xnor-popcount*
* **Type**: `string`
* **Required**: *yes*
-* **Note**: value `0` in inputs is interpreted as `-1`, value `1` as `1`
+* **Note**: value `0` in inputs is interpreted as `-1`, value `1` as `1`

* *pad_value*

@@ -76,8 +76,8 @@ Computation algorithm for mode *xnor-popcount*:

**Inputs**:

-* **1**: Input tensor of type *T1* and rank 4. Layout is `[N, C_IN, Y, X]` (number of batches, number of channels, spatial axes Y, X). Required.
-* **2**: Kernel tensor of type *T2* and rank 4. Layout is `[C_OUT, C_IN, Y, X]` (number of output channels, number of input channels, spatial axes Y, X). Required.
+* **1**: Input tensor of type *T1* and rank 4. Layout is `[N, C_IN, Y, X]` (number of batches, number of channels, spatial axes Y, X). **Required.**
+* **2**: Kernel tensor of type *T2* and rank 4. Layout is `[C_OUT, C_IN, Y, X]` (number of output channels, number of input channels, spatial axes Y, X). **Required.**
* **Note**: Interpretation of tensor values is defined by *mode* attribute.

**Outputs**:
6 changes: 3 additions & 3 deletions docs/ops/convolution/ConvolutionBackpropData_1.md
@@ -90,11 +90,11 @@ else:

**Inputs**:

-* **1**: Input tensor of type *T1* and rank 3, 4 or 5. Layout is `[N, C_INPUT, Z, Y, X]` (number of batches, number of input channels, spatial axes Z, Y, X). *Required*.
+* **1**: Input tensor of type *T1* and rank 3, 4 or 5. Layout is `[N, C_INPUT, Z, Y, X]` (number of batches, number of input channels, spatial axes Z, Y, X). **Required.**

-* **2**: Convolution kernel tensor of type *T1* and rank 3, 4 or 5. Layout is `[C_INPUT, C_OUTPUT, Z, Y, X]` (number of input channels, number of output channels, spatial axes Z, Y, X). Spatial size of the kernel is derived from the shape of this input and aren't specified by any attribute. *Required*.
+* **2**: Convolution kernel tensor of type *T1* and rank 3, 4 or 5. Layout is `[C_INPUT, C_OUTPUT, Z, Y, X]` (number of input channels, number of output channels, spatial axes Z, Y, X). Spatial size of the kernel is derived from the shape of this input and aren't specified by any attribute. **Required.**

-* **3**: `output_shape` is 1D tensor of type *T2* that specifies spatial shape of the output. If specified, *padding amount* is deduced from relation of input and output spatial shapes according to formulas in the description. If not specified, *output shape* is calculated based on the `pads_begin` and `pads_end` or completely according to `auto_pad`. *Optional*.
+* **3**: `output_shape` is 1D tensor of type *T2* that specifies spatial shape of the output. If specified, *padding amount* is deduced from relation of input and output spatial shapes according to formulas in the description. If not specified, *output shape* is calculated based on the `pads_begin` and `pads_end` or completely according to `auto_pad`. **Optional.**
* **Note**: Type of the convolution (1D, 2D or 3D) is derived from the rank of the input tensors and not specified by any attribute:
* 1D convolution (input tensors rank 3) means that there is only one spatial axis X,
* 2D convolution (input tensors rank 4) means that there are two spatial axes Y, X,
4 changes: 2 additions & 2 deletions docs/ops/convolution/Convolution_1.md
@@ -77,8 +77,8 @@ The receptive field in each layer is calculated using the formulas:

**Inputs**:

-* **1**: Input tensor of type *T* and rank 3, 4 or 5. Layout is `[N, C_IN, Z, Y, X]` (number of batches, number of channels, spatial axes Z, Y, X). Required.
-* **2**: Kernel tensor of type *T* and rank 3, 4 or 5. Layout is `[C_OUT, C_IN, Z, Y, X]` (number of output channels, number of input channels, spatial axes Z, Y, X). Required.
+* **1**: Input tensor of type *T* and rank 3, 4 or 5. Layout is `[N, C_IN, Z, Y, X]` (number of batches, number of channels, spatial axes Z, Y, X). **Required.**
+* **2**: Kernel tensor of type *T* and rank 3, 4 or 5. Layout is `[C_OUT, C_IN, Z, Y, X]` (number of output channels, number of input channels, spatial axes Z, Y, X). **Required.**
* **Note**: Type of the convolution (1D, 2D or 3D) is derived from the rank of the input tensors and not specified by any attribute:
* 1D convolution (input tensors rank 3) means that there is only one spatial axis X
* 2D convolution (input tensors rank 4) means that there are two spatial axes Y, X
6 changes: 3 additions & 3 deletions docs/ops/convolution/DeformableConvolution_1.md
@@ -91,11 +91,11 @@ Where

**Inputs**:

-* **1**: Input tensor of type *T* and rank 4. Layout is `NCYX` (number of batches, number of channels, spatial axes Y and X). Required.
+* **1**: Input tensor of type *T* and rank 4. Layout is `NCYX` (number of batches, number of channels, spatial axes Y and X). **Required.**

-* **2**: Offsets tensor of type *T* and rank 4. Layout is `NCYX` (number of batches, *deformable_group* \* kernel_Y \* kernel_X \* 2, spatial axes Y and X). Required.
+* **2**: Offsets tensor of type *T* and rank 4. Layout is `NCYX` (number of batches, *deformable_group* \* kernel_Y \* kernel_X \* 2, spatial axes Y and X). **Required.**

-* **3**: Kernel tensor of type *T* and rank 4. Layout is `OIYX` (number of output channels, number of input channels, spatial axes Y and X). Required.
+* **3**: Kernel tensor of type *T* and rank 4. Layout is `OIYX` (number of output channels, number of input channels, spatial axes Y and X). **Required.**


**Outputs**:
6 changes: 3 additions & 3 deletions docs/ops/convolution/GroupConvolutionBackpropData_1.md
@@ -62,11 +62,11 @@

**Inputs**:

-* **1**: Input tensor of type `T1` and rank 3, 4 or 5. Layout is `[N, GROUPS * C_IN, Z, Y, X]` (number of batches, number of channels, spatial axes Z, Y, X). Required.
+* **1**: Input tensor of type `T1` and rank 3, 4 or 5. Layout is `[N, GROUPS * C_IN, Z, Y, X]` (number of batches, number of channels, spatial axes Z, Y, X). **Required.**

-* **2**: Kernel tensor of type `T1` and rank 4, 5 or 6. Layout is `[GROUPS, C_IN, C_OUT, X, Y, Z]` (number of groups, number of input channels, number of output channels, spatial axes X, Y, Z). Required.
+* **2**: Kernel tensor of type `T1` and rank 4, 5 or 6. Layout is `[GROUPS, C_IN, C_OUT, X, Y, Z]` (number of groups, number of input channels, number of output channels, spatial axes X, Y, Z). **Required.**

-* **3**: Output shape tensor of type `T2` and rank 1. It specifies spatial shape of the output. Optional.
+* **3**: Output shape tensor of type `T2` and rank 1. It specifies spatial shape of the output. **Optional.**
* **Note** Number of groups is derived from the shape of the kernel and not specified by any attribute.
* **Note**: Type of the convolution (1D, 2D or 3D) is derived from the rank of the input tensors and not specified by any attribute:
* 1D convolution (input tensors rank 3) means that there is only one spatial axis X
2 changes: 1 addition & 1 deletion docs/ops/convolution/GroupConvolution_1.md
@@ -55,7 +55,7 @@ Neural Networks](https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76

**Inputs**:

-* **1**: Input tensor of type *T* and rank 3, 4 or 5. Layout is `[N, GROUPS * C_IN, Z, Y, X]` (number of batches, number of channels, spatial axes Z, Y, X). Required.
+* **1**: Input tensor of type *T* and rank 3, 4 or 5. Layout is `[N, GROUPS * C_IN, Z, Y, X]` (number of batches, number of channels, spatial axes Z, Y, X). **Required.**
* **2**: Convolution kernel tensor of type *T* and rank 4, 5 or 6. Layout is `[GROUPS, C_OUT, C_IN, Z, Y, X]` (number of groups, number of output channels, number of input channels, spatial axes Z, Y, X),
* **Note** Number of groups is derived from the shape of the kernel and not specified by any attribute.
* **Note**: Type of the convolution (1D, 2D or 3D) is derived from the rank of the input tensors and not specified by any attribute: