diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-offsets-15.rst b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-offsets-15.rst
new file mode 100644
index 00000000000000..9f0392c8e2d038
--- /dev/null
+++ b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-offsets-15.rst
@@ -0,0 +1,184 @@
+.. {#openvino_docs_ops_sparse_EmbeddingBagOffsets_15}
+
+EmbeddingBagOffsets
+======================
+
+
+.. meta::
+ :description: Learn about EmbeddingBagOffsets-15 - a sparse operation, which
+ can be performed on three required and two optional input tensors.
+
+**Versioned name**: *EmbeddingBagOffsets-15*
+
+**Category**: *Sparse*
+
+**Short description**: Computes sums or means of "bags" of embeddings, without instantiating the intermediate embeddings.
+
+**Detailed description**:
+
+Operation EmbeddingBagOffsets is an implementation of ``torch.nn.EmbeddingBag`` with the ``indices`` and ``offsets`` inputs being 1D tensors.
+
+For each index in ``indices`` this operator gathers values from the ``emb_table`` embedding table. Then the values belonging to the same "bag" (based on the ``offsets`` input) are reduced according to the ``reduction`` attribute.
+
+Values in ``offsets`` define the starting index in the ``indices`` tensor of each "bag",
+e.g. ``offsets`` with value ``[0, 3, 4, 4, 6]`` defines 5 "bags" containing ``[3, 1, 0, 2, num_indices-6]`` elements, corresponding to the ``[indices[0:3], indices[3:4], empty_bag, indices[4:6], indices[6:]]`` slices of ``indices`` per bag.
+
+EmbeddingBagOffsets is equivalent to the following NumPy snippet:
+
+.. code-block:: py
+
+    from typing import Literal, Optional
+
+    import numpy as np
+
+    def embedding_bag_offsets(
+        emb_table: np.ndarray,
+        indices: np.ndarray,
+        offsets: np.ndarray,
+        default_index: Optional[int] = None,
+        per_sample_weights: Optional[np.ndarray] = None,
+        reduction: Literal["sum", "mean"] = "sum",
+    ):
+        assert (
+            reduction == "sum" or per_sample_weights is None
+        ), "Attribute per_sample_weights is only supported in sum reduction."
+        if per_sample_weights is None:
+            per_sample_weights = np.ones_like(indices)
+        # Gather and weight the embedding of every index.
+        embeddings = []
+        for emb_idx, emb_weight in zip(indices, per_sample_weights):
+            embeddings.append(emb_table[emb_idx] * emb_weight)
+        # Reduce each bag delimited by consecutive offsets.
+        previous_offset = offsets[0]
+        bags = []
+        offsets = np.append(offsets, len(indices))
+        for bag_offset in offsets[1:]:
+            bag_size = bag_offset - previous_offset
+            if bag_size != 0:
+                embedding_bag = embeddings[previous_offset:bag_offset]
+                reduced_bag = np.add.reduce(embedding_bag)
+                if reduction == "mean":
+                    reduced_bag = reduced_bag / bag_size
+                bags.append(reduced_bag)
+            else:
+                # Empty bag case
+                if default_index is not None and default_index != -1:
+                    bags.append(emb_table[default_index])
+                else:
+                    bags.append(np.zeros(emb_table.shape[1:]))
+            previous_offset = bag_offset
+        return np.stack(bags, axis=0)
+
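+For illustration, the snippet above can be run with hypothetical toy values (these values are not taken from the IR examples below):
+
+.. code-block:: py
+
+    emb_table = np.array([[1., 2.], [3., 4.], [5., 6.]])
+    indices = np.array([0, 2, 1, 2])
+    offsets = np.array([0, 2, 2])    # bags: indices[0:2], an empty bag, indices[2:4]
+
+    embedding_bag_offsets(emb_table, indices, offsets, default_index=1)
+    # -> [[6., 8.], [3., 4.], [8., 10.]]  (empty bag filled with emb_table[1])
+    embedding_bag_offsets(emb_table, indices, offsets, reduction="mean")
+    # -> [[3., 4.], [0., 0.], [4., 5.]]   (empty bag filled with zeros)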
+
+**Attributes**:
+
+* *reduction*
+
+ * **Description**: reduction mode.
+ * **Range of values**:
+
+   * sum - compute the weighted sum, using the corresponding values of ``per_sample_weights`` as weights, if provided.
+   * mean - compute the average of the values in a bag. The ``per_sample_weights`` input is not supported in this mode and raises an exception.
+
+ * **Type**: ``string``
+ * **Default value**: sum
+ * **Required**: *no*
+
+**Inputs**:
+
+* **1**: ``emb_table`` tensor containing the embedding lookup table of the module of shape ``[num_emb, emb_dim1, emb_dim2, ...]`` and of type *T*. **Required.**
+* **2**: ``indices`` tensor of shape ``[num_indices]`` and of type *T_IND*. **Required.**
+* **3**: ``offsets`` tensor of shape ``[batch]`` and of type *T_IND* containing the starting index positions of each "bag" in ``indices``. The maximum value in ``offsets`` must not exceed the length of ``indices``. **Required.**
+* **4**: ``default_index`` scalar of type *T_IND* containing the default index in the embedding table used to fill empty "bags". If set to ``-1`` or not provided, empty "bags" are filled with zeros. Reverse indexing using negative values is not supported. **Optional.**
+* **5**: ``per_sample_weights`` tensor of the same shape as ``indices`` and of type *T*. Supported only when the *reduction* attribute is set to ``sum``. Each value in this tensor is multiplied with the value gathered from the embedding table for the corresponding index. If not provided, a tensor of ones is used. **Optional.**
+
+**Outputs**:
+
+* **1**: tensor of shape ``[batch, emb_dim1, emb_dim2, ...]`` and of type *T* containing embeddings for each bag.
+
+**Types**
+
+* *T*: any numeric type.
+* *T_IND*: ``int32`` or ``int64``.
+
+**Example**
+
+*Example 1: per_sample_weights are provided, default_index is set to 0 to fill empty bags with the values gathered from ``emb_table`` at that index.*
+
+.. code-block:: xml
+
+    <layer ... type="EmbeddingBagOffsets" ... >
+        <data reduction="sum"/>
+        <input>
+            <port id="0">     <!-- emb_table -->
+                <dim>5</dim>
+                <dim>2</dim>
+            </port>
+            <port id="1">     <!-- indices -->
+                <dim>4</dim>
+            </port>
+            <port id="2">     <!-- offsets: 3 "bags" -->
+                <dim>3</dim>
+            </port>
+            <port id="3"/>    <!-- default_index scalar, set to 0 -->
+            <port id="4">     <!-- per_sample_weights -->
+                <dim>4</dim>
+            </port>
+        </input>
+        <output>
+            <port id="5">     <!-- output shape: [batch, emb_dim] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </output>
+    </layer>
+
+*Example 2: per_sample_weights are provided, default_index is set to -1 to fill empty bags with zeros.*
+
+.. code-block:: xml
+
+    <layer ... type="EmbeddingBagOffsets" ... >
+        <data reduction="sum"/>
+        <input>
+            <port id="0">     <!-- emb_table -->
+                <dim>5</dim>
+                <dim>2</dim>
+            </port>
+            <port id="1">     <!-- indices -->
+                <dim>4</dim>
+            </port>
+            <port id="2">     <!-- offsets: 3 "bags" -->
+                <dim>3</dim>
+            </port>
+            <port id="3"/>    <!-- default_index scalar, set to -1 -->
+            <port id="4">     <!-- per_sample_weights -->
+                <dim>4</dim>
+            </port>
+        </input>
+        <output>
+            <port id="5">     <!-- output shape: [batch, emb_dim] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </output>
+    </layer>
+
+*Example 3: reduction is set to mean.*
+
+.. code-block:: xml
+
+    <layer ... type="EmbeddingBagOffsets" ... >
+        <data reduction="mean"/>
+        <input>
+            <port id="0">     <!-- emb_table -->
+                <dim>5</dim>
+                <dim>2</dim>
+            </port>
+            <port id="1">     <!-- indices -->
+                <dim>4</dim>
+            </port>
+            <port id="2">     <!-- offsets: 3 "bags" -->
+                <dim>3</dim>
+            </port>
+        </input>
+        <output>
+            <port id="3">     <!-- output shape: [batch, emb_dim] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </output>
+    </layer>
+
diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-offsets-sum-3.rst b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-offsets-sum-3.rst
index 9e3bd9d678b7bf..c3eb163b16d98f 100644
--- a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-offsets-sum-3.rst
+++ b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-offsets-sum-3.rst
@@ -14,7 +14,48 @@ EmbeddingBagOffsetsSum
**Short description**: Computes sums of "bags" of embeddings, without instantiating the intermediate embeddings.
-**Detailed description**: This is the second case of the PyTorch `EmbeddingBag `__ , it has indices in two 1D tensors provided as 2nd and 3rd inputs. For each index in ``indices`` this operator gets values from ``data`` embedding table and sums all values belonging to each bag. Values in ``offsets`` define starting index in ``indices`` tensor of each "bag", e.g. ``offsets`` with value ``[0,3,4,4,6]`` define 5 "bags" containing ``[3,1,0,2,n-6]`` elements.
+**Detailed description**:
+
+Operation EmbeddingBagOffsetsSum is an implementation of ``torch.nn.EmbeddingBag`` in ``sum`` mode, with the ``indices`` and ``offsets`` inputs being 1D tensors.
+
+For each index in ``indices`` this operator gathers values from the ``emb_table`` embedding table. Then the values belonging to the same "bag" (based on the ``offsets`` input) are summed up.
+
+Values in ``offsets`` define the starting index in the ``indices`` tensor of each "bag",
+e.g. ``offsets`` with value ``[0, 3, 4, 4, 6]`` defines 5 "bags" containing ``[3, 1, 0, 2, num_indices-6]`` elements, corresponding to the ``[indices[0:3], indices[3:4], empty_bag, indices[4:6], indices[6:]]`` slices of ``indices`` per bag.
+
+EmbeddingBagOffsetsSum is equivalent to the following NumPy snippet:
+
+.. code-block:: py
+
+    from typing import Optional
+
+    import numpy as np
+
+    def embedding_bag_offsets(
+        emb_table: np.ndarray,
+        indices: np.ndarray,
+        offsets: np.ndarray,
+        default_index: Optional[int] = None,
+        per_sample_weights: Optional[np.ndarray] = None,
+    ):
+        if per_sample_weights is None:
+            per_sample_weights = np.ones_like(indices)
+        # Gather and weight the embedding of every index.
+        embeddings = []
+        for emb_idx, emb_weight in zip(indices, per_sample_weights):
+            embeddings.append(emb_table[emb_idx] * emb_weight)
+        # Sum up each bag delimited by consecutive offsets.
+        previous_offset = offsets[0]
+        bags = []
+        offsets = np.append(offsets, len(indices))
+        for bag_offset in offsets[1:]:
+            bag_size = bag_offset - previous_offset
+            if bag_size != 0:
+                embedding_bag = embeddings[previous_offset:bag_offset]
+                reduced_bag = np.add.reduce(embedding_bag)
+                bags.append(reduced_bag)
+            else:
+                # Empty bag case
+                if default_index is not None and default_index != -1:
+                    bags.append(emb_table[default_index])
+                else:
+                    bags.append(np.zeros(emb_table.shape[1:]))
+            previous_offset = bag_offset
+        return np.stack(bags, axis=0)
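+
+For illustration, the snippet above can be run with hypothetical toy values (these values are not taken from the IR examples below) to show how ``default_index`` fills an empty bag:
+
+.. code-block:: py
+
+    emb_table = np.array([[1., 2.], [3., 4.], [5., 6.]])
+    indices = np.array([0, 2])
+    offsets = np.array([0, 2])       # second bag is empty
+
+    embedding_bag_offsets(emb_table, indices, offsets)
+    # -> [[6., 8.], [0., 0.]]  (empty bag filled with zeros)
+    embedding_bag_offsets(emb_table, indices, offsets, default_index=1)
+    # -> [[6., 8.], [3., 4.]]  (empty bag filled with emb_table[1])
+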
**Attributes**: EmbeddingBagOffsetsSum operation has no attributes.
@@ -22,7 +63,7 @@ EmbeddingBagOffsetsSum
* **1**: ``emb_table`` tensor containing the embedding lookup table of the module of shape ``[num_emb, emb_dim1, emb_dim2, ...]`` and of type *T*. **Required.**
* **2**: ``indices`` tensor of shape ``[num_indices]`` and of type *T_IND*. **Required.**
-* **3**: ``offsets`` tensor of shape ``[batch]`` and of type *T_IND* containing the starting index positions of each "bag" in ``indices``. **Required.**
+* **3**: ``offsets`` tensor of shape ``[batch]`` and of type *T_IND* containing the starting index positions of each "bag" in ``indices``. The maximum value in ``offsets`` must not exceed the length of ``indices``. **Required.**
* **4**: ``default_index`` scalar of type *T_IND* containing default index in embedding table to fill empty "bags". If set to ``-1`` or not provided, empty "bags" are filled with zeros. Reverse indexing using negative values is not supported. **Optional.**
* **5**: ``per_sample_weights`` tensor of the same shape as ``indices`` and of type *T*. Each value in this tensor are multiplied with each value pooled from embedding table for each index. Optional, default is tensor of ones. **Optional.**
@@ -37,7 +78,9 @@ EmbeddingBagOffsetsSum
**Example**
-.. code-block:: cpp
+*Example 1: per_sample_weights are provided, default_index is set to 0 to fill empty bags with the values gathered from ``emb_table`` at that index.*
+
+.. code-block:: xml
@@ -52,7 +95,7 @@ EmbeddingBagOffsetsSum
3
-
+ 4
@@ -64,4 +107,31 @@ EmbeddingBagOffsetsSum
+*Example 2: per_sample_weights are provided, default_index is set to -1 to fill empty bags with zeros.*
+
+.. code-block:: xml
+
+    <layer ... type="EmbeddingBagOffsetsSum" ... >
+        <input>
+            <port id="0">     <!-- emb_table -->
+                <dim>5</dim>
+                <dim>2</dim>
+            </port>
+            <port id="1">     <!-- indices -->
+                <dim>4</dim>
+            </port>
+            <port id="2">     <!-- offsets: 3 "bags" -->
+                <dim>3</dim>
+            </port>
+            <port id="3"/>    <!-- default_index scalar, set to -1 -->
+            <port id="4">     <!-- per_sample_weights -->
+                <dim>4</dim>
+            </port>
+        </input>
+        <output>
+            <port id="5">     <!-- output shape: [batch, emb_dim] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </output>
+    </layer>
+
diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-packed-15.rst b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-packed-15.rst
new file mode 100644
index 00000000000000..2892d49759f667
--- /dev/null
+++ b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-packed-15.rst
@@ -0,0 +1,131 @@
+.. {#openvino_docs_ops_sparse_EmbeddingBagPacked_15}
+
+EmbeddingBagPacked
+=====================
+
+
+.. meta::
+ :description: Learn about EmbeddingBagPacked-15 - a sparse operation, which
+ can be performed on two required and one optional input tensor.
+
+**Versioned name**: *EmbeddingBagPacked-15*
+
+**Category**: *Sparse*
+
+**Short description**: Computes sums or means of "bags" of embeddings, without instantiating the intermediate embeddings.
+
+**Detailed description**:
+
+Operation EmbeddingBagPacked is an implementation of ``torch.nn.EmbeddingBag`` with the ``indices`` input being a 2D tensor of shape ``[batch, indices_per_bag]``.
+The operation is equivalent to *gather_op = Gather(emb_table, indices, axis=0)* followed by a reduction (see the NumPy sketch after the list):
+
+  * *sum* - *ReduceSum(Multiply(gather_op, Unsqueeze(per_sample_weights, -1)), axis=1)*,
+  * *mean* - *ReduceMean(gather_op, axis=1)*.
+
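+The following is a minimal NumPy sketch of this equivalence (illustrative only; the function name is hypothetical and a 2D ``emb_table`` of shape ``[num_emb, emb_dim]`` is assumed):
+
+.. code-block:: py
+
+    from typing import Optional
+
+    import numpy as np
+
+    def embedding_bag_packed(
+        emb_table: np.ndarray,                            # [num_emb, emb_dim]
+        indices: np.ndarray,                              # [batch, indices_per_bag]
+        per_sample_weights: Optional[np.ndarray] = None,  # [batch, indices_per_bag]
+        reduction: str = "sum",
+    ) -> np.ndarray:                                      # [batch, emb_dim]
+        gather_op = emb_table[indices]                    # Gather(emb_table, indices, axis=0)
+        if reduction == "mean":
+            assert per_sample_weights is None, "per_sample_weights is only supported in sum reduction."
+            return gather_op.mean(axis=1)                 # ReduceMean(gather_op, axis=1)
+        if per_sample_weights is not None:
+            gather_op = gather_op * per_sample_weights[..., np.newaxis]  # Multiply(..., Unsqueeze(..., -1))
+        return gather_op.sum(axis=1)                      # ReduceSum(..., axis=1)
+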
+**Attributes**:
+
+* *reduction*
+
+ * **Description**: reduction mode.
+ * **Range of values**:
+
+   * sum - compute the weighted sum, using the corresponding values of ``per_sample_weights`` as weights, if provided.
+   * mean - compute the average of the values in a bag. The ``per_sample_weights`` input is not supported in this mode and raises an exception.
+
+ * **Type**: ``string``
+ * **Default value**: sum
+ * **Required**: *no*
+
+**Inputs**:
+
+* **1**: ``emb_table`` tensor containing the embedding lookup table of the module of shape ``[num_emb, emb_dim1, emb_dim2, ...]`` and of type *T*. **Required.**
+* **2**: ``indices`` tensor of shape ``[batch, indices_per_bag]`` and of type *T_IND*. **Required.**
+* **3**: ``per_sample_weights`` tensor of the same shape as ``indices`` and of type *T*, supported only when *reduction* is set to ``sum``. Each value in this tensor is multiplied with the value gathered from the embedding table for the corresponding index. If not provided, a tensor of ones is used. **Optional.**
+
+**Outputs**:
+
+* **1**: tensor of shape ``[batch, emb_dim1, emb_dim2, ...]`` and of type *T* containing embeddings for each bag.
+
+**Types**
+
+* *T*: any numeric type.
+* *T_IND*: ``int32`` or ``int64``.
+
+**Example**
+
+*Example 1: reduction set to sum, per_sample_weights are not provided.*
+
+.. code-block:: xml
+
+    <layer ... type="EmbeddingBagPacked" ... >
+        <data reduction="sum"/>
+        <input>
+            <port id="0">     <!-- emb_table -->
+                <dim>5</dim>
+                <dim>2</dim>
+            </port>
+            <port id="1">     <!-- indices: [batch, indices_per_bag] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </input>
+        <output>
+            <port id="2">     <!-- output shape: [batch, emb_dim] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </output>
+    </layer>
+
+*Example 2: reduction set to sum and per_sample_weights are provided.*
+
+.. code-block:: xml
+
+    <layer ... type="EmbeddingBagPacked" ... >
+        <data reduction="sum"/>
+        <input>
+            <port id="0">     <!-- emb_table -->
+                <dim>5</dim>
+                <dim>2</dim>
+            </port>
+            <port id="1">     <!-- indices: [batch, indices_per_bag] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+            <port id="2">     <!-- per_sample_weights -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </input>
+        <output>
+            <port id="3">     <!-- output shape: [batch, emb_dim] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </output>
+    </layer>
+
+*Example 3: reduction set to mean, per_sample_weights are not provided.*
+
+.. code-block:: xml
+
+    <layer ... type="EmbeddingBagPacked" ... >
+        <data reduction="mean"/>
+        <input>
+            <port id="0">     <!-- emb_table -->
+                <dim>5</dim>
+                <dim>2</dim>
+            </port>
+            <port id="1">     <!-- indices: [batch, indices_per_bag] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </input>
+        <output>
+            <port id="2">     <!-- output shape: [batch, emb_dim] -->
+                <dim>3</dim>
+                <dim>2</dim>
+            </port>
+        </output>
+    </layer>
+
diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-packed-sum-3.rst b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-packed-sum-3.rst
index 9ef623ca7755eb..b6cad12be869ac 100644
--- a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-packed-sum-3.rst
+++ b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/sparse/embedding-bag-packed-sum-3.rst
@@ -14,7 +14,10 @@ EmbeddingBagPackedSum
**Short description**: Computes sums of "bags" of embeddings, without instantiating the intermediate embeddings.
-**Detailed description**: This is the first case of the PyTorch `EmbeddingBag `__ , it has indices in the tensor of format ``[batch, indices_per_bag]``. If 3rd input is not provided, this operation is equivalent to *Gather* followed by *ReduceSum(axis=0)*. However, *EmbeddingBagPackedSum* is much more time and memory efficient than using a chain of these operations.
+**Detailed description**:
+
+Operation EmbeddingBagPackedSum is an implementation of ``torch.nn.EmbeddingBag`` in ``sum`` mode, with the ``indices`` input being a 2D tensor of shape ``[batch, indices_per_bag]``.
+The operation is equivalent to *ReduceSum(Multiply(Gather(emb_table, indices, axis=0), Unsqueeze(per_sample_weights, -1)), axis=1)*.
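+
+A minimal NumPy sketch of this equivalence (illustrative only; the function name is hypothetical and a 2D ``emb_table`` is assumed):
+
+.. code-block:: py
+
+    import numpy as np
+
+    def embedding_bag_packed_sum(emb_table, indices, per_sample_weights=None):
+        gather_op = emb_table[indices]                   # Gather(emb_table, indices, axis=0)
+        if per_sample_weights is not None:
+            gather_op = gather_op * per_sample_weights[..., np.newaxis]  # Multiply(..., Unsqueeze(..., -1))
+        return gather_op.sum(axis=1)                     # ReduceSum(..., axis=1)
+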
**Attributes**: EmbeddingBagPackedSum operation has no attributes.
@@ -35,7 +38,7 @@ EmbeddingBagPackedSum
**Example**
-.. code-block:: cpp
+.. code-block:: xml
@@ -47,13 +50,13 @@ EmbeddingBagPackedSum
32
-
+ 32