Commit e735a29 (0 parents)
Showing 339 changed files with 452,289 additions and 0 deletions.
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 9ecdd5bdfa49902cafe721abdf6f5695
tags: 645f666f9bcd5a90fca523b33c5a78b7
Empty file.
320 changes: 320 additions & 0 deletions
_downloads/11a23e1bafc489334eaf020bf977a5bb/tensordict_shapes.ipynb
Large diffs are not rendered by default.
419 changes: 419 additions & 0 deletions
_downloads/15e3378cb4356a25ae780439b1e18899/tensorclass_imagenet.py
Large diffs are not rendered by default.
194 changes: 194 additions & 0 deletions
_downloads/245c7aad3c5052fda391900ac51962d9/tensordict_slicing.ipynb
@@ -0,0 +1,194 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n# Slicing, Indexing, and Masking\n**Author**: [Tom Begley](https://github.com/tcbegley)\n\nIn this tutorial you will learn how to slice, index, and mask a :class:`~.TensorDict`.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As discussed in the tutorial\n[Manipulating the shape of a TensorDict](./tensordict_shapes.html), when we create a\n:class:`~.TensorDict` we specify a ``batch_size``, which must agree\nwith the leading dimensions of all entries in the :class:`~.TensorDict`. Since we have\na guarantee that all entries share those dimensions in common, we are able to index\nand mask the batch dimensions in the same way that we would index a\n:class:`torch.Tensor`. The indices are applied along the batch dimensions to all of\nthe entries in the :class:`~.TensorDict`.\n\nFor example, given a :class:`~.TensorDict` with two batch dimensions,\n``tensordict[0]`` returns a new :class:`~.TensorDict` with the same structure, and\nwhose values correspond to the first \"row\" of each entry in the original\n:class:`~.TensorDict`.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import torch\nfrom tensordict import TensorDict\n\ntensordict = TensorDict(\n    {\"a\": torch.zeros(3, 4, 5), \"b\": torch.zeros(3, 4)}, batch_size=[3, 4]\n)\n\nprint(tensordict[0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
"The same syntax applies as for regular tensors. For example if we wanted to drop the\nfirst row of each entry we could index as follows\n\n" | ||
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "print(tensordict[1:])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can index multiple dimensions simultaneously\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "print(tensordict[:, 2:])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can also use ``Ellipsis`` to represent as many ``:`` as would be needed to make\nthe selection tuple the same length as ``tensordict.batch_dims``.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "print(tensordict[..., 2:])"
      ]
    },
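    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick check of how this indexing interacts with entry retrieval, we can compare the shapes produced by indexing before and after retrieving an entry (a small illustrative sketch using the ``tensordict`` defined above).\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# ``2:`` acts on the last batch dimension (size 4), so \"a\" comes out as (3, 2, 5)\nprint(tensordict[..., 2:][\"a\"].shape)\n# ``2:`` acts on the entry's own final dimension (size 5), giving (3, 4, 3)\nprint(tensordict[\"a\"][..., 2:].shape)"
      ]
    },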
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
".. note:\n\n Remember that all indexing is applied relative to the batch dimensions. In the\n above example there is a difference between ``tensordict[\"a\"][..., 2:]`` and\n ``tensordict[..., 2:][\"a\"]``. The first retrieves the three-dimensional tensor\n stored under the key ``\"a\"`` and applies the index ``2:`` to the final dimension.\n The second applies the index ``2:`` to the final *batch dimension*, which is the\n second dimension, before retrieving the result.\n\n## Setting Values with Indexing\nIn general, ``tensordict[index] = new_tensordict`` will work as long as the batch\nsizes are compatible.\n\n" | ||
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "tensordict = TensorDict(\n    {\"a\": torch.zeros(3, 4, 5), \"b\": torch.zeros(3, 4)}, batch_size=[3, 4]\n)\n\ntd2 = TensorDict({\"a\": torch.ones(2, 4, 5), \"b\": torch.ones(2, 4)}, batch_size=[2, 4])\ntensordict[:-1] = td2\nprint(tensordict[\"a\"], tensordict[\"b\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
"## Masking\nWe mask :class:`TensorDict` as we mask tensors.\n\n" | ||
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "mask = torch.BoolTensor([[1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 1, 0]])\ntensordict[mask]"
      ]
    },
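    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Boolean masking flattens the batch dimensions: the mask above selects six entries, so as a quick check the result should have a batch size of 6.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "print(tensordict[mask].batch_size)"
      ]
    },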
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
"## SubTensorDict\nWhen we index a :class:`~.TensorDict` with a contiguous index, we obtain a new\n:class:`~.TensorDict` whose values are all views on the values of the original\n:class:`~.TensorDict`. That means updates to the indexed :class:`~.TensorDict` are\napplied to the original also.\n\n" | ||
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "tensordict = TensorDict(\n    {\"a\": torch.zeros(3, 4, 5), \"b\": torch.zeros(3, 4)}, batch_size=[3, 4]\n)\ntd2 = tensordict[1:]\ntd2.fill_(\"b\", 1)\n\nassert (tensordict[\"b\"][1:] == 1).all()\nprint(tensordict[\"b\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
"This doesn't work however if we use a non-contiguous index\n\n" | ||
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "tensordict = TensorDict(\n    {\"a\": torch.zeros(3, 4, 5), \"b\": torch.zeros(3, 4)}, batch_size=[3, 4]\n)\ntd2 = tensordict[[0, 2]]\ntd2.fill_(\"b\", 1)\n\nassert (tensordict == 0).all()\nprint(tensordict[\"b\"])"
      ]
    },
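    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The indexed copy itself was modified, even though the original was left untouched.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "print(td2[\"b\"])"
      ]
    },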
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
"In case such functionality is needed, one can use\n:meth:`TensorDict.get_sub_tensordict <tensordict.TensorDict.get_sub_tensordict>`\ninstead. The :class:`~.SubTensorDict` holds a reference to the orgiinal\n:class:`~.TensorDict` so that updates to the sub-tensordict can be written back to the\nsource.\n\n" | ||
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "tensordict = TensorDict(\n    {\"a\": torch.zeros(3, 4, 5), \"b\": torch.zeros(3, 4)}, batch_size=[3, 4]\n)\ntd2 = tensordict.get_sub_tensordict(([0, 2],))\ntd2.fill_(\"b\", 1)\nprint(tensordict[\"b\"])"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.18"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
123 changes: 123 additions & 0 deletions
_downloads/2ee47d503f16d1bb0c7d67b4eecde9cc/tensordict_slicing.py
@@ -0,0 +1,123 @@
# -*- coding: utf-8 -*-
"""
Slicing, Indexing, and Masking
==============================
**Author**: `Tom Begley <https://github.com/tcbegley>`_

In this tutorial you will learn how to slice, index, and mask a :class:`~.TensorDict`.
"""

##############################################################################
# As discussed in the tutorial
# `Manipulating the shape of a TensorDict <./tensordict_shapes.html>`_, when we create a
# :class:`~.TensorDict` we specify a ``batch_size``, which must agree
# with the leading dimensions of all entries in the :class:`~.TensorDict`. Since we have
# a guarantee that all entries share those dimensions in common, we are able to index
# and mask the batch dimensions in the same way that we would index a
# :class:`torch.Tensor`. The indices are applied along the batch dimensions to all of
# the entries in the :class:`~.TensorDict`.
#
# For example, given a :class:`~.TensorDict` with two batch dimensions,
# ``tensordict[0]`` returns a new :class:`~.TensorDict` with the same structure, and
# whose values correspond to the first "row" of each entry in the original
# :class:`~.TensorDict`.

import torch
from tensordict import TensorDict

tensordict = TensorDict(
    {"a": torch.zeros(3, 4, 5), "b": torch.zeros(3, 4)}, batch_size=[3, 4]
)

print(tensordict[0])

##############################################################################
# The same syntax applies as for regular tensors. For example, if we wanted to drop
# the first row of each entry, we could index as follows:

print(tensordict[1:])

##############################################################################
# We can index multiple dimensions simultaneously

print(tensordict[:, 2:])

##############################################################################
# We can also use ``Ellipsis`` to represent as many ``:`` as would be needed to make
# the selection tuple the same length as ``tensordict.batch_dims``.

print(tensordict[..., 2:])

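##############################################################################
# As a quick check of how this indexing interacts with entry retrieval, we can
# compare the shapes produced by indexing before and after retrieving an entry
# (a small illustrative sketch using the ``tensordict`` defined above).

# ``2:`` acts on the last batch dimension (size 4), so "a" comes out as (3, 2, 5)
print(tensordict[..., 2:]["a"].shape)
# ``2:`` acts on the entry's own final dimension (size 5), giving (3, 4, 3)
print(tensordict["a"][..., 2:].shape)
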
##############################################################################
# .. note::
#
#    Remember that all indexing is applied relative to the batch dimensions. In the
#    above example there is a difference between ``tensordict["a"][..., 2:]`` and
#    ``tensordict[..., 2:]["a"]``. The first retrieves the three-dimensional tensor
#    stored under the key ``"a"`` and applies the index ``2:`` to the final dimension.
#    The second applies the index ``2:`` to the final *batch dimension*, which is the
#    second dimension, before retrieving the result.
#
# Setting Values with Indexing
# ----------------------------
# In general, ``tensordict[index] = new_tensordict`` will work as long as the batch
# sizes are compatible.

tensordict = TensorDict(
    {"a": torch.zeros(3, 4, 5), "b": torch.zeros(3, 4)}, batch_size=[3, 4]
)

td2 = TensorDict({"a": torch.ones(2, 4, 5), "b": torch.ones(2, 4)}, batch_size=[2, 4])
tensordict[:-1] = td2
print(tensordict["a"], tensordict["b"])

##############################################################################
# Masking
# -------
# We mask a :class:`TensorDict` just as we mask tensors.

mask = torch.BoolTensor([[1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 1, 0]])
tensordict[mask]
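# Boolean masking flattens the batch dimensions: the mask above selects six
# entries, so as a quick check the result should have a batch size of 6.
print(tensordict[mask].batch_size)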

##############################################################################
# SubTensorDict
# -------------
# When we index a :class:`~.TensorDict` with a contiguous index, we obtain a new
# :class:`~.TensorDict` whose values are all views on the values of the original
# :class:`~.TensorDict`. That means updates to the indexed :class:`~.TensorDict` are
# also applied to the original.

tensordict = TensorDict(
    {"a": torch.zeros(3, 4, 5), "b": torch.zeros(3, 4)}, batch_size=[3, 4]
)
td2 = tensordict[1:]
td2.fill_("b", 1)

assert (tensordict["b"][1:] == 1).all()
print(tensordict["b"])

##############################################################################
# However, this doesn't work if we use a non-contiguous index.

tensordict = TensorDict(
    {"a": torch.zeros(3, 4, 5), "b": torch.zeros(3, 4)}, batch_size=[3, 4]
)
td2 = tensordict[[0, 2]]
td2.fill_("b", 1)

assert (tensordict == 0).all()
print(tensordict["b"])
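# The indexed copy itself was modified, even though the original was left untouched.
print(td2["b"])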

##############################################################################
# If such functionality is needed, one can use
# :meth:`TensorDict.get_sub_tensordict <tensordict.TensorDict.get_sub_tensordict>`
# instead. The :class:`~.SubTensorDict` holds a reference to the original
# :class:`~.TensorDict` so that updates to the sub-tensordict can be written back to the
# source.

tensordict = TensorDict(
    {"a": torch.zeros(3, 4, 5), "b": torch.zeros(3, 4)}, batch_size=[3, 4]
)
td2 = tensordict.get_sub_tensordict(([0, 2],))
td2.fill_("b", 1)
print(tensordict["b"])
Binary file not shown.