
Implementing 3D dilation #408

Closed · wants to merge 1 commit

Conversation

@shatayu commented Apr 6, 2022

Summary:
Implemented 3D dilation for dilation values other than (1, 1, 1), as requested by this issue.

The Algorithm

The algorithm works by dilating the kernel (see below) and then, after the tensor has been unfolded with the dilated kernel, removing the extraneous points. Extraneous points are removed by calculating which rows/columns/etc. of each unfolded patch correspond to positions introduced by dilation (rather than to the original kernel) and dropping them.
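The unfold-then-filter step can be sketched in plain Python (a hypothetical 1D illustration, not the PR's actual implementation):

```python
# Hypothetical 1D illustration: unfold the input with the *dilated* kernel
# size, then keep only the positions inside each window that belong to the
# original kernel.
kernel_size, dilation = 3, 3
dilated_size = kernel_size + (kernel_size - 1) * (dilation - 1)  # 7

x = list(range(10))  # a 1D "tensor" of length 10
# Unfold: every contiguous window of the dilated kernel size.
windows = [x[i:i + dilated_size] for i in range(len(x) - dilated_size + 1)]
# The original kernel units sit every `dilation` steps in the dilated kernel.
keep = [i * dilation for i in range(kernel_size)]
patches = [[w[j] for j in keep] for w in windows]
print(patches[0])  # [0, 3, 6]
```

In 3D the same filtering is applied along each of the three spatial dimensions.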

Dilation

The dilated kernel size (for one dimension) is kernel_size + (kernel_size - 1) * (dilation - 1). This is best explained through visualization (in 1D). Suppose we have a kernel that is 3 units long and we wish to dilate it by 3. Let x represent the units originally in the kernel and o represent the units introduced via dilation.

xooxoox

There are kernel_size - 1 gaps in which to insert units from dilation, and we insert dilation - 1 units into each gap. This gives (kernel_size - 1) * (dilation - 1) units from dilation plus kernel_size original units, for a total of kernel_size + (kernel_size - 1) * (dilation - 1) units.
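The size formula and the visualization above can be checked with a small sketch (plain Python, hypothetical helper names):

```python
def dilated_kernel_size(kernel_size: int, dilation: int) -> int:
    # kernel_size original units plus (kernel_size - 1) gaps of
    # (dilation - 1) inserted units each.
    return kernel_size + (kernel_size - 1) * (dilation - 1)

def render(kernel_size: int, dilation: int) -> str:
    # Draw the dilated kernel: 'x' for original units, 'o' for inserted ones.
    # Original units sit every `dilation` steps.
    originals = {i * dilation for i in range(kernel_size)}
    size = dilated_kernel_size(kernel_size, dilation)
    return "".join("x" if i in originals else "o" for i in range(size))

print(dilated_kernel_size(3, 3))  # 7
print(render(3, 3))               # xooxoox
```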

Differential Revision: D35381703

@facebook-github-bot added the CLA Signed and fb-exported labels Apr 6, 2022
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D35381703

shatayu pushed a commit to shatayu/opacus that referenced this pull request Apr 6, 2022

@shatayu force-pushed the export-D35381703 branch from ac099e9 to 0bfe008 on April 6, 2022 17:43
@shatayu force-pushed the export-D35381703 branch from 0bfe008 to a29184c on April 6, 2022 17:47

@shatayu force-pushed the export-D35381703 branch from a29184c to 2612bd6 on April 6, 2022 19:00
@karthikprasad karthikprasad self-requested a review April 6, 2022 19:09
@karthikprasad karthikprasad added this to the 1.1.1 milestone Apr 6, 2022
@karthikprasad (Contributor) left a comment

This is fantastic! The PR summary is fabulous as well and made code review very easy; thanks a lot.

@shatayu force-pushed the export-D35381703 branch from 2612bd6 to 6ee4170 on April 6, 2022 20:10

@shatayu force-pushed the export-D35381703 branch from 6ee4170 to 50cabf3 on April 6, 2022 20:14

@shatayu force-pushed the export-D35381703 branch from 50cabf3 to 49ad93f on April 6, 2022 20:18

