
ARROW-2917: [Python] Use detach() to avoid PyTorch gradient errors #2311

Closed
wants to merge 2 commits

Conversation

alok
Contributor

@alok alok commented Jul 23, 2018

`detach()` doesn't copy data unless it has to and will give a RuntimeError if the detached data needs to have its gradient calculated.
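For context, a minimal PyTorch-only sketch of the behavior the patch works around (assuming a reasonably recent PyTorch; the exact error message may vary by version):

```python
import torch

# A tensor tracked by autograd cannot be converted to NumPy directly:
# PyTorch raises a RuntimeError to protect the gradient graph.
t = torch.ones(3, requires_grad=True)
try:
    t.numpy()
except RuntimeError as e:
    print("numpy() failed:", e)

# detach() returns a view that shares the same storage (no copy)
# but is cut off from the autograd graph, so conversion succeeds.
d = t.detach()
assert d.data_ptr() == t.data_ptr()  # same underlying memory, no copy
print(d.numpy())
```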
@kou
Member

kou commented Jul 24, 2018

Thanks for your contribution.
Can you open a JIRA ticket and prepend ARROW-XXX: to the title?
https://issues.apache.org/jira/browse/ARROW

https://github.com/apache/arrow/blob/master/.github/CONTRIBUTING.md#how-to-contribute-patches

To contribute a patch:

  1. Break your work into small, single-purpose patches if possible. It’s much harder to merge in a large change with a lot of disjoint features.
  2. Create a JIRA for your patch on the Arrow Project JIRA.
  3. Submit the patch as a GitHub pull request against the master branch. For a tutorial, see the GitHub guides on forking a repo and sending a pull request. Prefix your pull request name with the JIRA name (ex: ARROW-423: Define BUILD_BYPRODUCTS for CMake 3.2+ #240).
  4. Make sure that your code passes the unit tests. You can find instructions how to run the unit tests for each Arrow component in its respective README file.
  5. Add new unit tests for your code.

@codecov-io

Codecov Report

Merging #2311 into master will increase coverage by 2.38%.
The diff coverage is 0%.


@@            Coverage Diff             @@
##           master    #2311      +/-   ##
==========================================
+ Coverage   84.38%   86.76%   +2.38%     
==========================================
  Files         293      237      -56     
  Lines       44786    42045    -2741     
==========================================
- Hits        37791    36482    -1309     
+ Misses       6964     5563    -1401     
+ Partials       31        0      -31
Impacted Files                    Coverage Δ
python/pyarrow/serialization.py   81.39% <0%> (ø) ⬆️

56 files are no longer tracked in this report, including go/arrow/array/numeric.gen.go, go/arrow/array/numericbuilder.gen.go, go/arrow/type_traits_boolean.go, go/arrow/memory/memory_sse4_amd64.go, go/arrow/math/int64.go, rust/src/memory_pool.rs, go/arrow/datatype_binary.go, go/arrow/array/booleanbuilder.go, go/arrow/math/float64_sse4_amd64.go, ... and 47 more.

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update eaa6053...f8e298f. Read the comment docs.

@alok
Contributor Author

alok commented Jul 24, 2018

@kou I'd rather not make a JIRA account for a one-word change. Would you mind adding the patch? It does pass the tests.

@wesm
Member

wesm commented Jul 24, 2018

Is there any way to trigger the problem that this fixes? Having an ASF JIRA account is a good idea; at some point you may find yourself contributing a larger patch to an Apache project.

@wesm wesm changed the title Use detach() to avoid torch gradient errors [Python] Use detach() to avoid PyTorch gradient errors Jul 24, 2018
@alok
Contributor Author

alok commented Jul 26, 2018

@wesm Here's a script that triggers it for me: https://gist.github.com/c683d79cbc4b5c51282df983d81856ae

@alok alok changed the title [Python] Use detach() to avoid PyTorch gradient errors ARROW-2917: [Python] Use detach() to avoid PyTorch gradient errors Jul 26, 2018
@alok
Contributor Author

alok commented Jul 26, 2018

@wesm @kou Created JIRA issue and updated title.

…s on master

Change-Id: I48a985a9661949f852d8d7951564b742682b8c2b
@wesm
Member

wesm commented Jul 27, 2018

I added a unit test; all you had to do was try serializing a tensor with requires_grad=True.
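A sketch of what such a test might look like (the helper name `_tensor_to_numpy` is hypothetical, mirroring the patch's approach rather than Arrow's actual serialization handler):

```python
import numpy as np
import torch

def _tensor_to_numpy(t):
    # Hypothetical stand-in for Arrow's torch serialization handler:
    # detach() first so tensors with requires_grad=True can be converted.
    return t.detach().numpy()

def test_serialize_tensor_with_grad():
    t = torch.arange(4.0, requires_grad=True)
    arr = _tensor_to_numpy(t)  # would raise RuntimeError without detach()
    assert np.array_equal(arr, np.arange(4.0))

test_serialize_tensor_with_grad()
```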

Member

@wesm wesm left a comment

+1

@wesm wesm closed this in fdc8e6a Jul 27, 2018
@alok
Contributor Author

alok commented Jul 27, 2018

@wesm Thanks for adding the test that I really should've.
