How to support graph batch-norm? #10

Closed

GaiYu0 opened this issue Jun 19, 2018 · 2 comments

GaiYu0 (Collaborator) commented Jun 19, 2018

With the current API, we can implement batch norm by adding a shadow node and pulling/pushing globally. However, this may not be very user-friendly. An ideal syntax would be:

batch_norm(node_reprs['x'])
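For illustration, here is a minimal sketch of what `batch_norm(node_reprs['x'])` could reduce to, using plain PyTorch rather than the proposed API. `node_reprs` here is just a stand-in dict mirroring the syntax above; the only assumption is that node representations are stored as an (N, D) tensor with one row per node in the batched graph, so a standard `BatchNorm1d` normalizes each feature channel across all N nodes.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the node representation container in the issue.
num_nodes, feat_dim = 32, 16
node_reprs = {'x': torch.randn(num_nodes, feat_dim)}

# Normalize each feature channel over all nodes in the batched graph,
# i.e. the effect the proposed batch_norm(node_reprs['x']) would have.
bn = nn.BatchNorm1d(feat_dim)
node_reprs['x'] = bn(node_reprs['x'])
```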

jermainewang (Member) commented Jun 19, 2018 via email

GaiYu0 (Collaborator, Author) commented Jun 19, 2018

Yes, we can always bypass our node-centric API to do batch-norm.
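A minimal sketch of what "bypassing the node-centric API" can look like, assuming a DGL version where node features are exposed through `g.ndata`: read the whole node feature tensor out of the graph, apply a standard `BatchNorm1d`, and write it back, with no shadow node or pull/push involved. The graph and feature names below are made up for the example.

```python
import dgl
import torch
import torch.nn as nn

# Toy graph with a hypothetical node feature 'x'.
g = dgl.graph(([0, 1, 2], [1, 2, 0]))
g.ndata['x'] = torch.randn(g.num_nodes(), 16)

# Bypass message passing entirely: batch-norm the (N, D) feature tensor
# directly and store the result back on the graph.
bn = nn.BatchNorm1d(16)
g.ndata['x'] = bn(g.ndata['x'])
```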

mufeili added a commit that referenced this issue Jan 25, 2022
Qksidmx referenced this issue in Qksidmx/dgl Apr 25, 2022
GMNGeoffrey added a commit to GMNGeoffrey/dgl that referenced this issue Jan 29, 2025
These changes are made on the original DGL source code. Applying them and then running `script/hipify-inplace.sh` yields the hipified version of DGL, which is identical to the code currently in nod-ai#9. The version in this commit should still run with CUDA.

This obviously shouldn't be merged into the same branch as PRs dmlc#1 through dmlc#9. The idea is that this would be the PR we would need for upstream (although I'm guessing they would actually want it in smaller chunks).