
[Feature Proposal] Edge Group Apply API #332

Closed
VoVAllen opened this issue Jan 1, 2019 · 8 comments

@VoVAllen
Collaborator

VoVAllen commented Jan 1, 2019

🚀 Feature

Add edge group apply API

Motivation

Currently we don't support group-applying over the out-edges of each node. (In-edges can be handled with a reduce function, but that is not convenient enough.)

Alternatives

There is no alternative for group-applying over out-edges.

Proposal

g.group_apply_edges(func=edge_func, field='feat', direction='in')
edge_func is a function that takes edges with shape (batch_size, num_edges, edge_feature_shape) and returns a dictionary mapping new field names to values.
Alternatively, we could use built-in functions here, such as fn.sparse_softmax(in='feat', out='normalized_feat')

This is just a proposal without careful thought. Any comments and suggestions are welcome!
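To make the proposed semantics concrete, here is a minimal sketch in plain NumPy (not DGL; the function name and arguments are illustrative stand-ins for the proposal above): it groups edges by destination node and applies a per-group function, such as a softmax that normalizes each node's in-edge scores.

```python
import numpy as np

def group_apply_edges(dst, edge_feat, edge_func):
    """Group edges by destination node and apply edge_func to each group.

    dst: (num_edges,) destination node id of each edge.
    edge_feat: (num_edges, feat_dim) edge features.
    edge_func: maps a (group_size, feat_dim) array to an array of the same
        shape; it sees all in-edges of one node at once.
    """
    out = np.empty_like(edge_feat)
    for node in np.unique(dst):
        idx = np.nonzero(dst == node)[0]
        out[idx] = edge_func(edge_feat[idx])
    return out

def softmax(x):
    # Numerically stable softmax over the group (edge) axis.
    e = np.exp(x - x.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

# Three edges into node 0 and one edge into node 1.
dst = np.array([0, 0, 0, 1])
feat = np.array([[1.0], [2.0], [3.0], [5.0]])
normalized = group_apply_edges(dst, feat, softmax)
# Within each destination's group, the normalized scores sum to 1.
```

The per-node Python loop is only for clarity; the whole point of the proposal is that a real kernel (or degree bucketing) would batch these groups.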

@yzh119
Member

yzh119 commented Jan 1, 2019

What we actually need is a function that takes src+edge or dst+edge features as input and returns a set of features on the edges.
It seems OK to view this as a kind of apply_edges function, and as you said, we may specify a direction when declaring the function.

@VoVAllen
Collaborator Author

VoVAllen commented Jan 2, 2019

This seems like a case for apply_edges rather than group_apply? In my view, using src and dst to update the edge attributes should be apply_edges(fn.src_mul_edge(src='h', edge='feat')), followed by group_apply(fn.sparse_softmax(...)) to normalize them.

Do you think consolidating these two APIs into one apply_edges would be better?

@VoVAllen
Collaborator Author

VoVAllen commented Jan 2, 2019

@jermainewang @ylfdq1118
Could you give any suggestions or some guidance on how to implement this? For example, which files might I need to modify or look into? Thanks a lot!

@yzh119
Member

yzh119 commented Jan 2, 2019

I've implemented a kernel for sparse_softmax, but I'm actually more interested in the general form of group_apply; I don't think it's a good idea to write a kernel for each case.

@jermainewang
Member

jermainewang commented Jan 2, 2019

edge_func is a function takes edges with shape (num_edges, edges_feature_shape) and return a dictionary with new field name and value.

Shouldn't it be (batch_size, num_in_edges, feature_shape)? The batch_size is the number of vertices that have the same in-degree. It is similar to the reduce function but returns edge data.
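The degree-bucketing view described above can be sketched in plain NumPy (the bucket layout is illustrative, not DGL's actual scheduler): nodes with the same in-degree are batched together so an edge function can see dense tensors of shape (batch_size, num_in_edges, feature_shape).

```python
import numpy as np

def bucket_by_in_degree(dst, edge_feat, num_nodes):
    """Group nodes by in-degree so that an edge function sees dense
    (batch_size, num_in_edges, feat_dim) tensors, one tensor per degree."""
    degrees = np.bincount(dst, minlength=num_nodes)
    buckets = {}
    for deg in np.unique(degrees):
        if deg == 0:
            continue  # nodes with no in-edges have nothing to apply
        nodes = np.nonzero(degrees == deg)[0]  # batch of same-degree nodes
        # Stack each node's in-edge features: (batch_size, deg, feat_dim).
        buckets[int(deg)] = np.stack([edge_feat[dst == n] for n in nodes])
    return buckets

dst = np.array([0, 0, 1, 1, 2])  # nodes 0 and 1 have in-degree 2; node 2 has 1
feat = np.arange(5, dtype=float).reshape(5, 1)
buckets = bucket_by_in_degree(dst, feat, num_nodes=3)
# buckets[2].shape == (2, 2, 1); buckets[1].shape == (1, 1, 1)
```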

@lingfanyu
Collaborator

I am thinking about having group_apply as a DGLGraph API and then having some builtin functions (similar to send and recv), like fn.softmax, which bind to @yzh119 's sparse_softmax kernel (via the scheduler). For cases where no builtin is available, we do degree bucketing again.

As to whether we should merge apply_edge and group_apply into one, I think it's OK to merge, but there is a little bit of subtlety here. My understanding is that for apply_edge, edges are independent of each other (more parallelism), whereas in the group_apply case, the groups are independent of each other and we may have to bucket by degree again. But we can still merge them and leave the burden to the scheduler.
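To illustrate why a builtin softmax can skip degree bucketing entirely, here is a hedged NumPy sketch (not the actual sparse_softmax kernel) of a segment-style softmax computed over all edges at once with scatter-style reductions:

```python
import numpy as np

def segment_softmax(dst, scores, num_nodes):
    """Softmax over each destination node's in-edges, with no degree
    bucketing: every step is a reduction over the flat edge array."""
    # Per-node max for numerical stability (scatter-max over edges).
    node_max = np.full(num_nodes, -np.inf)
    np.maximum.at(node_max, dst, scores)
    exp = np.exp(scores - node_max[dst])
    # Per-node sum of exponentials (scatter-add over edges).
    node_sum = np.zeros(num_nodes)
    np.add.at(node_sum, dst, exp)
    return exp / node_sum[dst]

dst = np.array([0, 0, 1])          # two edges into node 0, one into node 1
scores = np.array([0.0, 0.0, 3.0])
w = segment_softmax(dst, scores, num_nodes=2)
# w -> [0.5, 0.5, 1.0]
```

Because it needs only segment max and segment sum, this form maps naturally onto a single kernel regardless of the degree distribution.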

@VoVAllen
Collaborator Author

VoVAllen commented Jan 3, 2019

edge_func is a function takes edges with shape (num_edges, edges_feature_shape) and return a dictionary with new field name and value.

Shouldn't it be (batch_size, num_in_edges, feature_shape)? The batch_size is the number of vertices that have the same in-degree. It is similar to the reduce function but returns edge data.

Thanks! I've corrected it in my proposal.

@jermainewang jermainewang added this to the v0.2 milestone Jan 4, 2019
@lingfanyu
Collaborator

Done in #358
